Test Report: KVM_Linux_crio 17965

5e5f17cf679477cd200ce76c4e9747d73049443e:2024-01-16:32726

Failed tests (23/310)

Order  Failed test  Duration (s)
39 TestAddons/parallel/Ingress 156.08
53 TestAddons/StoppedEnableDisable 155.3
169 TestIngressAddonLegacy/serial/ValidateIngressAddons 166.94
217 TestMultiNode/serial/PingHostFrom2Pods 3.39
224 TestMultiNode/serial/RestartKeepsNodes 690.76
226 TestMultiNode/serial/StopMultiNode 143.73
233 TestPreload 200.07
292 TestStartStop/group/embed-certs/serial/Stop 140.2
294 TestStartStop/group/no-preload/serial/Stop 139.69
297 TestStartStop/group/old-k8s-version/serial/Stop 139.98
302 TestStartStop/group/default-k8s-diff-port/serial/Stop 139.91
303 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 12.42
304 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 12.42
307 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 12.42
309 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 12.42
311 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 543.56
312 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 543.54
313 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 543.53
314 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 543.48
315 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 372.96
316 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 503.35
317 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 308.45
318 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 205.35
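
For local triage, any of the failures above can usually be re-run in isolation from a minikube source checkout against the same driver and runtime used by this job. The sketch below is illustrative only; the Makefile target and flag names follow the minikube contributor docs and are assumptions rather than commands taken from this report, and the crio runtime flag would be appended to the start args in the same way as the driver.

	# Hypothetical re-run of one failed test (target and flag names are assumptions):
	env TEST_ARGS="-minikube-start-args=--driver=kvm2 -test.run TestAddons/parallel/Ingress" make integration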

TestAddons/parallel/Ingress (156.08s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-690916 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-690916 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-690916 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [ad3129e2-2f54-4da9-9249-7a6219249f7b] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [ad3129e2-2f54-4da9-9249-7a6219249f7b] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 12.004262525s
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-690916 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-690916 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m11.063676209s)

** stderr **
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:286: (dbg) Run:  kubectl --context addons-690916 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-690916 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.234
addons_test.go:306: (dbg) Run:  out/minikube-linux-amd64 -p addons-690916 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Done: out/minikube-linux-amd64 -p addons-690916 addons disable ingress-dns --alsologtostderr -v=1: (1.367277617s)
addons_test.go:311: (dbg) Run:  out/minikube-linux-amd64 -p addons-690916 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-amd64 -p addons-690916 addons disable ingress --alsologtostderr -v=1: (8.000377283s)
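
Before the post-mortem logs below, note that the "Process exited with status 28" in the stderr above is most likely curl's "operation timed out" exit code (28) propagated back through the ssh session, which together with the 2m11s runtime suggests the ingress-nginx controller never answered on 127.0.0.1:80 inside the VM rather than an SSH failure. A hedged manual check from the host could look like the following; the commands only reuse the profile and namespace already shown in this run, and the --max-time / -o wide options are generic curl/kubectl flags, not anything prescribed by the test:

	# Assumed manual reproduction steps (not part of the test output):
	out/minikube-linux-amd64 -p addons-690916 ssh "curl -sv --max-time 10 http://127.0.0.1/ -H 'Host: nginx.example.com'"
	kubectl --context addons-690916 -n ingress-nginx get pods -o wide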
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-690916 -n addons-690916
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-690916 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-690916 logs -n 25: (1.464236686s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-084153                                                                     | download-only-084153 | jenkins | v1.32.0 | 16 Jan 24 02:34 UTC | 16 Jan 24 02:34 UTC |
	| delete  | -p download-only-795878                                                                     | download-only-795878 | jenkins | v1.32.0 | 16 Jan 24 02:34 UTC | 16 Jan 24 02:34 UTC |
	| delete  | -p download-only-527490                                                                     | download-only-527490 | jenkins | v1.32.0 | 16 Jan 24 02:34 UTC | 16 Jan 24 02:34 UTC |
	| delete  | -p download-only-084153                                                                     | download-only-084153 | jenkins | v1.32.0 | 16 Jan 24 02:34 UTC | 16 Jan 24 02:34 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-681946 | jenkins | v1.32.0 | 16 Jan 24 02:34 UTC |                     |
	|         | binary-mirror-681946                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:35673                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-681946                                                                     | binary-mirror-681946 | jenkins | v1.32.0 | 16 Jan 24 02:34 UTC | 16 Jan 24 02:34 UTC |
	| addons  | enable dashboard -p                                                                         | addons-690916        | jenkins | v1.32.0 | 16 Jan 24 02:34 UTC |                     |
	|         | addons-690916                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-690916        | jenkins | v1.32.0 | 16 Jan 24 02:34 UTC |                     |
	|         | addons-690916                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-690916 --wait=true                                                                | addons-690916        | jenkins | v1.32.0 | 16 Jan 24 02:34 UTC | 16 Jan 24 02:37 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --driver=kvm2                                                                 |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                                   |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| ssh     | addons-690916 ssh cat                                                                       | addons-690916        | jenkins | v1.32.0 | 16 Jan 24 02:37 UTC | 16 Jan 24 02:37 UTC |
	|         | /opt/local-path-provisioner/pvc-e5d0b07b-ee14-47a2-bd87-8d60dd23d5f0_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-690916 addons disable                                                                | addons-690916        | jenkins | v1.32.0 | 16 Jan 24 02:37 UTC | 16 Jan 24 02:38 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-690916 addons disable                                                                | addons-690916        | jenkins | v1.32.0 | 16 Jan 24 02:37 UTC | 16 Jan 24 02:37 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ip      | addons-690916 ip                                                                            | addons-690916        | jenkins | v1.32.0 | 16 Jan 24 02:37 UTC | 16 Jan 24 02:37 UTC |
	| addons  | addons-690916 addons disable                                                                | addons-690916        | jenkins | v1.32.0 | 16 Jan 24 02:37 UTC | 16 Jan 24 02:37 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-690916 addons                                                                        | addons-690916        | jenkins | v1.32.0 | 16 Jan 24 02:37 UTC | 16 Jan 24 02:37 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-690916        | jenkins | v1.32.0 | 16 Jan 24 02:37 UTC | 16 Jan 24 02:37 UTC |
	|         | addons-690916                                                                               |                      |         |         |                     |                     |
	| ssh     | addons-690916 ssh curl -s                                                                   | addons-690916        | jenkins | v1.32.0 | 16 Jan 24 02:37 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-690916        | jenkins | v1.32.0 | 16 Jan 24 02:37 UTC | 16 Jan 24 02:37 UTC |
	|         | addons-690916                                                                               |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-690916        | jenkins | v1.32.0 | 16 Jan 24 02:38 UTC | 16 Jan 24 02:38 UTC |
	|         | -p addons-690916                                                                            |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-690916        | jenkins | v1.32.0 | 16 Jan 24 02:38 UTC | 16 Jan 24 02:38 UTC |
	|         | -p addons-690916                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-690916 addons                                                                        | addons-690916        | jenkins | v1.32.0 | 16 Jan 24 02:38 UTC | 16 Jan 24 02:38 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-690916 addons                                                                        | addons-690916        | jenkins | v1.32.0 | 16 Jan 24 02:38 UTC | 16 Jan 24 02:38 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-690916 ip                                                                            | addons-690916        | jenkins | v1.32.0 | 16 Jan 24 02:40 UTC | 16 Jan 24 02:40 UTC |
	| addons  | addons-690916 addons disable                                                                | addons-690916        | jenkins | v1.32.0 | 16 Jan 24 02:40 UTC | 16 Jan 24 02:40 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-690916 addons disable                                                                | addons-690916        | jenkins | v1.32.0 | 16 Jan 24 02:40 UTC | 16 Jan 24 02:40 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/16 02:34:34
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0116 02:34:34.516117  476192 out.go:296] Setting OutFile to fd 1 ...
	I0116 02:34:34.516250  476192 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 02:34:34.516259  476192 out.go:309] Setting ErrFile to fd 2...
	I0116 02:34:34.516264  476192 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 02:34:34.516483  476192 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17965-468241/.minikube/bin
	I0116 02:34:34.517122  476192 out.go:303] Setting JSON to false
	I0116 02:34:34.518021  476192 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":11827,"bootTime":1705360648,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1048-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0116 02:34:34.518092  476192 start.go:138] virtualization: kvm guest
	I0116 02:34:34.520622  476192 out.go:177] * [addons-690916] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0116 02:34:34.522285  476192 out.go:177]   - MINIKUBE_LOCATION=17965
	I0116 02:34:34.522335  476192 notify.go:220] Checking for updates...
	I0116 02:34:34.523831  476192 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0116 02:34:34.525677  476192 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17965-468241/kubeconfig
	I0116 02:34:34.527238  476192 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17965-468241/.minikube
	I0116 02:34:34.528688  476192 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0116 02:34:34.530101  476192 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0116 02:34:34.531911  476192 driver.go:392] Setting default libvirt URI to qemu:///system
	I0116 02:34:34.566844  476192 out.go:177] * Using the kvm2 driver based on user configuration
	I0116 02:34:34.568390  476192 start.go:298] selected driver: kvm2
	I0116 02:34:34.568412  476192 start.go:902] validating driver "kvm2" against <nil>
	I0116 02:34:34.568425  476192 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0116 02:34:34.569156  476192 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0116 02:34:34.569236  476192 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17965-468241/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0116 02:34:34.584795  476192 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0116 02:34:34.584893  476192 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0116 02:34:34.585129  476192 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0116 02:34:34.585182  476192 cni.go:84] Creating CNI manager for ""
	I0116 02:34:34.585194  476192 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 02:34:34.585207  476192 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0116 02:34:34.585215  476192 start_flags.go:321] config:
	{Name:addons-690916 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-690916 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 02:34:34.585343  476192 iso.go:125] acquiring lock: {Name:mk2f3231f0eeeb23816ac363851489181398d1c6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0116 02:34:34.587322  476192 out.go:177] * Starting control plane node addons-690916 in cluster addons-690916
	I0116 02:34:34.588701  476192 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0116 02:34:34.588762  476192 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17965-468241/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0116 02:34:34.588775  476192 cache.go:56] Caching tarball of preloaded images
	I0116 02:34:34.588885  476192 preload.go:174] Found /home/jenkins/minikube-integration/17965-468241/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0116 02:34:34.588897  476192 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0116 02:34:34.589267  476192 profile.go:148] Saving config to /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/addons-690916/config.json ...
	I0116 02:34:34.589294  476192 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/addons-690916/config.json: {Name:mkaf7a83dedf983be546127dfd35c1ae44cde4f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:34:34.589459  476192 start.go:365] acquiring machines lock for addons-690916: {Name:mk901e5fae8c1a578d989b520053c709bdbdcf06 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0116 02:34:34.589505  476192 start.go:369] acquired machines lock for "addons-690916" in 31.395µs
	I0116 02:34:34.589522  476192 start.go:93] Provisioning new machine with config: &{Name:addons-690916 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-690916 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0116 02:34:34.589583  476192 start.go:125] createHost starting for "" (driver="kvm2")
	I0116 02:34:34.591365  476192 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0116 02:34:34.591608  476192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 02:34:34.591675  476192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:34:34.606638  476192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33781
	I0116 02:34:34.607316  476192 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:34:34.608280  476192 main.go:141] libmachine: Using API Version  1
	I0116 02:34:34.608310  476192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:34:34.608751  476192 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:34:34.608951  476192 main.go:141] libmachine: (addons-690916) Calling .GetMachineName
	I0116 02:34:34.609120  476192 main.go:141] libmachine: (addons-690916) Calling .DriverName
	I0116 02:34:34.609359  476192 start.go:159] libmachine.API.Create for "addons-690916" (driver="kvm2")
	I0116 02:34:34.609403  476192 client.go:168] LocalClient.Create starting
	I0116 02:34:34.609456  476192 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem
	I0116 02:34:34.901655  476192 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/cert.pem
	I0116 02:34:35.025030  476192 main.go:141] libmachine: Running pre-create checks...
	I0116 02:34:35.025061  476192 main.go:141] libmachine: (addons-690916) Calling .PreCreateCheck
	I0116 02:34:35.025690  476192 main.go:141] libmachine: (addons-690916) Calling .GetConfigRaw
	I0116 02:34:35.026236  476192 main.go:141] libmachine: Creating machine...
	I0116 02:34:35.026254  476192 main.go:141] libmachine: (addons-690916) Calling .Create
	I0116 02:34:35.026460  476192 main.go:141] libmachine: (addons-690916) Creating KVM machine...
	I0116 02:34:35.027711  476192 main.go:141] libmachine: (addons-690916) DBG | found existing default KVM network
	I0116 02:34:35.028640  476192 main.go:141] libmachine: (addons-690916) DBG | I0116 02:34:35.028448  476214 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00010f370}
	I0116 02:34:35.034554  476192 main.go:141] libmachine: (addons-690916) DBG | trying to create private KVM network mk-addons-690916 192.168.39.0/24...
	I0116 02:34:35.106868  476192 main.go:141] libmachine: (addons-690916) Setting up store path in /home/jenkins/minikube-integration/17965-468241/.minikube/machines/addons-690916 ...
	I0116 02:34:35.106908  476192 main.go:141] libmachine: (addons-690916) Building disk image from file:///home/jenkins/minikube-integration/17965-468241/.minikube/cache/iso/amd64/minikube-v1.32.1-1703784139-17866-amd64.iso
	I0116 02:34:35.106921  476192 main.go:141] libmachine: (addons-690916) DBG | private KVM network mk-addons-690916 192.168.39.0/24 created
	I0116 02:34:35.106948  476192 main.go:141] libmachine: (addons-690916) DBG | I0116 02:34:35.106822  476214 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17965-468241/.minikube
	I0116 02:34:35.107056  476192 main.go:141] libmachine: (addons-690916) Downloading /home/jenkins/minikube-integration/17965-468241/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17965-468241/.minikube/cache/iso/amd64/minikube-v1.32.1-1703784139-17866-amd64.iso...
	I0116 02:34:35.344609  476192 main.go:141] libmachine: (addons-690916) DBG | I0116 02:34:35.344428  476214 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17965-468241/.minikube/machines/addons-690916/id_rsa...
	I0116 02:34:35.574578  476192 main.go:141] libmachine: (addons-690916) DBG | I0116 02:34:35.574379  476214 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17965-468241/.minikube/machines/addons-690916/addons-690916.rawdisk...
	I0116 02:34:35.574613  476192 main.go:141] libmachine: (addons-690916) DBG | Writing magic tar header
	I0116 02:34:35.574625  476192 main.go:141] libmachine: (addons-690916) DBG | Writing SSH key tar header
	I0116 02:34:35.574633  476192 main.go:141] libmachine: (addons-690916) DBG | I0116 02:34:35.574530  476214 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17965-468241/.minikube/machines/addons-690916 ...
	I0116 02:34:35.574726  476192 main.go:141] libmachine: (addons-690916) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17965-468241/.minikube/machines/addons-690916
	I0116 02:34:35.574772  476192 main.go:141] libmachine: (addons-690916) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17965-468241/.minikube/machines
	I0116 02:34:35.574789  476192 main.go:141] libmachine: (addons-690916) Setting executable bit set on /home/jenkins/minikube-integration/17965-468241/.minikube/machines/addons-690916 (perms=drwx------)
	I0116 02:34:35.574807  476192 main.go:141] libmachine: (addons-690916) Setting executable bit set on /home/jenkins/minikube-integration/17965-468241/.minikube/machines (perms=drwxr-xr-x)
	I0116 02:34:35.574822  476192 main.go:141] libmachine: (addons-690916) Setting executable bit set on /home/jenkins/minikube-integration/17965-468241/.minikube (perms=drwxr-xr-x)
	I0116 02:34:35.574839  476192 main.go:141] libmachine: (addons-690916) Setting executable bit set on /home/jenkins/minikube-integration/17965-468241 (perms=drwxrwxr-x)
	I0116 02:34:35.574855  476192 main.go:141] libmachine: (addons-690916) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0116 02:34:35.574868  476192 main.go:141] libmachine: (addons-690916) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0116 02:34:35.574882  476192 main.go:141] libmachine: (addons-690916) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17965-468241/.minikube
	I0116 02:34:35.574897  476192 main.go:141] libmachine: (addons-690916) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17965-468241
	I0116 02:34:35.574913  476192 main.go:141] libmachine: (addons-690916) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0116 02:34:35.574923  476192 main.go:141] libmachine: (addons-690916) DBG | Checking permissions on dir: /home/jenkins
	I0116 02:34:35.574933  476192 main.go:141] libmachine: (addons-690916) DBG | Checking permissions on dir: /home
	I0116 02:34:35.574946  476192 main.go:141] libmachine: (addons-690916) DBG | Skipping /home - not owner
	I0116 02:34:35.574953  476192 main.go:141] libmachine: (addons-690916) Creating domain...
	I0116 02:34:35.575844  476192 main.go:141] libmachine: (addons-690916) define libvirt domain using xml: 
	I0116 02:34:35.575870  476192 main.go:141] libmachine: (addons-690916) <domain type='kvm'>
	I0116 02:34:35.575877  476192 main.go:141] libmachine: (addons-690916)   <name>addons-690916</name>
	I0116 02:34:35.575884  476192 main.go:141] libmachine: (addons-690916)   <memory unit='MiB'>4000</memory>
	I0116 02:34:35.575897  476192 main.go:141] libmachine: (addons-690916)   <vcpu>2</vcpu>
	I0116 02:34:35.575909  476192 main.go:141] libmachine: (addons-690916)   <features>
	I0116 02:34:35.575919  476192 main.go:141] libmachine: (addons-690916)     <acpi/>
	I0116 02:34:35.575929  476192 main.go:141] libmachine: (addons-690916)     <apic/>
	I0116 02:34:35.575935  476192 main.go:141] libmachine: (addons-690916)     <pae/>
	I0116 02:34:35.575942  476192 main.go:141] libmachine: (addons-690916)     
	I0116 02:34:35.575955  476192 main.go:141] libmachine: (addons-690916)   </features>
	I0116 02:34:35.575969  476192 main.go:141] libmachine: (addons-690916)   <cpu mode='host-passthrough'>
	I0116 02:34:35.575977  476192 main.go:141] libmachine: (addons-690916)   
	I0116 02:34:35.575985  476192 main.go:141] libmachine: (addons-690916)   </cpu>
	I0116 02:34:35.575996  476192 main.go:141] libmachine: (addons-690916)   <os>
	I0116 02:34:35.576015  476192 main.go:141] libmachine: (addons-690916)     <type>hvm</type>
	I0116 02:34:35.576028  476192 main.go:141] libmachine: (addons-690916)     <boot dev='cdrom'/>
	I0116 02:34:35.576074  476192 main.go:141] libmachine: (addons-690916)     <boot dev='hd'/>
	I0116 02:34:35.576089  476192 main.go:141] libmachine: (addons-690916)     <bootmenu enable='no'/>
	I0116 02:34:35.576113  476192 main.go:141] libmachine: (addons-690916)   </os>
	I0116 02:34:35.576121  476192 main.go:141] libmachine: (addons-690916)   <devices>
	I0116 02:34:35.576129  476192 main.go:141] libmachine: (addons-690916)     <disk type='file' device='cdrom'>
	I0116 02:34:35.576145  476192 main.go:141] libmachine: (addons-690916)       <source file='/home/jenkins/minikube-integration/17965-468241/.minikube/machines/addons-690916/boot2docker.iso'/>
	I0116 02:34:35.576162  476192 main.go:141] libmachine: (addons-690916)       <target dev='hdc' bus='scsi'/>
	I0116 02:34:35.576173  476192 main.go:141] libmachine: (addons-690916)       <readonly/>
	I0116 02:34:35.576190  476192 main.go:141] libmachine: (addons-690916)     </disk>
	I0116 02:34:35.576204  476192 main.go:141] libmachine: (addons-690916)     <disk type='file' device='disk'>
	I0116 02:34:35.576218  476192 main.go:141] libmachine: (addons-690916)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0116 02:34:35.576247  476192 main.go:141] libmachine: (addons-690916)       <source file='/home/jenkins/minikube-integration/17965-468241/.minikube/machines/addons-690916/addons-690916.rawdisk'/>
	I0116 02:34:35.576271  476192 main.go:141] libmachine: (addons-690916)       <target dev='hda' bus='virtio'/>
	I0116 02:34:35.576279  476192 main.go:141] libmachine: (addons-690916)     </disk>
	I0116 02:34:35.576284  476192 main.go:141] libmachine: (addons-690916)     <interface type='network'>
	I0116 02:34:35.576294  476192 main.go:141] libmachine: (addons-690916)       <source network='mk-addons-690916'/>
	I0116 02:34:35.576300  476192 main.go:141] libmachine: (addons-690916)       <model type='virtio'/>
	I0116 02:34:35.576308  476192 main.go:141] libmachine: (addons-690916)     </interface>
	I0116 02:34:35.576314  476192 main.go:141] libmachine: (addons-690916)     <interface type='network'>
	I0116 02:34:35.576322  476192 main.go:141] libmachine: (addons-690916)       <source network='default'/>
	I0116 02:34:35.576327  476192 main.go:141] libmachine: (addons-690916)       <model type='virtio'/>
	I0116 02:34:35.576337  476192 main.go:141] libmachine: (addons-690916)     </interface>
	I0116 02:34:35.576342  476192 main.go:141] libmachine: (addons-690916)     <serial type='pty'>
	I0116 02:34:35.576349  476192 main.go:141] libmachine: (addons-690916)       <target port='0'/>
	I0116 02:34:35.576370  476192 main.go:141] libmachine: (addons-690916)     </serial>
	I0116 02:34:35.576379  476192 main.go:141] libmachine: (addons-690916)     <console type='pty'>
	I0116 02:34:35.576385  476192 main.go:141] libmachine: (addons-690916)       <target type='serial' port='0'/>
	I0116 02:34:35.576393  476192 main.go:141] libmachine: (addons-690916)     </console>
	I0116 02:34:35.576398  476192 main.go:141] libmachine: (addons-690916)     <rng model='virtio'>
	I0116 02:34:35.576408  476192 main.go:141] libmachine: (addons-690916)       <backend model='random'>/dev/random</backend>
	I0116 02:34:35.576413  476192 main.go:141] libmachine: (addons-690916)     </rng>
	I0116 02:34:35.576420  476192 main.go:141] libmachine: (addons-690916)     
	I0116 02:34:35.576425  476192 main.go:141] libmachine: (addons-690916)     
	I0116 02:34:35.576452  476192 main.go:141] libmachine: (addons-690916)   </devices>
	I0116 02:34:35.576471  476192 main.go:141] libmachine: (addons-690916) </domain>
	I0116 02:34:35.576485  476192 main.go:141] libmachine: (addons-690916) 
	I0116 02:34:35.582939  476192 main.go:141] libmachine: (addons-690916) DBG | domain addons-690916 has defined MAC address 52:54:00:f0:66:60 in network default
	I0116 02:34:35.583476  476192 main.go:141] libmachine: (addons-690916) Ensuring networks are active...
	I0116 02:34:35.583494  476192 main.go:141] libmachine: (addons-690916) DBG | domain addons-690916 has defined MAC address 52:54:00:5c:a7:a7 in network mk-addons-690916
	I0116 02:34:35.584163  476192 main.go:141] libmachine: (addons-690916) Ensuring network default is active
	I0116 02:34:35.584496  476192 main.go:141] libmachine: (addons-690916) Ensuring network mk-addons-690916 is active
	I0116 02:34:35.584931  476192 main.go:141] libmachine: (addons-690916) Getting domain xml...
	I0116 02:34:35.585590  476192 main.go:141] libmachine: (addons-690916) Creating domain...
	I0116 02:34:36.092832  476192 main.go:141] libmachine: (addons-690916) Waiting to get IP...
	I0116 02:34:36.093714  476192 main.go:141] libmachine: (addons-690916) DBG | domain addons-690916 has defined MAC address 52:54:00:5c:a7:a7 in network mk-addons-690916
	I0116 02:34:36.094189  476192 main.go:141] libmachine: (addons-690916) DBG | unable to find current IP address of domain addons-690916 in network mk-addons-690916
	I0116 02:34:36.094221  476192 main.go:141] libmachine: (addons-690916) DBG | I0116 02:34:36.094146  476214 retry.go:31] will retry after 227.424207ms: waiting for machine to come up
	I0116 02:34:36.323583  476192 main.go:141] libmachine: (addons-690916) DBG | domain addons-690916 has defined MAC address 52:54:00:5c:a7:a7 in network mk-addons-690916
	I0116 02:34:36.324001  476192 main.go:141] libmachine: (addons-690916) DBG | unable to find current IP address of domain addons-690916 in network mk-addons-690916
	I0116 02:34:36.324066  476192 main.go:141] libmachine: (addons-690916) DBG | I0116 02:34:36.323982  476214 retry.go:31] will retry after 364.108248ms: waiting for machine to come up
	I0116 02:34:36.689646  476192 main.go:141] libmachine: (addons-690916) DBG | domain addons-690916 has defined MAC address 52:54:00:5c:a7:a7 in network mk-addons-690916
	I0116 02:34:36.690046  476192 main.go:141] libmachine: (addons-690916) DBG | unable to find current IP address of domain addons-690916 in network mk-addons-690916
	I0116 02:34:36.690079  476192 main.go:141] libmachine: (addons-690916) DBG | I0116 02:34:36.690009  476214 retry.go:31] will retry after 384.976046ms: waiting for machine to come up
	I0116 02:34:37.076509  476192 main.go:141] libmachine: (addons-690916) DBG | domain addons-690916 has defined MAC address 52:54:00:5c:a7:a7 in network mk-addons-690916
	I0116 02:34:37.076972  476192 main.go:141] libmachine: (addons-690916) DBG | unable to find current IP address of domain addons-690916 in network mk-addons-690916
	I0116 02:34:37.077005  476192 main.go:141] libmachine: (addons-690916) DBG | I0116 02:34:37.076922  476214 retry.go:31] will retry after 438.877925ms: waiting for machine to come up
	I0116 02:34:37.517558  476192 main.go:141] libmachine: (addons-690916) DBG | domain addons-690916 has defined MAC address 52:54:00:5c:a7:a7 in network mk-addons-690916
	I0116 02:34:37.518093  476192 main.go:141] libmachine: (addons-690916) DBG | unable to find current IP address of domain addons-690916 in network mk-addons-690916
	I0116 02:34:37.518128  476192 main.go:141] libmachine: (addons-690916) DBG | I0116 02:34:37.518030  476214 retry.go:31] will retry after 526.839162ms: waiting for machine to come up
	I0116 02:34:38.046731  476192 main.go:141] libmachine: (addons-690916) DBG | domain addons-690916 has defined MAC address 52:54:00:5c:a7:a7 in network mk-addons-690916
	I0116 02:34:38.047157  476192 main.go:141] libmachine: (addons-690916) DBG | unable to find current IP address of domain addons-690916 in network mk-addons-690916
	I0116 02:34:38.047184  476192 main.go:141] libmachine: (addons-690916) DBG | I0116 02:34:38.047104  476214 retry.go:31] will retry after 805.784778ms: waiting for machine to come up
	I0116 02:34:38.854286  476192 main.go:141] libmachine: (addons-690916) DBG | domain addons-690916 has defined MAC address 52:54:00:5c:a7:a7 in network mk-addons-690916
	I0116 02:34:38.854698  476192 main.go:141] libmachine: (addons-690916) DBG | unable to find current IP address of domain addons-690916 in network mk-addons-690916
	I0116 02:34:38.854733  476192 main.go:141] libmachine: (addons-690916) DBG | I0116 02:34:38.854645  476214 retry.go:31] will retry after 813.68281ms: waiting for machine to come up
	I0116 02:34:39.670400  476192 main.go:141] libmachine: (addons-690916) DBG | domain addons-690916 has defined MAC address 52:54:00:5c:a7:a7 in network mk-addons-690916
	I0116 02:34:39.670885  476192 main.go:141] libmachine: (addons-690916) DBG | unable to find current IP address of domain addons-690916 in network mk-addons-690916
	I0116 02:34:39.670918  476192 main.go:141] libmachine: (addons-690916) DBG | I0116 02:34:39.670810  476214 retry.go:31] will retry after 1.342883051s: waiting for machine to come up
	I0116 02:34:41.015644  476192 main.go:141] libmachine: (addons-690916) DBG | domain addons-690916 has defined MAC address 52:54:00:5c:a7:a7 in network mk-addons-690916
	I0116 02:34:41.016174  476192 main.go:141] libmachine: (addons-690916) DBG | unable to find current IP address of domain addons-690916 in network mk-addons-690916
	I0116 02:34:41.016224  476192 main.go:141] libmachine: (addons-690916) DBG | I0116 02:34:41.016115  476214 retry.go:31] will retry after 1.315599815s: waiting for machine to come up
	I0116 02:34:42.333536  476192 main.go:141] libmachine: (addons-690916) DBG | domain addons-690916 has defined MAC address 52:54:00:5c:a7:a7 in network mk-addons-690916
	I0116 02:34:42.333932  476192 main.go:141] libmachine: (addons-690916) DBG | unable to find current IP address of domain addons-690916 in network mk-addons-690916
	I0116 02:34:42.333975  476192 main.go:141] libmachine: (addons-690916) DBG | I0116 02:34:42.333887  476214 retry.go:31] will retry after 1.883678445s: waiting for machine to come up
	I0116 02:34:44.220145  476192 main.go:141] libmachine: (addons-690916) DBG | domain addons-690916 has defined MAC address 52:54:00:5c:a7:a7 in network mk-addons-690916
	I0116 02:34:44.220629  476192 main.go:141] libmachine: (addons-690916) DBG | unable to find current IP address of domain addons-690916 in network mk-addons-690916
	I0116 02:34:44.220660  476192 main.go:141] libmachine: (addons-690916) DBG | I0116 02:34:44.220583  476214 retry.go:31] will retry after 2.013255165s: waiting for machine to come up
	I0116 02:34:46.235894  476192 main.go:141] libmachine: (addons-690916) DBG | domain addons-690916 has defined MAC address 52:54:00:5c:a7:a7 in network mk-addons-690916
	I0116 02:34:46.236377  476192 main.go:141] libmachine: (addons-690916) DBG | unable to find current IP address of domain addons-690916 in network mk-addons-690916
	I0116 02:34:46.236409  476192 main.go:141] libmachine: (addons-690916) DBG | I0116 02:34:46.236321  476214 retry.go:31] will retry after 3.152367223s: waiting for machine to come up
	I0116 02:34:49.392719  476192 main.go:141] libmachine: (addons-690916) DBG | domain addons-690916 has defined MAC address 52:54:00:5c:a7:a7 in network mk-addons-690916
	I0116 02:34:49.393127  476192 main.go:141] libmachine: (addons-690916) DBG | unable to find current IP address of domain addons-690916 in network mk-addons-690916
	I0116 02:34:49.393166  476192 main.go:141] libmachine: (addons-690916) DBG | I0116 02:34:49.393062  476214 retry.go:31] will retry after 3.108226486s: waiting for machine to come up
	I0116 02:34:52.504748  476192 main.go:141] libmachine: (addons-690916) DBG | domain addons-690916 has defined MAC address 52:54:00:5c:a7:a7 in network mk-addons-690916
	I0116 02:34:52.505190  476192 main.go:141] libmachine: (addons-690916) DBG | unable to find current IP address of domain addons-690916 in network mk-addons-690916
	I0116 02:34:52.505216  476192 main.go:141] libmachine: (addons-690916) DBG | I0116 02:34:52.505156  476214 retry.go:31] will retry after 4.251411575s: waiting for machine to come up
	I0116 02:34:56.757865  476192 main.go:141] libmachine: (addons-690916) DBG | domain addons-690916 has defined MAC address 52:54:00:5c:a7:a7 in network mk-addons-690916
	I0116 02:34:56.758311  476192 main.go:141] libmachine: (addons-690916) DBG | domain addons-690916 has current primary IP address 192.168.39.234 and MAC address 52:54:00:5c:a7:a7 in network mk-addons-690916
	I0116 02:34:56.758355  476192 main.go:141] libmachine: (addons-690916) Found IP for machine: 192.168.39.234
	I0116 02:34:56.758369  476192 main.go:141] libmachine: (addons-690916) Reserving static IP address...
	I0116 02:34:56.758749  476192 main.go:141] libmachine: (addons-690916) DBG | unable to find host DHCP lease matching {name: "addons-690916", mac: "52:54:00:5c:a7:a7", ip: "192.168.39.234"} in network mk-addons-690916
	I0116 02:34:56.832655  476192 main.go:141] libmachine: (addons-690916) DBG | Getting to WaitForSSH function...
	I0116 02:34:56.832690  476192 main.go:141] libmachine: (addons-690916) Reserved static IP address: 192.168.39.234
	I0116 02:34:56.832704  476192 main.go:141] libmachine: (addons-690916) Waiting for SSH to be available...
	I0116 02:34:56.835475  476192 main.go:141] libmachine: (addons-690916) DBG | domain addons-690916 has defined MAC address 52:54:00:5c:a7:a7 in network mk-addons-690916
	I0116 02:34:56.835991  476192 main.go:141] libmachine: (addons-690916) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:a7:a7", ip: ""} in network mk-addons-690916: {Iface:virbr1 ExpiryTime:2024-01-16 03:34:50 +0000 UTC Type:0 Mac:52:54:00:5c:a7:a7 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:minikube Clientid:01:52:54:00:5c:a7:a7}
	I0116 02:34:56.836021  476192 main.go:141] libmachine: (addons-690916) DBG | domain addons-690916 has defined IP address 192.168.39.234 and MAC address 52:54:00:5c:a7:a7 in network mk-addons-690916
	I0116 02:34:56.836176  476192 main.go:141] libmachine: (addons-690916) DBG | Using SSH client type: external
	I0116 02:34:56.836212  476192 main.go:141] libmachine: (addons-690916) DBG | Using SSH private key: /home/jenkins/minikube-integration/17965-468241/.minikube/machines/addons-690916/id_rsa (-rw-------)
	I0116 02:34:56.836243  476192 main.go:141] libmachine: (addons-690916) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.234 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17965-468241/.minikube/machines/addons-690916/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0116 02:34:56.836261  476192 main.go:141] libmachine: (addons-690916) DBG | About to run SSH command:
	I0116 02:34:56.836276  476192 main.go:141] libmachine: (addons-690916) DBG | exit 0
	I0116 02:34:56.924625  476192 main.go:141] libmachine: (addons-690916) DBG | SSH cmd err, output: <nil>: 
	I0116 02:34:56.924945  476192 main.go:141] libmachine: (addons-690916) KVM machine creation complete!
	I0116 02:34:56.925231  476192 main.go:141] libmachine: (addons-690916) Calling .GetConfigRaw
	I0116 02:34:56.925777  476192 main.go:141] libmachine: (addons-690916) Calling .DriverName
	I0116 02:34:56.925971  476192 main.go:141] libmachine: (addons-690916) Calling .DriverName
	I0116 02:34:56.926153  476192 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0116 02:34:56.926166  476192 main.go:141] libmachine: (addons-690916) Calling .GetState
	I0116 02:34:56.927574  476192 main.go:141] libmachine: Detecting operating system of created instance...
	I0116 02:34:56.927596  476192 main.go:141] libmachine: Waiting for SSH to be available...
	I0116 02:34:56.927604  476192 main.go:141] libmachine: Getting to WaitForSSH function...
	I0116 02:34:56.927614  476192 main.go:141] libmachine: (addons-690916) Calling .GetSSHHostname
	I0116 02:34:56.929882  476192 main.go:141] libmachine: (addons-690916) DBG | domain addons-690916 has defined MAC address 52:54:00:5c:a7:a7 in network mk-addons-690916
	I0116 02:34:56.930206  476192 main.go:141] libmachine: (addons-690916) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:a7:a7", ip: ""} in network mk-addons-690916: {Iface:virbr1 ExpiryTime:2024-01-16 03:34:50 +0000 UTC Type:0 Mac:52:54:00:5c:a7:a7 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:addons-690916 Clientid:01:52:54:00:5c:a7:a7}
	I0116 02:34:56.930234  476192 main.go:141] libmachine: (addons-690916) DBG | domain addons-690916 has defined IP address 192.168.39.234 and MAC address 52:54:00:5c:a7:a7 in network mk-addons-690916
	I0116 02:34:56.930412  476192 main.go:141] libmachine: (addons-690916) Calling .GetSSHPort
	I0116 02:34:56.930606  476192 main.go:141] libmachine: (addons-690916) Calling .GetSSHKeyPath
	I0116 02:34:56.930782  476192 main.go:141] libmachine: (addons-690916) Calling .GetSSHKeyPath
	I0116 02:34:56.931050  476192 main.go:141] libmachine: (addons-690916) Calling .GetSSHUsername
	I0116 02:34:56.931237  476192 main.go:141] libmachine: Using SSH client type: native
	I0116 02:34:56.931606  476192 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.234 22 <nil> <nil>}
	I0116 02:34:56.931618  476192 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0116 02:34:57.043410  476192 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0116 02:34:57.043433  476192 main.go:141] libmachine: Detecting the provisioner...
	I0116 02:34:57.043442  476192 main.go:141] libmachine: (addons-690916) Calling .GetSSHHostname
	I0116 02:34:57.046116  476192 main.go:141] libmachine: (addons-690916) DBG | domain addons-690916 has defined MAC address 52:54:00:5c:a7:a7 in network mk-addons-690916
	I0116 02:34:57.046512  476192 main.go:141] libmachine: (addons-690916) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:a7:a7", ip: ""} in network mk-addons-690916: {Iface:virbr1 ExpiryTime:2024-01-16 03:34:50 +0000 UTC Type:0 Mac:52:54:00:5c:a7:a7 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:addons-690916 Clientid:01:52:54:00:5c:a7:a7}
	I0116 02:34:57.046549  476192 main.go:141] libmachine: (addons-690916) DBG | domain addons-690916 has defined IP address 192.168.39.234 and MAC address 52:54:00:5c:a7:a7 in network mk-addons-690916
	I0116 02:34:57.046696  476192 main.go:141] libmachine: (addons-690916) Calling .GetSSHPort
	I0116 02:34:57.046911  476192 main.go:141] libmachine: (addons-690916) Calling .GetSSHKeyPath
	I0116 02:34:57.047135  476192 main.go:141] libmachine: (addons-690916) Calling .GetSSHKeyPath
	I0116 02:34:57.047276  476192 main.go:141] libmachine: (addons-690916) Calling .GetSSHUsername
	I0116 02:34:57.047438  476192 main.go:141] libmachine: Using SSH client type: native
	I0116 02:34:57.047778  476192 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.234 22 <nil> <nil>}
	I0116 02:34:57.047793  476192 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0116 02:34:57.156831  476192 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g19d536a-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I0116 02:34:57.156956  476192 main.go:141] libmachine: found compatible host: buildroot
	I0116 02:34:57.156975  476192 main.go:141] libmachine: Provisioning with buildroot...
	I0116 02:34:57.156987  476192 main.go:141] libmachine: (addons-690916) Calling .GetMachineName
	I0116 02:34:57.157280  476192 buildroot.go:166] provisioning hostname "addons-690916"
	I0116 02:34:57.157314  476192 main.go:141] libmachine: (addons-690916) Calling .GetMachineName
	I0116 02:34:57.157481  476192 main.go:141] libmachine: (addons-690916) Calling .GetSSHHostname
	I0116 02:34:57.160167  476192 main.go:141] libmachine: (addons-690916) DBG | domain addons-690916 has defined MAC address 52:54:00:5c:a7:a7 in network mk-addons-690916
	I0116 02:34:57.160479  476192 main.go:141] libmachine: (addons-690916) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:a7:a7", ip: ""} in network mk-addons-690916: {Iface:virbr1 ExpiryTime:2024-01-16 03:34:50 +0000 UTC Type:0 Mac:52:54:00:5c:a7:a7 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:addons-690916 Clientid:01:52:54:00:5c:a7:a7}
	I0116 02:34:57.160521  476192 main.go:141] libmachine: (addons-690916) DBG | domain addons-690916 has defined IP address 192.168.39.234 and MAC address 52:54:00:5c:a7:a7 in network mk-addons-690916
	I0116 02:34:57.160639  476192 main.go:141] libmachine: (addons-690916) Calling .GetSSHPort
	I0116 02:34:57.160869  476192 main.go:141] libmachine: (addons-690916) Calling .GetSSHKeyPath
	I0116 02:34:57.161047  476192 main.go:141] libmachine: (addons-690916) Calling .GetSSHKeyPath
	I0116 02:34:57.161196  476192 main.go:141] libmachine: (addons-690916) Calling .GetSSHUsername
	I0116 02:34:57.161382  476192 main.go:141] libmachine: Using SSH client type: native
	I0116 02:34:57.161884  476192 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.234 22 <nil> <nil>}
	I0116 02:34:57.161907  476192 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-690916 && echo "addons-690916" | sudo tee /etc/hostname
	I0116 02:34:57.285088  476192 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-690916
	
	I0116 02:34:57.285118  476192 main.go:141] libmachine: (addons-690916) Calling .GetSSHHostname
	I0116 02:34:57.288028  476192 main.go:141] libmachine: (addons-690916) DBG | domain addons-690916 has defined MAC address 52:54:00:5c:a7:a7 in network mk-addons-690916
	I0116 02:34:57.288530  476192 main.go:141] libmachine: (addons-690916) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:a7:a7", ip: ""} in network mk-addons-690916: {Iface:virbr1 ExpiryTime:2024-01-16 03:34:50 +0000 UTC Type:0 Mac:52:54:00:5c:a7:a7 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:addons-690916 Clientid:01:52:54:00:5c:a7:a7}
	I0116 02:34:57.288568  476192 main.go:141] libmachine: (addons-690916) DBG | domain addons-690916 has defined IP address 192.168.39.234 and MAC address 52:54:00:5c:a7:a7 in network mk-addons-690916
	I0116 02:34:57.288810  476192 main.go:141] libmachine: (addons-690916) Calling .GetSSHPort
	I0116 02:34:57.289029  476192 main.go:141] libmachine: (addons-690916) Calling .GetSSHKeyPath
	I0116 02:34:57.289168  476192 main.go:141] libmachine: (addons-690916) Calling .GetSSHKeyPath
	I0116 02:34:57.289302  476192 main.go:141] libmachine: (addons-690916) Calling .GetSSHUsername
	I0116 02:34:57.289492  476192 main.go:141] libmachine: Using SSH client type: native
	I0116 02:34:57.289824  476192 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.234 22 <nil> <nil>}
	I0116 02:34:57.289841  476192 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-690916' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-690916/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-690916' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0116 02:34:57.411746  476192 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0116 02:34:57.411784  476192 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17965-468241/.minikube CaCertPath:/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17965-468241/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17965-468241/.minikube}
	I0116 02:34:57.411821  476192 buildroot.go:174] setting up certificates
	I0116 02:34:57.411839  476192 provision.go:83] configureAuth start
	I0116 02:34:57.411860  476192 main.go:141] libmachine: (addons-690916) Calling .GetMachineName
	I0116 02:34:57.412217  476192 main.go:141] libmachine: (addons-690916) Calling .GetIP
	I0116 02:34:57.415293  476192 main.go:141] libmachine: (addons-690916) DBG | domain addons-690916 has defined MAC address 52:54:00:5c:a7:a7 in network mk-addons-690916
	I0116 02:34:57.415701  476192 main.go:141] libmachine: (addons-690916) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:a7:a7", ip: ""} in network mk-addons-690916: {Iface:virbr1 ExpiryTime:2024-01-16 03:34:50 +0000 UTC Type:0 Mac:52:54:00:5c:a7:a7 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:addons-690916 Clientid:01:52:54:00:5c:a7:a7}
	I0116 02:34:57.415741  476192 main.go:141] libmachine: (addons-690916) DBG | domain addons-690916 has defined IP address 192.168.39.234 and MAC address 52:54:00:5c:a7:a7 in network mk-addons-690916
	I0116 02:34:57.415932  476192 main.go:141] libmachine: (addons-690916) Calling .GetSSHHostname
	I0116 02:34:57.418653  476192 main.go:141] libmachine: (addons-690916) DBG | domain addons-690916 has defined MAC address 52:54:00:5c:a7:a7 in network mk-addons-690916
	I0116 02:34:57.419065  476192 main.go:141] libmachine: (addons-690916) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:a7:a7", ip: ""} in network mk-addons-690916: {Iface:virbr1 ExpiryTime:2024-01-16 03:34:50 +0000 UTC Type:0 Mac:52:54:00:5c:a7:a7 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:addons-690916 Clientid:01:52:54:00:5c:a7:a7}
	I0116 02:34:57.419093  476192 main.go:141] libmachine: (addons-690916) DBG | domain addons-690916 has defined IP address 192.168.39.234 and MAC address 52:54:00:5c:a7:a7 in network mk-addons-690916
	I0116 02:34:57.419250  476192 provision.go:138] copyHostCerts
	I0116 02:34:57.419350  476192 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17965-468241/.minikube/ca.pem (1078 bytes)
	I0116 02:34:57.419487  476192 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17965-468241/.minikube/cert.pem (1123 bytes)
	I0116 02:34:57.419590  476192 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17965-468241/.minikube/key.pem (1679 bytes)
	I0116 02:34:57.419650  476192 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17965-468241/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca-key.pem org=jenkins.addons-690916 san=[192.168.39.234 192.168.39.234 localhost 127.0.0.1 minikube addons-690916]
	I0116 02:34:57.499073  476192 provision.go:172] copyRemoteCerts
	I0116 02:34:57.499140  476192 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0116 02:34:57.499166  476192 main.go:141] libmachine: (addons-690916) Calling .GetSSHHostname
	I0116 02:34:57.501781  476192 main.go:141] libmachine: (addons-690916) DBG | domain addons-690916 has defined MAC address 52:54:00:5c:a7:a7 in network mk-addons-690916
	I0116 02:34:57.502116  476192 main.go:141] libmachine: (addons-690916) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:a7:a7", ip: ""} in network mk-addons-690916: {Iface:virbr1 ExpiryTime:2024-01-16 03:34:50 +0000 UTC Type:0 Mac:52:54:00:5c:a7:a7 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:addons-690916 Clientid:01:52:54:00:5c:a7:a7}
	I0116 02:34:57.502144  476192 main.go:141] libmachine: (addons-690916) DBG | domain addons-690916 has defined IP address 192.168.39.234 and MAC address 52:54:00:5c:a7:a7 in network mk-addons-690916
	I0116 02:34:57.502324  476192 main.go:141] libmachine: (addons-690916) Calling .GetSSHPort
	I0116 02:34:57.502522  476192 main.go:141] libmachine: (addons-690916) Calling .GetSSHKeyPath
	I0116 02:34:57.502670  476192 main.go:141] libmachine: (addons-690916) Calling .GetSSHUsername
	I0116 02:34:57.502828  476192 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/addons-690916/id_rsa Username:docker}
	I0116 02:34:57.589467  476192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0116 02:34:57.615354  476192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0116 02:34:57.640884  476192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0116 02:34:57.665995  476192 provision.go:86] duration metric: configureAuth took 254.133358ms
	I0116 02:34:57.666040  476192 buildroot.go:189] setting minikube options for container-runtime
	I0116 02:34:57.666370  476192 config.go:182] Loaded profile config "addons-690916": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 02:34:57.666502  476192 main.go:141] libmachine: (addons-690916) Calling .GetSSHHostname
	I0116 02:34:57.669163  476192 main.go:141] libmachine: (addons-690916) DBG | domain addons-690916 has defined MAC address 52:54:00:5c:a7:a7 in network mk-addons-690916
	I0116 02:34:57.669526  476192 main.go:141] libmachine: (addons-690916) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:a7:a7", ip: ""} in network mk-addons-690916: {Iface:virbr1 ExpiryTime:2024-01-16 03:34:50 +0000 UTC Type:0 Mac:52:54:00:5c:a7:a7 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:addons-690916 Clientid:01:52:54:00:5c:a7:a7}
	I0116 02:34:57.669556  476192 main.go:141] libmachine: (addons-690916) DBG | domain addons-690916 has defined IP address 192.168.39.234 and MAC address 52:54:00:5c:a7:a7 in network mk-addons-690916
	I0116 02:34:57.669769  476192 main.go:141] libmachine: (addons-690916) Calling .GetSSHPort
	I0116 02:34:57.669993  476192 main.go:141] libmachine: (addons-690916) Calling .GetSSHKeyPath
	I0116 02:34:57.670198  476192 main.go:141] libmachine: (addons-690916) Calling .GetSSHKeyPath
	I0116 02:34:57.670368  476192 main.go:141] libmachine: (addons-690916) Calling .GetSSHUsername
	I0116 02:34:57.670571  476192 main.go:141] libmachine: Using SSH client type: native
	I0116 02:34:57.670890  476192 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.234 22 <nil> <nil>}
	I0116 02:34:57.670906  476192 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0116 02:34:57.980970  476192 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0116 02:34:57.981006  476192 main.go:141] libmachine: Checking connection to Docker...
	I0116 02:34:57.981029  476192 main.go:141] libmachine: (addons-690916) Calling .GetURL
	I0116 02:34:57.982431  476192 main.go:141] libmachine: (addons-690916) DBG | Using libvirt version 6000000
	I0116 02:34:57.985569  476192 main.go:141] libmachine: (addons-690916) DBG | domain addons-690916 has defined MAC address 52:54:00:5c:a7:a7 in network mk-addons-690916
	I0116 02:34:57.985976  476192 main.go:141] libmachine: (addons-690916) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:a7:a7", ip: ""} in network mk-addons-690916: {Iface:virbr1 ExpiryTime:2024-01-16 03:34:50 +0000 UTC Type:0 Mac:52:54:00:5c:a7:a7 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:addons-690916 Clientid:01:52:54:00:5c:a7:a7}
	I0116 02:34:57.986002  476192 main.go:141] libmachine: (addons-690916) DBG | domain addons-690916 has defined IP address 192.168.39.234 and MAC address 52:54:00:5c:a7:a7 in network mk-addons-690916
	I0116 02:34:57.986191  476192 main.go:141] libmachine: Docker is up and running!
	I0116 02:34:57.986208  476192 main.go:141] libmachine: Reticulating splines...
	I0116 02:34:57.986216  476192 client.go:171] LocalClient.Create took 23.376799406s
	I0116 02:34:57.986238  476192 start.go:167] duration metric: libmachine.API.Create for "addons-690916" took 23.376882603s
	I0116 02:34:57.986280  476192 start.go:300] post-start starting for "addons-690916" (driver="kvm2")
	I0116 02:34:57.986294  476192 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0116 02:34:57.986312  476192 main.go:141] libmachine: (addons-690916) Calling .DriverName
	I0116 02:34:57.986610  476192 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0116 02:34:57.986635  476192 main.go:141] libmachine: (addons-690916) Calling .GetSSHHostname
	I0116 02:34:57.989096  476192 main.go:141] libmachine: (addons-690916) DBG | domain addons-690916 has defined MAC address 52:54:00:5c:a7:a7 in network mk-addons-690916
	I0116 02:34:57.989456  476192 main.go:141] libmachine: (addons-690916) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:a7:a7", ip: ""} in network mk-addons-690916: {Iface:virbr1 ExpiryTime:2024-01-16 03:34:50 +0000 UTC Type:0 Mac:52:54:00:5c:a7:a7 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:addons-690916 Clientid:01:52:54:00:5c:a7:a7}
	I0116 02:34:57.989493  476192 main.go:141] libmachine: (addons-690916) DBG | domain addons-690916 has defined IP address 192.168.39.234 and MAC address 52:54:00:5c:a7:a7 in network mk-addons-690916
	I0116 02:34:57.989651  476192 main.go:141] libmachine: (addons-690916) Calling .GetSSHPort
	I0116 02:34:57.989841  476192 main.go:141] libmachine: (addons-690916) Calling .GetSSHKeyPath
	I0116 02:34:57.990055  476192 main.go:141] libmachine: (addons-690916) Calling .GetSSHUsername
	I0116 02:34:57.990251  476192 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/addons-690916/id_rsa Username:docker}
	I0116 02:34:58.081121  476192 ssh_runner.go:195] Run: cat /etc/os-release
	I0116 02:34:58.086163  476192 info.go:137] Remote host: Buildroot 2021.02.12
	I0116 02:34:58.086203  476192 filesync.go:126] Scanning /home/jenkins/minikube-integration/17965-468241/.minikube/addons for local assets ...
	I0116 02:34:58.086297  476192 filesync.go:126] Scanning /home/jenkins/minikube-integration/17965-468241/.minikube/files for local assets ...
	I0116 02:34:58.086328  476192 start.go:303] post-start completed in 100.038979ms
	I0116 02:34:58.086375  476192 main.go:141] libmachine: (addons-690916) Calling .GetConfigRaw
	I0116 02:34:58.086995  476192 main.go:141] libmachine: (addons-690916) Calling .GetIP
	I0116 02:34:58.090076  476192 main.go:141] libmachine: (addons-690916) DBG | domain addons-690916 has defined MAC address 52:54:00:5c:a7:a7 in network mk-addons-690916
	I0116 02:34:58.090485  476192 main.go:141] libmachine: (addons-690916) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:a7:a7", ip: ""} in network mk-addons-690916: {Iface:virbr1 ExpiryTime:2024-01-16 03:34:50 +0000 UTC Type:0 Mac:52:54:00:5c:a7:a7 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:addons-690916 Clientid:01:52:54:00:5c:a7:a7}
	I0116 02:34:58.090510  476192 main.go:141] libmachine: (addons-690916) DBG | domain addons-690916 has defined IP address 192.168.39.234 and MAC address 52:54:00:5c:a7:a7 in network mk-addons-690916
	I0116 02:34:58.090851  476192 profile.go:148] Saving config to /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/addons-690916/config.json ...
	I0116 02:34:58.091067  476192 start.go:128] duration metric: createHost completed in 23.501472765s
	I0116 02:34:58.091103  476192 main.go:141] libmachine: (addons-690916) Calling .GetSSHHostname
	I0116 02:34:58.093216  476192 main.go:141] libmachine: (addons-690916) DBG | domain addons-690916 has defined MAC address 52:54:00:5c:a7:a7 in network mk-addons-690916
	I0116 02:34:58.093560  476192 main.go:141] libmachine: (addons-690916) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:a7:a7", ip: ""} in network mk-addons-690916: {Iface:virbr1 ExpiryTime:2024-01-16 03:34:50 +0000 UTC Type:0 Mac:52:54:00:5c:a7:a7 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:addons-690916 Clientid:01:52:54:00:5c:a7:a7}
	I0116 02:34:58.093588  476192 main.go:141] libmachine: (addons-690916) DBG | domain addons-690916 has defined IP address 192.168.39.234 and MAC address 52:54:00:5c:a7:a7 in network mk-addons-690916
	I0116 02:34:58.093725  476192 main.go:141] libmachine: (addons-690916) Calling .GetSSHPort
	I0116 02:34:58.093932  476192 main.go:141] libmachine: (addons-690916) Calling .GetSSHKeyPath
	I0116 02:34:58.094202  476192 main.go:141] libmachine: (addons-690916) Calling .GetSSHKeyPath
	I0116 02:34:58.094377  476192 main.go:141] libmachine: (addons-690916) Calling .GetSSHUsername
	I0116 02:34:58.094573  476192 main.go:141] libmachine: Using SSH client type: native
	I0116 02:34:58.094904  476192 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.234 22 <nil> <nil>}
	I0116 02:34:58.094916  476192 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0116 02:34:58.209140  476192 main.go:141] libmachine: SSH cmd err, output: <nil>: 1705372498.185645136
	
	I0116 02:34:58.209167  476192 fix.go:206] guest clock: 1705372498.185645136
	I0116 02:34:58.209175  476192 fix.go:219] Guest: 2024-01-16 02:34:58.185645136 +0000 UTC Remote: 2024-01-16 02:34:58.091080886 +0000 UTC m=+23.627084231 (delta=94.56425ms)
	I0116 02:34:58.209196  476192 fix.go:190] guest clock delta is within tolerance: 94.56425ms
	I0116 02:34:58.209202  476192 start.go:83] releasing machines lock for "addons-690916", held for 23.619688031s
	I0116 02:34:58.209229  476192 main.go:141] libmachine: (addons-690916) Calling .DriverName
	I0116 02:34:58.209518  476192 main.go:141] libmachine: (addons-690916) Calling .GetIP
	I0116 02:34:58.212168  476192 main.go:141] libmachine: (addons-690916) DBG | domain addons-690916 has defined MAC address 52:54:00:5c:a7:a7 in network mk-addons-690916
	I0116 02:34:58.212531  476192 main.go:141] libmachine: (addons-690916) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:a7:a7", ip: ""} in network mk-addons-690916: {Iface:virbr1 ExpiryTime:2024-01-16 03:34:50 +0000 UTC Type:0 Mac:52:54:00:5c:a7:a7 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:addons-690916 Clientid:01:52:54:00:5c:a7:a7}
	I0116 02:34:58.212564  476192 main.go:141] libmachine: (addons-690916) DBG | domain addons-690916 has defined IP address 192.168.39.234 and MAC address 52:54:00:5c:a7:a7 in network mk-addons-690916
	I0116 02:34:58.212773  476192 main.go:141] libmachine: (addons-690916) Calling .DriverName
	I0116 02:34:58.213285  476192 main.go:141] libmachine: (addons-690916) Calling .DriverName
	I0116 02:34:58.213586  476192 main.go:141] libmachine: (addons-690916) Calling .DriverName
	I0116 02:34:58.213728  476192 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0116 02:34:58.213791  476192 main.go:141] libmachine: (addons-690916) Calling .GetSSHHostname
	I0116 02:34:58.213879  476192 ssh_runner.go:195] Run: cat /version.json
	I0116 02:34:58.213903  476192 main.go:141] libmachine: (addons-690916) Calling .GetSSHHostname
	I0116 02:34:58.216413  476192 main.go:141] libmachine: (addons-690916) DBG | domain addons-690916 has defined MAC address 52:54:00:5c:a7:a7 in network mk-addons-690916
	I0116 02:34:58.216698  476192 main.go:141] libmachine: (addons-690916) DBG | domain addons-690916 has defined MAC address 52:54:00:5c:a7:a7 in network mk-addons-690916
	I0116 02:34:58.216735  476192 main.go:141] libmachine: (addons-690916) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:a7:a7", ip: ""} in network mk-addons-690916: {Iface:virbr1 ExpiryTime:2024-01-16 03:34:50 +0000 UTC Type:0 Mac:52:54:00:5c:a7:a7 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:addons-690916 Clientid:01:52:54:00:5c:a7:a7}
	I0116 02:34:58.216761  476192 main.go:141] libmachine: (addons-690916) DBG | domain addons-690916 has defined IP address 192.168.39.234 and MAC address 52:54:00:5c:a7:a7 in network mk-addons-690916
	I0116 02:34:58.216964  476192 main.go:141] libmachine: (addons-690916) Calling .GetSSHPort
	I0116 02:34:58.217190  476192 main.go:141] libmachine: (addons-690916) Calling .GetSSHKeyPath
	I0116 02:34:58.217277  476192 main.go:141] libmachine: (addons-690916) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:a7:a7", ip: ""} in network mk-addons-690916: {Iface:virbr1 ExpiryTime:2024-01-16 03:34:50 +0000 UTC Type:0 Mac:52:54:00:5c:a7:a7 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:addons-690916 Clientid:01:52:54:00:5c:a7:a7}
	I0116 02:34:58.217305  476192 main.go:141] libmachine: (addons-690916) DBG | domain addons-690916 has defined IP address 192.168.39.234 and MAC address 52:54:00:5c:a7:a7 in network mk-addons-690916
	I0116 02:34:58.217372  476192 main.go:141] libmachine: (addons-690916) Calling .GetSSHUsername
	I0116 02:34:58.217457  476192 main.go:141] libmachine: (addons-690916) Calling .GetSSHPort
	I0116 02:34:58.217667  476192 main.go:141] libmachine: (addons-690916) Calling .GetSSHKeyPath
	I0116 02:34:58.217692  476192 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/addons-690916/id_rsa Username:docker}
	I0116 02:34:58.217825  476192 main.go:141] libmachine: (addons-690916) Calling .GetSSHUsername
	I0116 02:34:58.217987  476192 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/addons-690916/id_rsa Username:docker}
	I0116 02:34:58.297362  476192 ssh_runner.go:195] Run: systemctl --version
	I0116 02:34:58.326032  476192 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0116 02:34:59.028896  476192 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0116 02:34:59.035353  476192 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0116 02:34:59.035441  476192 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0116 02:34:59.052966  476192 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0116 02:34:59.052995  476192 start.go:475] detecting cgroup driver to use...
	I0116 02:34:59.053089  476192 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0116 02:34:59.068555  476192 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0116 02:34:59.082666  476192 docker.go:217] disabling cri-docker service (if available) ...
	I0116 02:34:59.082734  476192 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0116 02:34:59.096772  476192 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0116 02:34:59.111200  476192 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0116 02:34:59.224986  476192 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0116 02:34:59.350770  476192 docker.go:233] disabling docker service ...
	I0116 02:34:59.350856  476192 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0116 02:34:59.365372  476192 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0116 02:34:59.378634  476192 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0116 02:34:59.494967  476192 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0116 02:34:59.620014  476192 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0116 02:34:59.635507  476192 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0116 02:34:59.654611  476192 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0116 02:34:59.654692  476192 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 02:34:59.665838  476192 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0116 02:34:59.665917  476192 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 02:34:59.677632  476192 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 02:34:59.689403  476192 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 02:34:59.700775  476192 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0116 02:34:59.712815  476192 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0116 02:34:59.723505  476192 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0116 02:34:59.723614  476192 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0116 02:34:59.738037  476192 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0116 02:34:59.747847  476192 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0116 02:34:59.857750  476192 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0116 02:35:00.294025  476192 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0116 02:35:00.294129  476192 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0116 02:35:00.299971  476192 start.go:543] Will wait 60s for crictl version
	I0116 02:35:00.300094  476192 ssh_runner.go:195] Run: which crictl
	I0116 02:35:00.304360  476192 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0116 02:35:00.345593  476192 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0116 02:35:00.345721  476192 ssh_runner.go:195] Run: crio --version
	I0116 02:35:00.396906  476192 ssh_runner.go:195] Run: crio --version
	I0116 02:35:00.447933  476192 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0116 02:35:00.449614  476192 main.go:141] libmachine: (addons-690916) Calling .GetIP
	I0116 02:35:00.452230  476192 main.go:141] libmachine: (addons-690916) DBG | domain addons-690916 has defined MAC address 52:54:00:5c:a7:a7 in network mk-addons-690916
	I0116 02:35:00.452583  476192 main.go:141] libmachine: (addons-690916) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:a7:a7", ip: ""} in network mk-addons-690916: {Iface:virbr1 ExpiryTime:2024-01-16 03:34:50 +0000 UTC Type:0 Mac:52:54:00:5c:a7:a7 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:addons-690916 Clientid:01:52:54:00:5c:a7:a7}
	I0116 02:35:00.452616  476192 main.go:141] libmachine: (addons-690916) DBG | domain addons-690916 has defined IP address 192.168.39.234 and MAC address 52:54:00:5c:a7:a7 in network mk-addons-690916
	I0116 02:35:00.452828  476192 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0116 02:35:00.457258  476192 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 02:35:00.469597  476192 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0116 02:35:00.469670  476192 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 02:35:00.505469  476192 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0116 02:35:00.505572  476192 ssh_runner.go:195] Run: which lz4
	I0116 02:35:00.509761  476192 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0116 02:35:00.514340  476192 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0116 02:35:00.514375  476192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0116 02:35:02.303604  476192 crio.go:444] Took 1.793872 seconds to copy over tarball
	I0116 02:35:02.303677  476192 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0116 02:35:05.489925  476192 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.186221442s)
	I0116 02:35:05.489957  476192 crio.go:451] Took 3.186324 seconds to extract the tarball
	I0116 02:35:05.489967  476192 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0116 02:35:05.532468  476192 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 02:35:05.611178  476192 crio.go:496] all images are preloaded for cri-o runtime.
	I0116 02:35:05.611210  476192 cache_images.go:84] Images are preloaded, skipping loading
	I0116 02:35:05.611281  476192 ssh_runner.go:195] Run: crio config
	I0116 02:35:05.671155  476192 cni.go:84] Creating CNI manager for ""
	I0116 02:35:05.671178  476192 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 02:35:05.671201  476192 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0116 02:35:05.671240  476192 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.234 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-690916 NodeName:addons-690916 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.234"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.234 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/k
ubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0116 02:35:05.671382  476192 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.234
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-690916"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.234
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.234"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0116 02:35:05.671472  476192 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=addons-690916 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.234
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:addons-690916 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0116 02:35:05.671532  476192 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0116 02:35:05.681986  476192 binaries.go:44] Found k8s binaries, skipping transfer
	I0116 02:35:05.682099  476192 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0116 02:35:05.692198  476192 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (373 bytes)
	I0116 02:35:05.710357  476192 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0116 02:35:05.728090  476192 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2100 bytes)
	I0116 02:35:05.746383  476192 ssh_runner.go:195] Run: grep 192.168.39.234	control-plane.minikube.internal$ /etc/hosts
	I0116 02:35:05.751008  476192 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.234	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 02:35:05.764687  476192 certs.go:56] Setting up /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/addons-690916 for IP: 192.168.39.234
	I0116 02:35:05.764729  476192 certs.go:190] acquiring lock for shared ca certs: {Name:mkab5aeea3e0ed69f3592dc8f418e07c6075b130 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:35:05.764884  476192 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/17965-468241/.minikube/ca.key
	I0116 02:35:05.959799  476192 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17965-468241/.minikube/ca.crt ...
	I0116 02:35:05.959839  476192 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17965-468241/.minikube/ca.crt: {Name:mkef18da5bcac20bd65477fb2a0e49c1bc50fd80 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:35:05.960011  476192 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17965-468241/.minikube/ca.key ...
	I0116 02:35:05.960022  476192 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17965-468241/.minikube/ca.key: {Name:mka855c8c5ede9b781cfaaaa21b830d300a0018b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:35:05.960107  476192 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/17965-468241/.minikube/proxy-client-ca.key
	I0116 02:35:06.149526  476192 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17965-468241/.minikube/proxy-client-ca.crt ...
	I0116 02:35:06.149568  476192 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17965-468241/.minikube/proxy-client-ca.crt: {Name:mk1121ad2a67a1fa70ef2be15f4220cfd2604641 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:35:06.149734  476192 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17965-468241/.minikube/proxy-client-ca.key ...
	I0116 02:35:06.149745  476192 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17965-468241/.minikube/proxy-client-ca.key: {Name:mkb2bef51be335c85ece0e41af7d87bd377151dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:35:06.149856  476192 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/addons-690916/client.key
	I0116 02:35:06.149872  476192 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/addons-690916/client.crt with IP's: []
	I0116 02:35:06.230683  476192 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/addons-690916/client.crt ...
	I0116 02:35:06.230726  476192 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/addons-690916/client.crt: {Name:mkd28a4b503d509e520822ecf61841946a131a30 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:35:06.230926  476192 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/addons-690916/client.key ...
	I0116 02:35:06.230941  476192 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/addons-690916/client.key: {Name:mkbeeadfc207f7e2cc74e32a1ce44cfd8cf6ddd5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:35:06.231011  476192 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/addons-690916/apiserver.key.51b88da4
	I0116 02:35:06.231029  476192 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/addons-690916/apiserver.crt.51b88da4 with IP's: [192.168.39.234 10.96.0.1 127.0.0.1 10.0.0.1]
	I0116 02:35:06.462007  476192 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/addons-690916/apiserver.crt.51b88da4 ...
	I0116 02:35:06.462057  476192 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/addons-690916/apiserver.crt.51b88da4: {Name:mk18a9319bee8df15e5e4c7e61d08a9c351a6f69 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:35:06.462266  476192 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/addons-690916/apiserver.key.51b88da4 ...
	I0116 02:35:06.462290  476192 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/addons-690916/apiserver.key.51b88da4: {Name:mk6663f64bfeebdafac799083d2a70b72a7d6d19 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:35:06.462386  476192 certs.go:337] copying /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/addons-690916/apiserver.crt.51b88da4 -> /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/addons-690916/apiserver.crt
	I0116 02:35:06.462514  476192 certs.go:341] copying /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/addons-690916/apiserver.key.51b88da4 -> /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/addons-690916/apiserver.key
	I0116 02:35:06.462589  476192 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/addons-690916/proxy-client.key
	I0116 02:35:06.462611  476192 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/addons-690916/proxy-client.crt with IP's: []
	I0116 02:35:06.752560  476192 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/addons-690916/proxy-client.crt ...
	I0116 02:35:06.752594  476192 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/addons-690916/proxy-client.crt: {Name:mkd03499b73e5d2043eb1d03f2d2ab41d19af0a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:35:06.752811  476192 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/addons-690916/proxy-client.key ...
	I0116 02:35:06.752831  476192 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/addons-690916/proxy-client.key: {Name:mk1295ede37359fbb2119f9d3607b77c2c86f5cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:35:06.753208  476192 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca-key.pem (1679 bytes)
	I0116 02:35:06.753272  476192 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem (1078 bytes)
	I0116 02:35:06.753310  476192 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/cert.pem (1123 bytes)
	I0116 02:35:06.753345  476192 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/key.pem (1679 bytes)
	I0116 02:35:06.754055  476192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/addons-690916/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0116 02:35:06.780278  476192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/addons-690916/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0116 02:35:06.805193  476192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/addons-690916/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0116 02:35:06.830274  476192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/addons-690916/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0116 02:35:06.855957  476192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0116 02:35:06.881412  476192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0116 02:35:06.906469  476192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0116 02:35:06.931997  476192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0116 02:35:06.957568  476192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0116 02:35:06.980950  476192 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0116 02:35:06.997411  476192 ssh_runner.go:195] Run: openssl version
	I0116 02:35:07.003629  476192 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0116 02:35:07.015200  476192 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0116 02:35:07.020109  476192 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 16 02:35 /usr/share/ca-certificates/minikubeCA.pem
	I0116 02:35:07.020188  476192 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0116 02:35:07.026598  476192 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0116 02:35:07.037813  476192 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0116 02:35:07.042648  476192 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0116 02:35:07.042696  476192 kubeadm.go:404] StartCluster: {Name:addons-690916 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.28.4 ClusterName:addons-690916 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.234 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[
] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 02:35:07.042819  476192 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0116 02:35:07.042866  476192 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0116 02:35:07.080728  476192 cri.go:89] found id: ""
	I0116 02:35:07.080820  476192 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0116 02:35:07.092090  476192 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0116 02:35:07.102691  476192 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0116 02:35:07.113571  476192 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0116 02:35:07.113635  476192 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0116 02:35:07.166532  476192 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0116 02:35:07.166656  476192 kubeadm.go:322] [preflight] Running pre-flight checks
	I0116 02:35:07.315266  476192 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0116 02:35:07.315380  476192 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0116 02:35:07.315494  476192 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0116 02:35:07.553686  476192 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0116 02:35:07.635774  476192 out.go:204]   - Generating certificates and keys ...
	I0116 02:35:07.635916  476192 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0116 02:35:07.635999  476192 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0116 02:35:07.891469  476192 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0116 02:35:08.197079  476192 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0116 02:35:08.398314  476192 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0116 02:35:08.545820  476192 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0116 02:35:08.826938  476192 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0116 02:35:08.827099  476192 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-690916 localhost] and IPs [192.168.39.234 127.0.0.1 ::1]
	I0116 02:35:08.902273  476192 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0116 02:35:08.902449  476192 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-690916 localhost] and IPs [192.168.39.234 127.0.0.1 ::1]
	I0116 02:35:09.096214  476192 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0116 02:35:09.174657  476192 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0116 02:35:09.393561  476192 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0116 02:35:09.393831  476192 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0116 02:35:09.578940  476192 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0116 02:35:09.633971  476192 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0116 02:35:09.702807  476192 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0116 02:35:10.027811  476192 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0116 02:35:10.028531  476192 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0116 02:35:10.030848  476192 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0116 02:35:10.032957  476192 out.go:204]   - Booting up control plane ...
	I0116 02:35:10.033063  476192 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0116 02:35:10.034147  476192 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0116 02:35:10.035078  476192 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0116 02:35:10.050987  476192 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0116 02:35:10.051358  476192 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0116 02:35:10.051450  476192 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0116 02:35:10.183175  476192 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0116 02:35:18.184275  476192 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.003338 seconds
	I0116 02:35:18.184416  476192 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0116 02:35:18.208304  476192 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0116 02:35:18.750139  476192 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0116 02:35:18.750503  476192 kubeadm.go:322] [mark-control-plane] Marking the node addons-690916 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0116 02:35:19.266214  476192 kubeadm.go:322] [bootstrap-token] Using token: nsh723.w17h3d63ofoy3d6o
	I0116 02:35:19.267773  476192 out.go:204]   - Configuring RBAC rules ...
	I0116 02:35:19.267904  476192 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0116 02:35:19.279115  476192 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0116 02:35:19.288985  476192 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0116 02:35:19.293251  476192 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0116 02:35:19.298220  476192 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0116 02:35:19.303429  476192 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0116 02:35:19.325818  476192 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0116 02:35:19.642123  476192 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0116 02:35:19.838990  476192 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0116 02:35:19.840902  476192 kubeadm.go:322] 
	I0116 02:35:19.841007  476192 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0116 02:35:19.841021  476192 kubeadm.go:322] 
	I0116 02:35:19.841099  476192 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0116 02:35:19.841106  476192 kubeadm.go:322] 
	I0116 02:35:19.841127  476192 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0116 02:35:19.841194  476192 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0116 02:35:19.841260  476192 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0116 02:35:19.841273  476192 kubeadm.go:322] 
	I0116 02:35:19.841335  476192 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0116 02:35:19.841431  476192 kubeadm.go:322] 
	I0116 02:35:19.841517  476192 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0116 02:35:19.841528  476192 kubeadm.go:322] 
	I0116 02:35:19.841604  476192 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0116 02:35:19.841698  476192 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0116 02:35:19.841798  476192 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0116 02:35:19.841811  476192 kubeadm.go:322] 
	I0116 02:35:19.841903  476192 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0116 02:35:19.842016  476192 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0116 02:35:19.842027  476192 kubeadm.go:322] 
	I0116 02:35:19.842111  476192 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token nsh723.w17h3d63ofoy3d6o \
	I0116 02:35:19.842222  476192 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:ab75761e1119a1709a216b682cd7b1dcce0641115df5f236b54b16e4f66aa044 \
	I0116 02:35:19.842258  476192 kubeadm.go:322] 	--control-plane 
	I0116 02:35:19.842265  476192 kubeadm.go:322] 
	I0116 02:35:19.842367  476192 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0116 02:35:19.842386  476192 kubeadm.go:322] 
	I0116 02:35:19.842488  476192 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token nsh723.w17h3d63ofoy3d6o \
	I0116 02:35:19.842614  476192 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:ab75761e1119a1709a216b682cd7b1dcce0641115df5f236b54b16e4f66aa044 
	I0116 02:35:19.843238  476192 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0116 02:35:19.843278  476192 cni.go:84] Creating CNI manager for ""
	I0116 02:35:19.843289  476192 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 02:35:19.845490  476192 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0116 02:35:19.847330  476192 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0116 02:35:19.881558  476192 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
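	(For reference: the conflist written above follows the standard CNI network-list schema used by the bridge plugin. The exact 457-byte file minikube generated is not reproduced in this log, so the sketch below is illustrative only; the bridge name, subnet, and plugin list are assumptions, not values captured from this run.)

	# Illustrative sketch only: field values are assumptions, not the file from this run.
	cat <<'EOF' | sudo tee /etc/cni/net.d/1-k8s.conflist
	{
	  "cniVersion": "0.4.0",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": {
	        "type": "host-local",
	        "subnet": "10.244.0.0/16"
	      }
	    },
	    {
	      "type": "portmap",
	      "capabilities": { "portMappings": true }
	    }
	  ]
	}
	EOF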
	I0116 02:35:19.899357  476192 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0116 02:35:19.899457  476192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:35:19.899470  476192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=6e8fa5f64d0e7272be43ff25ed3826261f0a2578 minikube.k8s.io/name=addons-690916 minikube.k8s.io/updated_at=2024_01_16T02_35_19_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:35:20.097780  476192 ops.go:34] apiserver oom_adj: -16
	I0116 02:35:20.097974  476192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:35:20.599065  476192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:35:21.098516  476192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:35:21.598396  476192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:35:22.098077  476192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:35:22.599006  476192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:35:23.098212  476192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:35:23.598263  476192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:35:24.098777  476192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:35:24.598598  476192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:35:25.098146  476192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:35:25.598333  476192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:35:26.098301  476192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:35:26.598411  476192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:35:27.098605  476192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:35:27.598803  476192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:35:28.098274  476192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:35:28.598328  476192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:35:29.098965  476192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:35:29.598490  476192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:35:30.098818  476192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:35:30.598282  476192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:35:31.098575  476192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:35:31.598980  476192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:35:31.703028  476192 kubeadm.go:1088] duration metric: took 11.803635416s to wait for elevateKubeSystemPrivileges.
	I0116 02:35:31.703071  476192 kubeadm.go:406] StartCluster complete in 24.660379063s
	I0116 02:35:31.703099  476192 settings.go:142] acquiring lock: {Name:mk3ded99c06d285b174e76622bb0741c8dd2d8fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:35:31.703271  476192 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17965-468241/kubeconfig
	I0116 02:35:31.703750  476192 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17965-468241/kubeconfig: {Name:mkac3c63bbcba9b4fa1bea3480797757d703fb9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:35:31.703970  476192 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0116 02:35:31.704168  476192 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
	I0116 02:35:31.704302  476192 config.go:182] Loaded profile config "addons-690916": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 02:35:31.704325  476192 addons.go:69] Setting inspektor-gadget=true in profile "addons-690916"
	I0116 02:35:31.704330  476192 addons.go:69] Setting helm-tiller=true in profile "addons-690916"
	I0116 02:35:31.704329  476192 addons.go:69] Setting default-storageclass=true in profile "addons-690916"
	I0116 02:35:31.704349  476192 addons.go:234] Setting addon helm-tiller=true in "addons-690916"
	I0116 02:35:31.704357  476192 addons.go:69] Setting metrics-server=true in profile "addons-690916"
	I0116 02:35:31.704350  476192 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-690916"
	I0116 02:35:31.704301  476192 addons.go:69] Setting ingress=true in profile "addons-690916"
	I0116 02:35:31.704365  476192 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-690916"
	I0116 02:35:31.704377  476192 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-690916"
	I0116 02:35:31.704384  476192 addons.go:234] Setting addon ingress=true in "addons-690916"
	I0116 02:35:31.704389  476192 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-690916"
	I0116 02:35:31.704391  476192 addons.go:69] Setting storage-provisioner=true in profile "addons-690916"
	I0116 02:35:31.704415  476192 addons.go:234] Setting addon storage-provisioner=true in "addons-690916"
	I0116 02:35:31.704415  476192 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-690916"
	I0116 02:35:31.704424  476192 host.go:66] Checking if "addons-690916" exists ...
	I0116 02:35:31.704439  476192 host.go:66] Checking if "addons-690916" exists ...
	I0116 02:35:31.704452  476192 host.go:66] Checking if "addons-690916" exists ...
	I0116 02:35:31.704458  476192 host.go:66] Checking if "addons-690916" exists ...
	I0116 02:35:31.704818  476192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 02:35:31.704300  476192 addons.go:69] Setting yakd=true in profile "addons-690916"
	I0116 02:35:31.704849  476192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:35:31.704849  476192 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-690916"
	I0116 02:35:31.704861  476192 addons.go:69] Setting volumesnapshots=true in profile "addons-690916"
	I0116 02:35:31.704863  476192 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-690916"
	I0116 02:35:31.704872  476192 addons.go:234] Setting addon volumesnapshots=true in "addons-690916"
	I0116 02:35:31.704880  476192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 02:35:31.704894  476192 host.go:66] Checking if "addons-690916" exists ...
	I0116 02:35:31.704894  476192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 02:35:31.704912  476192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 02:35:31.704917  476192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:35:31.704921  476192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 02:35:31.704928  476192 addons.go:69] Setting registry=true in profile "addons-690916"
	I0116 02:35:31.704882  476192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 02:35:31.704940  476192 addons.go:234] Setting addon registry=true in "addons-690916"
	I0116 02:35:31.704942  476192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:35:31.704959  476192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:35:31.704370  476192 addons.go:234] Setting addon metrics-server=true in "addons-690916"
	I0116 02:35:31.704317  476192 addons.go:69] Setting ingress-dns=true in profile "addons-690916"
	I0116 02:35:31.704985  476192 addons.go:234] Setting addon ingress-dns=true in "addons-690916"
	I0116 02:35:31.704316  476192 addons.go:69] Setting gcp-auth=true in profile "addons-690916"
	I0116 02:35:31.704994  476192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:35:31.705007  476192 mustload.go:65] Loading cluster: addons-690916
	I0116 02:35:31.704914  476192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:35:31.705037  476192 host.go:66] Checking if "addons-690916" exists ...
	I0116 02:35:31.704347  476192 addons.go:234] Setting addon inspektor-gadget=true in "addons-690916"
	I0116 02:35:31.705092  476192 host.go:66] Checking if "addons-690916" exists ...
	I0116 02:35:31.705227  476192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 02:35:31.705271  476192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:35:31.705360  476192 host.go:66] Checking if "addons-690916" exists ...
	I0116 02:35:31.705366  476192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 02:35:31.705385  476192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:35:31.705481  476192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 02:35:31.705504  476192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:35:31.704850  476192 addons.go:234] Setting addon yakd=true in "addons-690916"
	I0116 02:35:31.705577  476192 config.go:182] Loaded profile config "addons-690916": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 02:35:31.705649  476192 host.go:66] Checking if "addons-690916" exists ...
	I0116 02:35:31.705872  476192 addons.go:69] Setting cloud-spanner=true in profile "addons-690916"
	I0116 02:35:31.705918  476192 addons.go:234] Setting addon cloud-spanner=true in "addons-690916"
	I0116 02:35:31.705971  476192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 02:35:31.706010  476192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:35:31.705976  476192 host.go:66] Checking if "addons-690916" exists ...
	I0116 02:35:31.706092  476192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 02:35:31.706125  476192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:35:31.706266  476192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 02:35:31.706301  476192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:35:31.706521  476192 host.go:66] Checking if "addons-690916" exists ...
	I0116 02:35:31.706887  476192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 02:35:31.706917  476192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:35:31.705981  476192 host.go:66] Checking if "addons-690916" exists ...
	I0116 02:35:31.725437  476192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38437
	I0116 02:35:31.725750  476192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34479
	I0116 02:35:31.726142  476192 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:35:31.726468  476192 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:35:31.726656  476192 main.go:141] libmachine: Using API Version  1
	I0116 02:35:31.726670  476192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:35:31.727000  476192 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:35:31.727575  476192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 02:35:31.727604  476192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:35:31.728595  476192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 02:35:31.728645  476192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:35:31.729593  476192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 02:35:31.729654  476192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:35:31.732435  476192 main.go:141] libmachine: Using API Version  1
	I0116 02:35:31.732462  476192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:35:31.732527  476192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38707
	I0116 02:35:31.732557  476192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41961
	I0116 02:35:31.733110  476192 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:35:31.733574  476192 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:35:31.733733  476192 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:35:31.733821  476192 main.go:141] libmachine: (addons-690916) Calling .GetState
	I0116 02:35:31.734150  476192 main.go:141] libmachine: Using API Version  1
	I0116 02:35:31.734175  476192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:35:31.734299  476192 main.go:141] libmachine: Using API Version  1
	I0116 02:35:31.734325  476192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:35:31.734670  476192 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:35:31.735045  476192 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:35:31.735410  476192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 02:35:31.735440  476192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:35:31.737899  476192 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-690916"
	I0116 02:35:31.737961  476192 host.go:66] Checking if "addons-690916" exists ...
	I0116 02:35:31.738393  476192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 02:35:31.738445  476192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:35:31.738810  476192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40837
	I0116 02:35:31.739486  476192 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:35:31.740699  476192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 02:35:31.740752  476192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:35:31.741136  476192 main.go:141] libmachine: Using API Version  1
	I0116 02:35:31.741158  476192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:35:31.742308  476192 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:35:31.742546  476192 main.go:141] libmachine: (addons-690916) Calling .GetState
	I0116 02:35:31.748656  476192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44559
	I0116 02:35:31.749467  476192 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:35:31.750176  476192 main.go:141] libmachine: Using API Version  1
	I0116 02:35:31.750198  476192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:35:31.750602  476192 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:35:31.751161  476192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 02:35:31.751216  476192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:35:31.752731  476192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42531
	I0116 02:35:31.753174  476192 addons.go:234] Setting addon default-storageclass=true in "addons-690916"
	I0116 02:35:31.753224  476192 host.go:66] Checking if "addons-690916" exists ...
	I0116 02:35:31.753334  476192 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:35:31.753645  476192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 02:35:31.753692  476192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:35:31.754070  476192 main.go:141] libmachine: Using API Version  1
	I0116 02:35:31.754088  476192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:35:31.754624  476192 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:35:31.755291  476192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 02:35:31.755341  476192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:35:31.772443  476192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37867
	I0116 02:35:31.773042  476192 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:35:31.773813  476192 main.go:141] libmachine: Using API Version  1
	I0116 02:35:31.773840  476192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:35:31.774296  476192 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:35:31.774921  476192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37239
	I0116 02:35:31.775502  476192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 02:35:31.775555  476192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:35:31.775840  476192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40079
	I0116 02:35:31.776914  476192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44157
	I0116 02:35:31.777097  476192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37665
	I0116 02:35:31.777548  476192 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:35:31.777786  476192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43561
	I0116 02:35:31.777941  476192 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:35:31.777998  476192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36447
	I0116 02:35:31.778148  476192 main.go:141] libmachine: Using API Version  1
	I0116 02:35:31.778175  476192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:35:31.778384  476192 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:35:31.778519  476192 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:35:31.778743  476192 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:35:31.779089  476192 main.go:141] libmachine: Using API Version  1
	I0116 02:35:31.779114  476192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:35:31.779150  476192 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:35:31.779432  476192 main.go:141] libmachine: Using API Version  1
	I0116 02:35:31.779459  476192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:35:31.779573  476192 main.go:141] libmachine: Using API Version  1
	I0116 02:35:31.779603  476192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:35:31.779692  476192 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:35:31.779938  476192 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:35:31.780393  476192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 02:35:31.780429  476192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:35:31.780723  476192 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:35:31.780810  476192 main.go:141] libmachine: Using API Version  1
	I0116 02:35:31.780833  476192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:35:31.781080  476192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 02:35:31.781117  476192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:35:31.781354  476192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 02:35:31.781396  476192 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:35:31.781402  476192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:35:31.781604  476192 main.go:141] libmachine: (addons-690916) Calling .GetState
	I0116 02:35:31.781752  476192 main.go:141] libmachine: Using API Version  1
	I0116 02:35:31.781773  476192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:35:31.781836  476192 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:35:31.782104  476192 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:35:31.782369  476192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 02:35:31.782417  476192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:35:31.784994  476192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37911
	I0116 02:35:31.785479  476192 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:35:31.785944  476192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35273
	I0116 02:35:31.786577  476192 main.go:141] libmachine: Using API Version  1
	I0116 02:35:31.786596  476192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:35:31.786669  476192 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:35:31.787076  476192 host.go:66] Checking if "addons-690916" exists ...
	I0116 02:35:31.787456  476192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 02:35:31.787495  476192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:35:31.787782  476192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33011
	I0116 02:35:31.787805  476192 main.go:141] libmachine: Using API Version  1
	I0116 02:35:31.787820  476192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:35:31.788398  476192 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:35:31.788613  476192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38829
	I0116 02:35:31.788787  476192 main.go:141] libmachine: (addons-690916) Calling .GetState
	I0116 02:35:31.788841  476192 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:35:31.789044  476192 main.go:141] libmachine: (addons-690916) Calling .GetState
	I0116 02:35:31.789252  476192 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:35:31.789989  476192 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:35:31.790516  476192 main.go:141] libmachine: Using API Version  1
	I0116 02:35:31.790529  476192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:35:31.790594  476192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36317
	I0116 02:35:31.790757  476192 main.go:141] libmachine: (addons-690916) Calling .DriverName
	I0116 02:35:31.790893  476192 main.go:141] libmachine: Using API Version  1
	I0116 02:35:31.790909  476192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:35:31.790965  476192 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:35:31.793201  476192 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.3
	I0116 02:35:31.791590  476192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 02:35:31.791615  476192 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:35:31.792043  476192 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:35:31.792630  476192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 02:35:31.794679  476192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:35:31.794774  476192 addons.go:426] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0116 02:35:31.794797  476192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0116 02:35:31.794822  476192 main.go:141] libmachine: (addons-690916) Calling .GetSSHHostname
	I0116 02:35:31.794944  476192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:35:31.795235  476192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34215
	I0116 02:35:31.795413  476192 main.go:141] libmachine: (addons-690916) Calling .GetState
	I0116 02:35:31.795635  476192 main.go:141] libmachine: (addons-690916) Calling .DriverName
	I0116 02:35:31.797690  476192 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 02:35:31.796228  476192 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:35:31.797274  476192 main.go:141] libmachine: Using API Version  1
	I0116 02:35:31.798236  476192 main.go:141] libmachine: (addons-690916) Calling .DriverName
	I0116 02:35:31.799119  476192 main.go:141] libmachine: (addons-690916) DBG | domain addons-690916 has defined MAC address 52:54:00:5c:a7:a7 in network mk-addons-690916
	I0116 02:35:31.799286  476192 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0116 02:35:31.799294  476192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0116 02:35:31.799307  476192 main.go:141] libmachine: (addons-690916) Calling .GetSSHHostname
	I0116 02:35:31.799371  476192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:35:31.801278  476192 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0116 02:35:31.799872  476192 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:35:31.799932  476192 main.go:141] libmachine: (addons-690916) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:a7:a7", ip: ""} in network mk-addons-690916: {Iface:virbr1 ExpiryTime:2024-01-16 03:34:50 +0000 UTC Type:0 Mac:52:54:00:5c:a7:a7 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:addons-690916 Clientid:01:52:54:00:5c:a7:a7}
	I0116 02:35:31.800433  476192 main.go:141] libmachine: Using API Version  1
	I0116 02:35:31.800631  476192 main.go:141] libmachine: (addons-690916) Calling .GetSSHPort
	I0116 02:35:31.802605  476192 main.go:141] libmachine: (addons-690916) DBG | domain addons-690916 has defined MAC address 52:54:00:5c:a7:a7 in network mk-addons-690916
	I0116 02:35:31.804106  476192 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0116 02:35:31.802822  476192 main.go:141] libmachine: (addons-690916) DBG | domain addons-690916 has defined IP address 192.168.39.234 and MAC address 52:54:00:5c:a7:a7 in network mk-addons-690916
	I0116 02:35:31.802903  476192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:35:31.803301  476192 main.go:141] libmachine: (addons-690916) Calling .GetSSHKeyPath
	I0116 02:35:31.803723  476192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 02:35:31.803760  476192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38209
	I0116 02:35:31.804235  476192 main.go:141] libmachine: (addons-690916) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:a7:a7", ip: ""} in network mk-addons-690916: {Iface:virbr1 ExpiryTime:2024-01-16 03:34:50 +0000 UTC Type:0 Mac:52:54:00:5c:a7:a7 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:addons-690916 Clientid:01:52:54:00:5c:a7:a7}
	I0116 02:35:31.804406  476192 main.go:141] libmachine: (addons-690916) Calling .GetSSHPort
	I0116 02:35:31.806971  476192 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0116 02:35:31.805642  476192 main.go:141] libmachine: (addons-690916) Calling .GetSSHKeyPath
	I0116 02:35:31.805681  476192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:35:31.805691  476192 main.go:141] libmachine: (addons-690916) DBG | domain addons-690916 has defined IP address 192.168.39.234 and MAC address 52:54:00:5c:a7:a7 in network mk-addons-690916
	I0116 02:35:31.805831  476192 main.go:141] libmachine: (addons-690916) Calling .GetSSHUsername
	I0116 02:35:31.806140  476192 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:35:31.806371  476192 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:35:31.809833  476192 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0116 02:35:31.808791  476192 main.go:141] libmachine: (addons-690916) Calling .GetSSHUsername
	I0116 02:35:31.808823  476192 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/addons-690916/id_rsa Username:docker}
	I0116 02:35:31.808861  476192 main.go:141] libmachine: (addons-690916) Calling .GetState
	I0116 02:35:31.809460  476192 main.go:141] libmachine: Using API Version  1
	I0116 02:35:31.811207  476192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:35:31.813158  476192 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0116 02:35:31.812173  476192 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/addons-690916/id_rsa Username:docker}
	I0116 02:35:31.812572  476192 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:35:31.813351  476192 main.go:141] libmachine: (addons-690916) Calling .DriverName
	I0116 02:35:31.815836  476192 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0116 02:35:31.815264  476192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 02:35:31.818695  476192 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0116 02:35:31.817315  476192 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0116 02:35:31.817350  476192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:35:31.821663  476192 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0116 02:35:31.820798  476192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36885
	I0116 02:35:31.823130  476192 addons.go:426] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0116 02:35:31.823151  476192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0116 02:35:31.823177  476192 main.go:141] libmachine: (addons-690916) Calling .GetSSHHostname
	I0116 02:35:31.823304  476192 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0116 02:35:31.823317  476192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0116 02:35:31.823331  476192 main.go:141] libmachine: (addons-690916) Calling .GetSSHHostname
	I0116 02:35:31.824297  476192 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:35:31.824982  476192 main.go:141] libmachine: Using API Version  1
	I0116 02:35:31.825004  476192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:35:31.825606  476192 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:35:31.825873  476192 main.go:141] libmachine: (addons-690916) Calling .GetState
	I0116 02:35:31.825895  476192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38131
	I0116 02:35:31.825938  476192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37251
	I0116 02:35:31.826259  476192 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:35:31.826429  476192 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:35:31.826841  476192 main.go:141] libmachine: Using API Version  1
	I0116 02:35:31.826864  476192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:35:31.827004  476192 main.go:141] libmachine: Using API Version  1
	I0116 02:35:31.827023  476192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:35:31.827183  476192 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:35:31.827466  476192 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:35:31.827532  476192 main.go:141] libmachine: (addons-690916) Calling .GetState
	I0116 02:35:31.827636  476192 main.go:141] libmachine: (addons-690916) Calling .GetState
	I0116 02:35:31.828269  476192 main.go:141] libmachine: (addons-690916) DBG | domain addons-690916 has defined MAC address 52:54:00:5c:a7:a7 in network mk-addons-690916
	I0116 02:35:31.828692  476192 main.go:141] libmachine: (addons-690916) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:a7:a7", ip: ""} in network mk-addons-690916: {Iface:virbr1 ExpiryTime:2024-01-16 03:34:50 +0000 UTC Type:0 Mac:52:54:00:5c:a7:a7 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:addons-690916 Clientid:01:52:54:00:5c:a7:a7}
	I0116 02:35:31.828713  476192 main.go:141] libmachine: (addons-690916) DBG | domain addons-690916 has defined IP address 192.168.39.234 and MAC address 52:54:00:5c:a7:a7 in network mk-addons-690916
	I0116 02:35:31.828968  476192 main.go:141] libmachine: (addons-690916) Calling .GetSSHPort
	I0116 02:35:31.829128  476192 main.go:141] libmachine: (addons-690916) Calling .GetSSHKeyPath
	I0116 02:35:31.829260  476192 main.go:141] libmachine: (addons-690916) Calling .GetSSHUsername
	I0116 02:35:31.829429  476192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43125
	I0116 02:35:31.829614  476192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37311
	I0116 02:35:31.829644  476192 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/addons-690916/id_rsa Username:docker}
	I0116 02:35:31.829851  476192 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:35:31.829874  476192 main.go:141] libmachine: (addons-690916) Calling .DriverName
	I0116 02:35:31.830034  476192 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:35:31.830147  476192 main.go:141] libmachine: (addons-690916) Calling .DriverName
	I0116 02:35:31.830296  476192 main.go:141] libmachine: Using API Version  1
	I0116 02:35:31.832556  476192 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0116 02:35:31.830310  476192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:35:31.830522  476192 main.go:141] libmachine: (addons-690916) Calling .DriverName
	I0116 02:35:31.831049  476192 main.go:141] libmachine: Using API Version  1
	I0116 02:35:31.831658  476192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40539
	I0116 02:35:31.833861  476192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44377
	I0116 02:35:31.834364  476192 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0116 02:35:31.834907  476192 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:35:31.835741  476192 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I0116 02:35:31.835775  476192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:35:31.835801  476192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0116 02:35:31.836310  476192 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:35:31.836323  476192 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:35:31.837239  476192 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I0116 02:35:31.837259  476192 main.go:141] libmachine: (addons-690916) Calling .GetSSHHostname
	I0116 02:35:31.837713  476192 main.go:141] libmachine: Using API Version  1
	I0116 02:35:31.837961  476192 main.go:141] libmachine: (addons-690916) DBG | domain addons-690916 has defined MAC address 52:54:00:5c:a7:a7 in network mk-addons-690916
	I0116 02:35:31.838802  476192 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I0116 02:35:31.838833  476192 main.go:141] libmachine: (addons-690916) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:a7:a7", ip: ""} in network mk-addons-690916: {Iface:virbr1 ExpiryTime:2024-01-16 03:34:50 +0000 UTC Type:0 Mac:52:54:00:5c:a7:a7 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:addons-690916 Clientid:01:52:54:00:5c:a7:a7}
	I0116 02:35:31.838550  476192 main.go:141] libmachine: (addons-690916) Calling .GetSSHPort
	I0116 02:35:31.838835  476192 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:35:31.837967  476192 main.go:141] libmachine: (addons-690916) Calling .GetState
	I0116 02:35:31.838893  476192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:35:31.839318  476192 main.go:141] libmachine: Using API Version  1
	I0116 02:35:31.840513  476192 main.go:141] libmachine: (addons-690916) DBG | domain addons-690916 has defined IP address 192.168.39.234 and MAC address 52:54:00:5c:a7:a7 in network mk-addons-690916
	I0116 02:35:31.840527  476192 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.9.5
	I0116 02:35:31.840788  476192 main.go:141] libmachine: (addons-690916) Calling .GetSSHKeyPath
	I0116 02:35:31.840822  476192 main.go:141] libmachine: (addons-690916) Calling .GetState
	I0116 02:35:31.841130  476192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40991
	I0116 02:35:31.842160  476192 main.go:141] libmachine: (addons-690916) DBG | domain addons-690916 has defined MAC address 52:54:00:5c:a7:a7 in network mk-addons-690916
	I0116 02:35:31.842162  476192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:35:31.842178  476192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33613
	I0116 02:35:31.842292  476192 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0116 02:35:31.843662  476192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0116 02:35:31.843684  476192 main.go:141] libmachine: (addons-690916) Calling .GetSSHHostname
	I0116 02:35:31.843427  476192 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:35:31.843441  476192 main.go:141] libmachine: (addons-690916) Calling .GetSSHPort
	I0116 02:35:31.843461  476192 main.go:141] libmachine: (addons-690916) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:a7:a7", ip: ""} in network mk-addons-690916: {Iface:virbr1 ExpiryTime:2024-01-16 03:34:50 +0000 UTC Type:0 Mac:52:54:00:5c:a7:a7 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:addons-690916 Clientid:01:52:54:00:5c:a7:a7}
	I0116 02:35:31.843844  476192 main.go:141] libmachine: (addons-690916) DBG | domain addons-690916 has defined IP address 192.168.39.234 and MAC address 52:54:00:5c:a7:a7 in network mk-addons-690916
	I0116 02:35:31.843461  476192 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:35:31.843878  476192 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0116 02:35:31.843888  476192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16103 bytes)
	I0116 02:35:31.843475  476192 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:35:31.843904  476192 main.go:141] libmachine: (addons-690916) Calling .GetSSHHostname
	I0116 02:35:31.843495  476192 main.go:141] libmachine: (addons-690916) Calling .GetSSHUsername
	I0116 02:35:31.843940  476192 main.go:141] libmachine: (addons-690916) Calling .GetState
	I0116 02:35:31.844534  476192 main.go:141] libmachine: (addons-690916) Calling .GetSSHKeyPath
	I0116 02:35:31.844542  476192 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:35:31.844615  476192 main.go:141] libmachine: (addons-690916) Calling .DriverName
	I0116 02:35:31.844609  476192 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/addons-690916/id_rsa Username:docker}
	I0116 02:35:31.845386  476192 main.go:141] libmachine: (addons-690916) Calling .GetSSHUsername
	I0116 02:35:31.845561  476192 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/addons-690916/id_rsa Username:docker}
	I0116 02:35:31.846149  476192 main.go:141] libmachine: (addons-690916) Calling .DriverName
	I0116 02:35:31.846649  476192 main.go:141] libmachine: (addons-690916) Calling .DriverName
	I0116 02:35:31.848607  476192 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.13
	I0116 02:35:31.847434  476192 main.go:141] libmachine: (addons-690916) Calling .DriverName
	I0116 02:35:31.847491  476192 main.go:141] libmachine: (addons-690916) DBG | domain addons-690916 has defined MAC address 52:54:00:5c:a7:a7 in network mk-addons-690916
	I0116 02:35:31.847663  476192 main.go:141] libmachine: Using API Version  1
	I0116 02:35:31.848141  476192 main.go:141] libmachine: Using API Version  1
	I0116 02:35:31.848444  476192 main.go:141] libmachine: (addons-690916) Calling .GetSSHPort
	I0116 02:35:31.849206  476192 main.go:141] libmachine: (addons-690916) DBG | domain addons-690916 has defined MAC address 52:54:00:5c:a7:a7 in network mk-addons-690916
	I0116 02:35:31.849909  476192 main.go:141] libmachine: (addons-690916) Calling .GetSSHPort
	I0116 02:35:31.850497  476192 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0116 02:35:31.852188  476192 addons.go:426] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0116 02:35:31.852206  476192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0116 02:35:31.852226  476192 main.go:141] libmachine: (addons-690916) Calling .GetSSHHostname
	I0116 02:35:31.850500  476192 addons.go:426] installing /etc/kubernetes/addons/deployment.yaml
	I0116 02:35:31.852280  476192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0116 02:35:31.852307  476192 main.go:141] libmachine: (addons-690916) Calling .GetSSHHostname
	I0116 02:35:31.850541  476192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:35:31.850621  476192 main.go:141] libmachine: (addons-690916) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:a7:a7", ip: ""} in network mk-addons-690916: {Iface:virbr1 ExpiryTime:2024-01-16 03:34:50 +0000 UTC Type:0 Mac:52:54:00:5c:a7:a7 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:addons-690916 Clientid:01:52:54:00:5c:a7:a7}
	I0116 02:35:31.852382  476192 main.go:141] libmachine: (addons-690916) DBG | domain addons-690916 has defined IP address 192.168.39.234 and MAC address 52:54:00:5c:a7:a7 in network mk-addons-690916
	I0116 02:35:31.850668  476192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:35:31.850690  476192 main.go:141] libmachine: (addons-690916) Calling .GetSSHKeyPath
	I0116 02:35:31.850701  476192 main.go:141] libmachine: (addons-690916) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:a7:a7", ip: ""} in network mk-addons-690916: {Iface:virbr1 ExpiryTime:2024-01-16 03:34:50 +0000 UTC Type:0 Mac:52:54:00:5c:a7:a7 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:addons-690916 Clientid:01:52:54:00:5c:a7:a7}
	I0116 02:35:31.850818  476192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38213
	I0116 02:35:31.850998  476192 main.go:141] libmachine: (addons-690916) Calling .GetSSHKeyPath
	I0116 02:35:31.851887  476192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40785
	I0116 02:35:31.852808  476192 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:35:31.854087  476192 out.go:177]   - Using image docker.io/registry:2.8.3
	I0116 02:35:31.852825  476192 main.go:141] libmachine: (addons-690916) DBG | domain addons-690916 has defined IP address 192.168.39.234 and MAC address 52:54:00:5c:a7:a7 in network mk-addons-690916
	I0116 02:35:31.853934  476192 main.go:141] libmachine: (addons-690916) Calling .GetSSHUsername
	I0116 02:35:31.853905  476192 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:35:31.854378  476192 main.go:141] libmachine: (addons-690916) Calling .GetSSHUsername
	I0116 02:35:31.854575  476192 main.go:141] libmachine: (addons-690916) Calling .GetState
	I0116 02:35:31.854949  476192 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:35:31.854999  476192 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:35:31.856301  476192 main.go:141] libmachine: (addons-690916) DBG | domain addons-690916 has defined MAC address 52:54:00:5c:a7:a7 in network mk-addons-690916
	I0116 02:35:31.856358  476192 main.go:141] libmachine: (addons-690916) DBG | domain addons-690916 has defined MAC address 52:54:00:5c:a7:a7 in network mk-addons-690916
	I0116 02:35:31.858150  476192 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I0116 02:35:31.859808  476192 addons.go:426] installing /etc/kubernetes/addons/registry-rc.yaml
	I0116 02:35:31.859827  476192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0116 02:35:31.858181  476192 main.go:141] libmachine: (addons-690916) Calling .DriverName
	I0116 02:35:31.859848  476192 main.go:141] libmachine: (addons-690916) Calling .GetSSHHostname
	I0116 02:35:31.856746  476192 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/addons-690916/id_rsa Username:docker}
	I0116 02:35:31.856955  476192 main.go:141] libmachine: (addons-690916) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:a7:a7", ip: ""} in network mk-addons-690916: {Iface:virbr1 ExpiryTime:2024-01-16 03:34:50 +0000 UTC Type:0 Mac:52:54:00:5c:a7:a7 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:addons-690916 Clientid:01:52:54:00:5c:a7:a7}
	I0116 02:35:31.860193  476192 main.go:141] libmachine: (addons-690916) DBG | domain addons-690916 has defined IP address 192.168.39.234 and MAC address 52:54:00:5c:a7:a7 in network mk-addons-690916
	I0116 02:35:31.857096  476192 main.go:141] libmachine: Using API Version  1
	I0116 02:35:31.861777  476192 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.23.1
	I0116 02:35:31.860228  476192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:35:31.857235  476192 main.go:141] libmachine: (addons-690916) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:a7:a7", ip: ""} in network mk-addons-690916: {Iface:virbr1 ExpiryTime:2024-01-16 03:34:50 +0000 UTC Type:0 Mac:52:54:00:5c:a7:a7 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:addons-690916 Clientid:01:52:54:00:5c:a7:a7}
	I0116 02:35:31.857263  476192 main.go:141] libmachine: (addons-690916) Calling .GetSSHPort
	I0116 02:35:31.857393  476192 main.go:141] libmachine: (addons-690916) Calling .GetSSHPort
	I0116 02:35:31.857713  476192 main.go:141] libmachine: Using API Version  1
	I0116 02:35:31.856726  476192 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/addons-690916/id_rsa Username:docker}
	I0116 02:35:31.857155  476192 main.go:141] libmachine: (addons-690916) Calling .GetState
	I0116 02:35:31.863260  476192 main.go:141] libmachine: (addons-690916) DBG | domain addons-690916 has defined MAC address 52:54:00:5c:a7:a7 in network mk-addons-690916
	I0116 02:35:31.863362  476192 addons.go:426] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0116 02:35:31.863375  476192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0116 02:35:31.863392  476192 main.go:141] libmachine: (addons-690916) Calling .GetSSHHostname
	I0116 02:35:31.863416  476192 main.go:141] libmachine: (addons-690916) DBG | domain addons-690916 has defined IP address 192.168.39.234 and MAC address 52:54:00:5c:a7:a7 in network mk-addons-690916
	I0116 02:35:31.863435  476192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:35:31.863613  476192 main.go:141] libmachine: (addons-690916) Calling .GetSSHKeyPath
	I0116 02:35:31.863751  476192 main.go:141] libmachine: (addons-690916) Calling .GetSSHUsername
	I0116 02:35:31.863849  476192 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/addons-690916/id_rsa Username:docker}
	I0116 02:35:31.863982  476192 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:35:31.864253  476192 main.go:141] libmachine: (addons-690916) Calling .GetState
	I0116 02:35:31.864485  476192 main.go:141] libmachine: (addons-690916) Calling .GetSSHKeyPath
	I0116 02:35:31.864784  476192 main.go:141] libmachine: (addons-690916) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:a7:a7", ip: ""} in network mk-addons-690916: {Iface:virbr1 ExpiryTime:2024-01-16 03:34:50 +0000 UTC Type:0 Mac:52:54:00:5c:a7:a7 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:addons-690916 Clientid:01:52:54:00:5c:a7:a7}
	I0116 02:35:31.864807  476192 main.go:141] libmachine: (addons-690916) DBG | domain addons-690916 has defined IP address 192.168.39.234 and MAC address 52:54:00:5c:a7:a7 in network mk-addons-690916
	I0116 02:35:31.865081  476192 main.go:141] libmachine: (addons-690916) Calling .GetSSHPort
	I0116 02:35:31.865259  476192 main.go:141] libmachine: (addons-690916) Calling .GetSSHKeyPath
	I0116 02:35:31.865398  476192 main.go:141] libmachine: (addons-690916) Calling .GetSSHUsername
	I0116 02:35:31.865403  476192 main.go:141] libmachine: (addons-690916) Calling .GetSSHUsername
	I0116 02:35:31.865731  476192 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:35:31.865801  476192 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/addons-690916/id_rsa Username:docker}
	I0116 02:35:31.866142  476192 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/addons-690916/id_rsa Username:docker}
	I0116 02:35:31.866465  476192 main.go:141] libmachine: (addons-690916) Calling .DriverName
	I0116 02:35:31.866537  476192 main.go:141] libmachine: (addons-690916) Calling .DriverName
	I0116 02:35:31.866799  476192 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0116 02:35:31.866819  476192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0116 02:35:31.868603  476192 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0116 02:35:31.866842  476192 main.go:141] libmachine: (addons-690916) Calling .GetSSHHostname
	I0116 02:35:31.867104  476192 main.go:141] libmachine: (addons-690916) Calling .GetState
	I0116 02:35:31.867197  476192 main.go:141] libmachine: (addons-690916) DBG | domain addons-690916 has defined MAC address 52:54:00:5c:a7:a7 in network mk-addons-690916
	I0116 02:35:31.867817  476192 main.go:141] libmachine: (addons-690916) Calling .GetSSHPort
	I0116 02:35:31.871582  476192 out.go:177]   - Using image docker.io/busybox:stable
	I0116 02:35:31.870213  476192 main.go:141] libmachine: (addons-690916) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:a7:a7", ip: ""} in network mk-addons-690916: {Iface:virbr1 ExpiryTime:2024-01-16 03:34:50 +0000 UTC Type:0 Mac:52:54:00:5c:a7:a7 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:addons-690916 Clientid:01:52:54:00:5c:a7:a7}
	I0116 02:35:31.870408  476192 main.go:141] libmachine: (addons-690916) Calling .GetSSHKeyPath
	I0116 02:35:31.872404  476192 main.go:141] libmachine: (addons-690916) Calling .DriverName
	I0116 02:35:31.873128  476192 main.go:141] libmachine: (addons-690916) DBG | domain addons-690916 has defined IP address 192.168.39.234 and MAC address 52:54:00:5c:a7:a7 in network mk-addons-690916
	I0116 02:35:31.873219  476192 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0116 02:35:31.873241  476192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0116 02:35:31.873262  476192 main.go:141] libmachine: (addons-690916) Calling .GetSSHHostname
	I0116 02:35:31.873711  476192 main.go:141] libmachine: (addons-690916) DBG | domain addons-690916 has defined MAC address 52:54:00:5c:a7:a7 in network mk-addons-690916
	I0116 02:35:31.874402  476192 main.go:141] libmachine: (addons-690916) Calling .GetSSHUsername
	I0116 02:35:31.874579  476192 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/addons-690916/id_rsa Username:docker}
	I0116 02:35:31.874850  476192 main.go:141] libmachine: (addons-690916) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:a7:a7", ip: ""} in network mk-addons-690916: {Iface:virbr1 ExpiryTime:2024-01-16 03:34:50 +0000 UTC Type:0 Mac:52:54:00:5c:a7:a7 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:addons-690916 Clientid:01:52:54:00:5c:a7:a7}
	I0116 02:35:31.874873  476192 main.go:141] libmachine: (addons-690916) DBG | domain addons-690916 has defined IP address 192.168.39.234 and MAC address 52:54:00:5c:a7:a7 in network mk-addons-690916
	I0116 02:35:31.875038  476192 main.go:141] libmachine: (addons-690916) Calling .GetSSHPort
	I0116 02:35:31.875246  476192 main.go:141] libmachine: (addons-690916) Calling .GetSSHKeyPath
	I0116 02:35:31.875464  476192 main.go:141] libmachine: (addons-690916) Calling .GetSSHUsername
	I0116 02:35:31.875619  476192 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/addons-690916/id_rsa Username:docker}
	I0116 02:35:31.878444  476192 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	W0116 02:35:31.877031  476192 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:58602->192.168.39.234:22: read: connection reset by peer
	I0116 02:35:31.877209  476192 main.go:141] libmachine: (addons-690916) DBG | domain addons-690916 has defined MAC address 52:54:00:5c:a7:a7 in network mk-addons-690916
	I0116 02:35:31.877816  476192 main.go:141] libmachine: (addons-690916) Calling .GetSSHPort
	I0116 02:35:31.880141  476192 retry.go:31] will retry after 259.949899ms: ssh: handshake failed: read tcp 192.168.39.1:58602->192.168.39.234:22: read: connection reset by peer
	I0116 02:35:31.880152  476192 main.go:141] libmachine: (addons-690916) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:a7:a7", ip: ""} in network mk-addons-690916: {Iface:virbr1 ExpiryTime:2024-01-16 03:34:50 +0000 UTC Type:0 Mac:52:54:00:5c:a7:a7 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:addons-690916 Clientid:01:52:54:00:5c:a7:a7}
	I0116 02:35:31.880195  476192 main.go:141] libmachine: (addons-690916) DBG | domain addons-690916 has defined IP address 192.168.39.234 and MAC address 52:54:00:5c:a7:a7 in network mk-addons-690916
	I0116 02:35:31.880254  476192 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0116 02:35:31.880270  476192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0116 02:35:31.880286  476192 main.go:141] libmachine: (addons-690916) Calling .GetSSHHostname
	I0116 02:35:31.880349  476192 main.go:141] libmachine: (addons-690916) Calling .GetSSHKeyPath
	I0116 02:35:31.880557  476192 main.go:141] libmachine: (addons-690916) Calling .GetSSHUsername
	I0116 02:35:31.880749  476192 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/addons-690916/id_rsa Username:docker}
	I0116 02:35:31.883443  476192 main.go:141] libmachine: (addons-690916) DBG | domain addons-690916 has defined MAC address 52:54:00:5c:a7:a7 in network mk-addons-690916
	I0116 02:35:31.883902  476192 main.go:141] libmachine: (addons-690916) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:a7:a7", ip: ""} in network mk-addons-690916: {Iface:virbr1 ExpiryTime:2024-01-16 03:34:50 +0000 UTC Type:0 Mac:52:54:00:5c:a7:a7 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:addons-690916 Clientid:01:52:54:00:5c:a7:a7}
	I0116 02:35:31.883933  476192 main.go:141] libmachine: (addons-690916) DBG | domain addons-690916 has defined IP address 192.168.39.234 and MAC address 52:54:00:5c:a7:a7 in network mk-addons-690916
	I0116 02:35:31.884096  476192 main.go:141] libmachine: (addons-690916) Calling .GetSSHPort
	I0116 02:35:31.884319  476192 main.go:141] libmachine: (addons-690916) Calling .GetSSHKeyPath
	I0116 02:35:31.884485  476192 main.go:141] libmachine: (addons-690916) Calling .GetSSHUsername
	I0116 02:35:31.884642  476192 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/addons-690916/id_rsa Username:docker}
	I0116 02:35:32.039415  476192 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0116 02:35:32.074464  476192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0116 02:35:32.074692  476192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0116 02:35:32.249275  476192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0116 02:35:32.270728  476192 addons.go:426] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0116 02:35:32.270755  476192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0116 02:35:32.313084  476192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0116 02:35:32.317653  476192 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0116 02:35:32.317679  476192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0116 02:35:32.323256  476192 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0116 02:35:32.323277  476192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0116 02:35:32.327608  476192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0116 02:35:32.330225  476192 addons.go:426] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0116 02:35:32.330256  476192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0116 02:35:32.336696  476192 addons.go:426] installing /etc/kubernetes/addons/registry-svc.yaml
	I0116 02:35:32.336728  476192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0116 02:35:32.340231  476192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0116 02:35:32.358146  476192 addons.go:426] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0116 02:35:32.358170  476192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0116 02:35:32.372521  476192 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0116 02:35:32.372551  476192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0116 02:35:32.402353  476192 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-690916" context rescaled to 1 replicas
	I0116 02:35:32.402427  476192 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.234 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0116 02:35:32.405370  476192 out.go:177] * Verifying Kubernetes components...
	I0116 02:35:32.407012  476192 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 02:35:32.428544  476192 addons.go:426] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0116 02:35:32.428575  476192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0116 02:35:32.611158  476192 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0116 02:35:32.611192  476192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0116 02:35:32.665089  476192 addons.go:426] installing /etc/kubernetes/addons/ig-role.yaml
	I0116 02:35:32.665122  476192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0116 02:35:32.675675  476192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0116 02:35:32.687030  476192 addons.go:426] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0116 02:35:32.687057  476192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0116 02:35:32.694727  476192 addons.go:426] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0116 02:35:32.694750  476192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0116 02:35:32.696850  476192 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0116 02:35:32.696882  476192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0116 02:35:32.699915  476192 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0116 02:35:32.699944  476192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0116 02:35:32.717321  476192 addons.go:426] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0116 02:35:32.717354  476192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0116 02:35:32.736962  476192 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0116 02:35:32.736995  476192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0116 02:35:32.797408  476192 addons.go:426] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0116 02:35:32.797439  476192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0116 02:35:32.882106  476192 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0116 02:35:32.882139  476192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0116 02:35:32.889081  476192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0116 02:35:32.893613  476192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0116 02:35:32.918437  476192 addons.go:426] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0116 02:35:32.918466  476192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0116 02:35:32.923570  476192 addons.go:426] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0116 02:35:32.923592  476192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0116 02:35:32.944263  476192 addons.go:426] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0116 02:35:32.944288  476192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0116 02:35:32.956611  476192 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0116 02:35:32.956633  476192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0116 02:35:33.007107  476192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0116 02:35:33.040680  476192 addons.go:426] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0116 02:35:33.040715  476192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0116 02:35:33.070675  476192 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0116 02:35:33.070703  476192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0116 02:35:33.070741  476192 addons.go:426] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0116 02:35:33.070761  476192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0116 02:35:33.083641  476192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0116 02:35:33.132645  476192 addons.go:426] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0116 02:35:33.132679  476192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0116 02:35:33.165966  476192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0116 02:35:33.167920  476192 addons.go:426] installing /etc/kubernetes/addons/ig-crd.yaml
	I0116 02:35:33.167942  476192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0116 02:35:33.224057  476192 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0116 02:35:33.224103  476192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0116 02:35:33.241886  476192 addons.go:426] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0116 02:35:33.241922  476192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0116 02:35:33.305697  476192 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0116 02:35:33.305724  476192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0116 02:35:33.360442  476192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0116 02:35:33.382287  476192 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0116 02:35:33.382331  476192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0116 02:35:33.461594  476192 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0116 02:35:33.461631  476192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0116 02:35:33.537893  476192 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0116 02:35:33.537928  476192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0116 02:35:33.588476  476192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0116 02:35:36.863507  476192 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (4.82404206s)
	I0116 02:35:36.863543  476192 start.go:929] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0116 02:35:36.863591  476192 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.789086677s)
	I0116 02:35:36.863644  476192 main.go:141] libmachine: Making call to close driver server
	I0116 02:35:36.863665  476192 main.go:141] libmachine: (addons-690916) Calling .Close
	I0116 02:35:36.863872  476192 main.go:141] libmachine: Successfully made call to close driver server
	I0116 02:35:36.863892  476192 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 02:35:36.863900  476192 main.go:141] libmachine: (addons-690916) DBG | Closing plugin on server side
	I0116 02:35:36.863912  476192 main.go:141] libmachine: Making call to close driver server
	I0116 02:35:36.863925  476192 main.go:141] libmachine: (addons-690916) Calling .Close
	I0116 02:35:36.864239  476192 main.go:141] libmachine: Successfully made call to close driver server
	I0116 02:35:36.864253  476192 main.go:141] libmachine: (addons-690916) DBG | Closing plugin on server side
	I0116 02:35:36.864262  476192 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 02:35:39.621853  476192 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0116 02:35:39.621954  476192 main.go:141] libmachine: (addons-690916) Calling .GetSSHHostname
	I0116 02:35:39.625615  476192 main.go:141] libmachine: (addons-690916) DBG | domain addons-690916 has defined MAC address 52:54:00:5c:a7:a7 in network mk-addons-690916
	I0116 02:35:39.626136  476192 main.go:141] libmachine: (addons-690916) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:a7:a7", ip: ""} in network mk-addons-690916: {Iface:virbr1 ExpiryTime:2024-01-16 03:34:50 +0000 UTC Type:0 Mac:52:54:00:5c:a7:a7 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:addons-690916 Clientid:01:52:54:00:5c:a7:a7}
	I0116 02:35:39.626231  476192 main.go:141] libmachine: (addons-690916) DBG | domain addons-690916 has defined IP address 192.168.39.234 and MAC address 52:54:00:5c:a7:a7 in network mk-addons-690916
	I0116 02:35:39.626370  476192 main.go:141] libmachine: (addons-690916) Calling .GetSSHPort
	I0116 02:35:39.626706  476192 main.go:141] libmachine: (addons-690916) Calling .GetSSHKeyPath
	I0116 02:35:39.626908  476192 main.go:141] libmachine: (addons-690916) Calling .GetSSHUsername
	I0116 02:35:39.627120  476192 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/addons-690916/id_rsa Username:docker}
	I0116 02:35:39.839168  476192 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0116 02:35:39.853467  476192 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (7.604144669s)
	I0116 02:35:39.853539  476192 main.go:141] libmachine: Making call to close driver server
	I0116 02:35:39.853553  476192 main.go:141] libmachine: (addons-690916) Calling .Close
	I0116 02:35:39.853581  476192 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.778850716s)
	I0116 02:35:39.853654  476192 main.go:141] libmachine: Making call to close driver server
	I0116 02:35:39.853670  476192 main.go:141] libmachine: (addons-690916) Calling .Close
	I0116 02:35:39.853884  476192 main.go:141] libmachine: Successfully made call to close driver server
	I0116 02:35:39.853912  476192 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 02:35:39.853924  476192 main.go:141] libmachine: Making call to close driver server
	I0116 02:35:39.853934  476192 main.go:141] libmachine: (addons-690916) Calling .Close
	I0116 02:35:39.854026  476192 main.go:141] libmachine: (addons-690916) DBG | Closing plugin on server side
	I0116 02:35:39.854077  476192 main.go:141] libmachine: Successfully made call to close driver server
	I0116 02:35:39.854086  476192 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 02:35:39.854107  476192 main.go:141] libmachine: Making call to close driver server
	I0116 02:35:39.854115  476192 main.go:141] libmachine: (addons-690916) Calling .Close
	I0116 02:35:39.854186  476192 main.go:141] libmachine: (addons-690916) DBG | Closing plugin on server side
	I0116 02:35:39.854225  476192 main.go:141] libmachine: Successfully made call to close driver server
	I0116 02:35:39.854241  476192 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 02:35:39.854498  476192 main.go:141] libmachine: (addons-690916) DBG | Closing plugin on server side
	I0116 02:35:39.854515  476192 main.go:141] libmachine: Successfully made call to close driver server
	I0116 02:35:39.854529  476192 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 02:35:39.925633  476192 addons.go:234] Setting addon gcp-auth=true in "addons-690916"
	I0116 02:35:39.925704  476192 host.go:66] Checking if "addons-690916" exists ...
	I0116 02:35:39.926138  476192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 02:35:39.926180  476192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:35:39.941623  476192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42795
	I0116 02:35:39.942151  476192 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:35:39.942849  476192 main.go:141] libmachine: Using API Version  1
	I0116 02:35:39.942879  476192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:35:39.943333  476192 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:35:39.944146  476192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 02:35:39.944196  476192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:35:39.960617  476192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44421
	I0116 02:35:39.961221  476192 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:35:39.961792  476192 main.go:141] libmachine: Using API Version  1
	I0116 02:35:39.961824  476192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:35:39.962225  476192 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:35:39.962453  476192 main.go:141] libmachine: (addons-690916) Calling .GetState
	I0116 02:35:39.964576  476192 main.go:141] libmachine: (addons-690916) Calling .DriverName
	I0116 02:35:39.964867  476192 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0116 02:35:39.964913  476192 main.go:141] libmachine: (addons-690916) Calling .GetSSHHostname
	I0116 02:35:39.967940  476192 main.go:141] libmachine: (addons-690916) DBG | domain addons-690916 has defined MAC address 52:54:00:5c:a7:a7 in network mk-addons-690916
	I0116 02:35:39.968353  476192 main.go:141] libmachine: (addons-690916) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:a7:a7", ip: ""} in network mk-addons-690916: {Iface:virbr1 ExpiryTime:2024-01-16 03:34:50 +0000 UTC Type:0 Mac:52:54:00:5c:a7:a7 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:addons-690916 Clientid:01:52:54:00:5c:a7:a7}
	I0116 02:35:39.968383  476192 main.go:141] libmachine: (addons-690916) DBG | domain addons-690916 has defined IP address 192.168.39.234 and MAC address 52:54:00:5c:a7:a7 in network mk-addons-690916
	I0116 02:35:39.968586  476192 main.go:141] libmachine: (addons-690916) Calling .GetSSHPort
	I0116 02:35:39.968870  476192 main.go:141] libmachine: (addons-690916) Calling .GetSSHKeyPath
	I0116 02:35:39.969061  476192 main.go:141] libmachine: (addons-690916) Calling .GetSSHUsername
	I0116 02:35:39.969247  476192 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/addons-690916/id_rsa Username:docker}
	I0116 02:35:40.130673  476192 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.817542736s)
	I0116 02:35:40.130738  476192 main.go:141] libmachine: Making call to close driver server
	I0116 02:35:40.130753  476192 main.go:141] libmachine: (addons-690916) Calling .Close
	I0116 02:35:40.131181  476192 main.go:141] libmachine: Successfully made call to close driver server
	I0116 02:35:40.131204  476192 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 02:35:40.131218  476192 main.go:141] libmachine: Making call to close driver server
	I0116 02:35:40.131215  476192 main.go:141] libmachine: (addons-690916) DBG | Closing plugin on server side
	I0116 02:35:40.131231  476192 main.go:141] libmachine: (addons-690916) Calling .Close
	I0116 02:35:40.131580  476192 main.go:141] libmachine: Successfully made call to close driver server
	I0116 02:35:40.131598  476192 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 02:35:40.424344  476192 main.go:141] libmachine: Making call to close driver server
	I0116 02:35:40.424376  476192 main.go:141] libmachine: (addons-690916) Calling .Close
	I0116 02:35:40.424735  476192 main.go:141] libmachine: Successfully made call to close driver server
	I0116 02:35:40.424760  476192 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 02:35:42.080803  476192 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (9.753147244s)
	I0116 02:35:42.080866  476192 main.go:141] libmachine: Making call to close driver server
	I0116 02:35:42.080868  476192 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (9.740596749s)
	I0116 02:35:42.080925  476192 main.go:141] libmachine: Making call to close driver server
	I0116 02:35:42.080928  476192 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (9.673880482s)
	I0116 02:35:42.080942  476192 main.go:141] libmachine: (addons-690916) Calling .Close
	I0116 02:35:42.080880  476192 main.go:141] libmachine: (addons-690916) Calling .Close
	I0116 02:35:42.080969  476192 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (9.405251274s)
	I0116 02:35:42.080994  476192 main.go:141] libmachine: Making call to close driver server
	I0116 02:35:42.081009  476192 main.go:141] libmachine: (addons-690916) Calling .Close
	I0116 02:35:42.081027  476192 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (9.191913192s)
	I0116 02:35:42.081045  476192 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (9.187403765s)
	I0116 02:35:42.081056  476192 main.go:141] libmachine: Making call to close driver server
	I0116 02:35:42.081064  476192 main.go:141] libmachine: Making call to close driver server
	I0116 02:35:42.081071  476192 main.go:141] libmachine: (addons-690916) Calling .Close
	I0116 02:35:42.081075  476192 main.go:141] libmachine: (addons-690916) Calling .Close
	I0116 02:35:42.081137  476192 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (9.073983123s)
	I0116 02:35:42.081157  476192 main.go:141] libmachine: Making call to close driver server
	I0116 02:35:42.081170  476192 main.go:141] libmachine: (addons-690916) Calling .Close
	I0116 02:35:42.081240  476192 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (8.997560476s)
	I0116 02:35:42.081269  476192 main.go:141] libmachine: Making call to close driver server
	I0116 02:35:42.081280  476192 main.go:141] libmachine: (addons-690916) Calling .Close
	I0116 02:35:42.081490  476192 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (8.915488999s)
	W0116 02:35:42.081525  476192 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0116 02:35:42.081548  476192 retry.go:31] will retry after 273.644409ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0116 02:35:42.081633  476192 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (8.721157997s)
	I0116 02:35:42.081656  476192 main.go:141] libmachine: Making call to close driver server
	I0116 02:35:42.081665  476192 main.go:141] libmachine: (addons-690916) Calling .Close
	I0116 02:35:42.082111  476192 node_ready.go:35] waiting up to 6m0s for node "addons-690916" to be "Ready" ...
	I0116 02:35:42.083808  476192 main.go:141] libmachine: (addons-690916) DBG | Closing plugin on server side
	I0116 02:35:42.083825  476192 main.go:141] libmachine: Successfully made call to close driver server
	I0116 02:35:42.083833  476192 main.go:141] libmachine: (addons-690916) DBG | Closing plugin on server side
	I0116 02:35:42.083840  476192 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 02:35:42.083851  476192 main.go:141] libmachine: Making call to close driver server
	I0116 02:35:42.083860  476192 main.go:141] libmachine: (addons-690916) Calling .Close
	I0116 02:35:42.083874  476192 main.go:141] libmachine: Successfully made call to close driver server
	I0116 02:35:42.083882  476192 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 02:35:42.083893  476192 main.go:141] libmachine: Making call to close driver server
	I0116 02:35:42.083902  476192 main.go:141] libmachine: (addons-690916) Calling .Close
	I0116 02:35:42.084025  476192 main.go:141] libmachine: Successfully made call to close driver server
	I0116 02:35:42.084050  476192 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 02:35:42.084060  476192 main.go:141] libmachine: Making call to close driver server
	I0116 02:35:42.084068  476192 main.go:141] libmachine: (addons-690916) Calling .Close
	I0116 02:35:42.084106  476192 main.go:141] libmachine: Successfully made call to close driver server
	I0116 02:35:42.084120  476192 main.go:141] libmachine: Successfully made call to close driver server
	I0116 02:35:42.084121  476192 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 02:35:42.084128  476192 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 02:35:42.084138  476192 main.go:141] libmachine: Making call to close driver server
	I0116 02:35:42.084145  476192 main.go:141] libmachine: (addons-690916) Calling .Close
	I0116 02:35:42.084193  476192 main.go:141] libmachine: (addons-690916) DBG | Closing plugin on server side
	I0116 02:35:42.084215  476192 main.go:141] libmachine: Successfully made call to close driver server
	I0116 02:35:42.084231  476192 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 02:35:42.084241  476192 main.go:141] libmachine: Making call to close driver server
	I0116 02:35:42.084248  476192 main.go:141] libmachine: (addons-690916) Calling .Close
	I0116 02:35:42.084305  476192 main.go:141] libmachine: (addons-690916) DBG | Closing plugin on server side
	I0116 02:35:42.084337  476192 main.go:141] libmachine: Successfully made call to close driver server
	I0116 02:35:42.084346  476192 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 02:35:42.084744  476192 main.go:141] libmachine: (addons-690916) DBG | Closing plugin on server side
	I0116 02:35:42.084771  476192 main.go:141] libmachine: (addons-690916) DBG | Closing plugin on server side
	I0116 02:35:42.084797  476192 main.go:141] libmachine: Successfully made call to close driver server
	I0116 02:35:42.084806  476192 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 02:35:42.084816  476192 addons.go:470] Verifying addon metrics-server=true in "addons-690916"
	I0116 02:35:42.084859  476192 main.go:141] libmachine: (addons-690916) DBG | Closing plugin on server side
	I0116 02:35:42.084881  476192 main.go:141] libmachine: Successfully made call to close driver server
	I0116 02:35:42.084890  476192 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 02:35:42.084895  476192 addons.go:470] Verifying addon registry=true in "addons-690916"
	I0116 02:35:42.086965  476192 out.go:177] * Verifying registry addon...
	I0116 02:35:42.085204  476192 main.go:141] libmachine: (addons-690916) DBG | Closing plugin on server side
	I0116 02:35:42.085233  476192 main.go:141] libmachine: (addons-690916) DBG | Closing plugin on server side
	I0116 02:35:42.085272  476192 main.go:141] libmachine: Successfully made call to close driver server
	I0116 02:35:42.085299  476192 main.go:141] libmachine: Successfully made call to close driver server
	I0116 02:35:42.085317  476192 main.go:141] libmachine: (addons-690916) DBG | Closing plugin on server side
	I0116 02:35:42.085340  476192 main.go:141] libmachine: Successfully made call to close driver server
	I0116 02:35:42.085366  476192 main.go:141] libmachine: Successfully made call to close driver server
	I0116 02:35:42.088400  476192 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 02:35:42.088415  476192 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 02:35:42.088431  476192 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 02:35:42.088449  476192 main.go:141] libmachine: Making call to close driver server
	I0116 02:35:42.088434  476192 main.go:141] libmachine: Making call to close driver server
	I0116 02:35:42.088458  476192 main.go:141] libmachine: (addons-690916) Calling .Close
	I0116 02:35:42.088478  476192 main.go:141] libmachine: (addons-690916) Calling .Close
	I0116 02:35:42.088420  476192 main.go:141] libmachine: Making call to close driver server
	I0116 02:35:42.088523  476192 main.go:141] libmachine: (addons-690916) Calling .Close
	I0116 02:35:42.088448  476192 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 02:35:42.090230  476192 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-690916 service yakd-dashboard -n yakd-dashboard
	
	I0116 02:35:42.088897  476192 main.go:141] libmachine: (addons-690916) DBG | Closing plugin on server side
	I0116 02:35:42.088901  476192 main.go:141] libmachine: (addons-690916) DBG | Closing plugin on server side
	I0116 02:35:42.088917  476192 main.go:141] libmachine: Successfully made call to close driver server
	I0116 02:35:42.088921  476192 main.go:141] libmachine: (addons-690916) DBG | Closing plugin on server side
	I0116 02:35:42.088939  476192 main.go:141] libmachine: Successfully made call to close driver server
	I0116 02:35:42.088969  476192 main.go:141] libmachine: Successfully made call to close driver server
	I0116 02:35:42.089495  476192 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0116 02:35:42.091653  476192 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 02:35:42.091660  476192 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 02:35:42.091674  476192 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 02:35:42.091688  476192 addons.go:470] Verifying addon ingress=true in "addons-690916"
	I0116 02:35:42.093214  476192 out.go:177] * Verifying ingress addon...
	I0116 02:35:42.095798  476192 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0116 02:35:42.141767  476192 node_ready.go:49] node "addons-690916" has status "Ready":"True"
	I0116 02:35:42.141796  476192 node_ready.go:38] duration metric: took 59.655032ms waiting for node "addons-690916" to be "Ready" ...
	I0116 02:35:42.141809  476192 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 02:35:42.158546  476192 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0116 02:35:42.158579  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:35:42.169829  476192 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0116 02:35:42.169853  476192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:35:42.183742  476192 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-sx897" in "kube-system" namespace to be "Ready" ...
	I0116 02:35:42.187303  476192 main.go:141] libmachine: Making call to close driver server
	I0116 02:35:42.187332  476192 main.go:141] libmachine: (addons-690916) Calling .Close
	I0116 02:35:42.187696  476192 main.go:141] libmachine: Successfully made call to close driver server
	I0116 02:35:42.187724  476192 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 02:35:42.187758  476192 main.go:141] libmachine: (addons-690916) DBG | Closing plugin on server side
	I0116 02:35:42.356311  476192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0116 02:35:42.675777  476192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:35:42.690526  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:35:43.024714  476192 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.059819163s)
	I0116 02:35:43.026492  476192 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I0116 02:35:43.025126  476192 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (9.436566535s)
	I0116 02:35:43.028327  476192 main.go:141] libmachine: Making call to close driver server
	I0116 02:35:43.028348  476192 main.go:141] libmachine: (addons-690916) Calling .Close
	I0116 02:35:43.031714  476192 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I0116 02:35:43.028796  476192 main.go:141] libmachine: Successfully made call to close driver server
	I0116 02:35:43.028802  476192 main.go:141] libmachine: (addons-690916) DBG | Closing plugin on server side
	I0116 02:35:43.033069  476192 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 02:35:43.033081  476192 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0116 02:35:43.033098  476192 main.go:141] libmachine: Making call to close driver server
	I0116 02:35:43.033109  476192 main.go:141] libmachine: (addons-690916) Calling .Close
	I0116 02:35:43.033097  476192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0116 02:35:43.033474  476192 main.go:141] libmachine: Successfully made call to close driver server
	I0116 02:35:43.033492  476192 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 02:35:43.033505  476192 addons.go:470] Verifying addon csi-hostpath-driver=true in "addons-690916"
	I0116 02:35:43.035094  476192 out.go:177] * Verifying csi-hostpath-driver addon...
	I0116 02:35:43.037215  476192 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0116 02:35:43.076453  476192 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0116 02:35:43.076479  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:35:43.123784  476192 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0116 02:35:43.123810  476192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0116 02:35:43.148229  476192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:35:43.165475  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:35:43.205656  476192 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0116 02:35:43.205692  476192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5432 bytes)
	I0116 02:35:43.279788  476192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0116 02:35:43.569166  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:35:43.600053  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:35:43.617355  476192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:35:44.143525  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:35:44.184466  476192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:35:44.197715  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:35:44.206626  476192 pod_ready.go:102] pod "coredns-5dd5756b68-sx897" in "kube-system" namespace has status "Ready":"False"
	I0116 02:35:44.547725  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:35:44.601359  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:35:44.612716  476192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:35:45.090707  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:35:45.110745  476192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:35:45.115876  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:35:45.537952  476192 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.181583637s)
	I0116 02:35:45.538095  476192 main.go:141] libmachine: Making call to close driver server
	I0116 02:35:45.538127  476192 main.go:141] libmachine: (addons-690916) Calling .Close
	I0116 02:35:45.538440  476192 main.go:141] libmachine: Successfully made call to close driver server
	I0116 02:35:45.538462  476192 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 02:35:45.538475  476192 main.go:141] libmachine: Making call to close driver server
	I0116 02:35:45.538487  476192 main.go:141] libmachine: (addons-690916) Calling .Close
	I0116 02:35:45.538743  476192 main.go:141] libmachine: Successfully made call to close driver server
	I0116 02:35:45.538759  476192 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 02:35:45.597749  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:35:45.622312  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:35:45.623073  476192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:35:45.693671  476192 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (2.413827002s)
	I0116 02:35:45.693734  476192 main.go:141] libmachine: Making call to close driver server
	I0116 02:35:45.693749  476192 main.go:141] libmachine: (addons-690916) Calling .Close
	I0116 02:35:45.694126  476192 main.go:141] libmachine: (addons-690916) DBG | Closing plugin on server side
	I0116 02:35:45.694171  476192 main.go:141] libmachine: Successfully made call to close driver server
	I0116 02:35:45.694186  476192 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 02:35:45.694200  476192 main.go:141] libmachine: Making call to close driver server
	I0116 02:35:45.694210  476192 main.go:141] libmachine: (addons-690916) Calling .Close
	I0116 02:35:45.694450  476192 main.go:141] libmachine: Successfully made call to close driver server
	I0116 02:35:45.694468  476192 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 02:35:45.694488  476192 main.go:141] libmachine: (addons-690916) DBG | Closing plugin on server side
	I0116 02:35:45.695677  476192 addons.go:470] Verifying addon gcp-auth=true in "addons-690916"
	I0116 02:35:45.697934  476192 out.go:177] * Verifying gcp-auth addon...
	I0116 02:35:45.700144  476192 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0116 02:35:45.764170  476192 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0116 02:35:45.764201  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:35:46.056228  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:35:46.116993  476192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:35:46.117182  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:35:46.210250  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:35:46.543381  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:35:46.601968  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:35:46.604264  476192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:35:46.694730  476192 pod_ready.go:102] pod "coredns-5dd5756b68-sx897" in "kube-system" namespace has status "Ready":"False"
	I0116 02:35:46.714719  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:35:47.046266  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:35:47.107835  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:35:47.110062  476192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:35:47.203832  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:35:47.545730  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:35:47.598970  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:35:47.603906  476192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:35:47.709590  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:35:48.049963  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:35:48.136645  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:35:48.140743  476192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:35:48.210690  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:35:48.554417  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:35:48.612947  476192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:35:48.620971  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:35:48.701888  476192 pod_ready.go:102] pod "coredns-5dd5756b68-sx897" in "kube-system" namespace has status "Ready":"False"
	I0116 02:35:48.710480  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:35:49.054516  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:35:49.107352  476192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:35:49.115187  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:35:49.205729  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:35:49.546972  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:35:49.607686  476192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:35:49.608112  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:35:49.704957  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:35:50.043873  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:35:50.103160  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:35:50.113671  476192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:35:50.204081  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:35:50.550297  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:35:50.607927  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:35:50.608096  476192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:35:50.704929  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:35:51.051460  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:35:51.100971  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:35:51.103780  476192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:35:51.192989  476192 pod_ready.go:102] pod "coredns-5dd5756b68-sx897" in "kube-system" namespace has status "Ready":"False"
	I0116 02:35:51.209488  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:35:51.545260  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:35:51.609471  476192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:35:51.615586  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:35:51.707040  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:35:52.052329  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:35:52.098765  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:35:52.102453  476192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:35:52.464493  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:35:52.544387  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:35:52.603709  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:35:52.610493  476192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:35:52.708099  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:35:53.048937  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:35:53.096495  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:35:53.101287  476192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:35:53.194034  476192 pod_ready.go:102] pod "coredns-5dd5756b68-sx897" in "kube-system" namespace has status "Ready":"False"
	I0116 02:35:53.204318  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:35:53.562566  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:35:53.601590  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:35:53.607682  476192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:35:53.710622  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:35:54.043630  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:35:54.096419  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:35:54.099703  476192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:35:54.207655  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:35:54.545575  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:35:54.596496  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:35:54.600674  476192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:35:54.712593  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:35:55.045232  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:35:55.112352  476192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:35:55.113814  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:35:55.197023  476192 pod_ready.go:102] pod "coredns-5dd5756b68-sx897" in "kube-system" namespace has status "Ready":"False"
	I0116 02:35:55.209350  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:35:55.551876  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:35:55.616154  476192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:35:55.620141  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:35:55.739168  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:35:55.746947  476192 pod_ready.go:92] pod "coredns-5dd5756b68-sx897" in "kube-system" namespace has status "Ready":"True"
	I0116 02:35:55.746978  476192 pod_ready.go:81] duration metric: took 13.563203246s waiting for pod "coredns-5dd5756b68-sx897" in "kube-system" namespace to be "Ready" ...
	I0116 02:35:55.746992  476192 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-690916" in "kube-system" namespace to be "Ready" ...
	I0116 02:35:55.763380  476192 pod_ready.go:92] pod "etcd-addons-690916" in "kube-system" namespace has status "Ready":"True"
	I0116 02:35:55.763411  476192 pod_ready.go:81] duration metric: took 16.410726ms waiting for pod "etcd-addons-690916" in "kube-system" namespace to be "Ready" ...
	I0116 02:35:55.763424  476192 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-690916" in "kube-system" namespace to be "Ready" ...
	I0116 02:35:55.783095  476192 pod_ready.go:92] pod "kube-apiserver-addons-690916" in "kube-system" namespace has status "Ready":"True"
	I0116 02:35:55.783126  476192 pod_ready.go:81] duration metric: took 19.69336ms waiting for pod "kube-apiserver-addons-690916" in "kube-system" namespace to be "Ready" ...
	I0116 02:35:55.783139  476192 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-690916" in "kube-system" namespace to be "Ready" ...
	I0116 02:35:55.803708  476192 pod_ready.go:92] pod "kube-controller-manager-addons-690916" in "kube-system" namespace has status "Ready":"True"
	I0116 02:35:55.803735  476192 pod_ready.go:81] duration metric: took 20.587335ms waiting for pod "kube-controller-manager-addons-690916" in "kube-system" namespace to be "Ready" ...
	I0116 02:35:55.803751  476192 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-xmxx2" in "kube-system" namespace to be "Ready" ...
	I0116 02:35:55.819233  476192 pod_ready.go:92] pod "kube-proxy-xmxx2" in "kube-system" namespace has status "Ready":"True"
	I0116 02:35:55.819262  476192 pod_ready.go:81] duration metric: took 15.50383ms waiting for pod "kube-proxy-xmxx2" in "kube-system" namespace to be "Ready" ...
	I0116 02:35:55.819274  476192 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-690916" in "kube-system" namespace to be "Ready" ...
	I0116 02:35:56.063552  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:35:56.094566  476192 pod_ready.go:92] pod "kube-scheduler-addons-690916" in "kube-system" namespace has status "Ready":"True"
	I0116 02:35:56.094595  476192 pod_ready.go:81] duration metric: took 275.313011ms waiting for pod "kube-scheduler-addons-690916" in "kube-system" namespace to be "Ready" ...
	I0116 02:35:56.094606  476192 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-7c66d45ddc-9nqgd" in "kube-system" namespace to be "Ready" ...
	I0116 02:35:56.098077  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:35:56.102103  476192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:35:56.206635  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:35:56.546611  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:35:56.598387  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:35:56.602091  476192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:35:56.709227  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:35:57.062168  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:35:57.110224  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:35:57.110315  476192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:35:57.204785  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:35:57.544267  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:35:57.602306  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:35:57.605766  476192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:35:57.704397  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:35:58.061168  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:35:58.124158  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:35:58.124158  476192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:35:58.148885  476192 pod_ready.go:102] pod "metrics-server-7c66d45ddc-9nqgd" in "kube-system" namespace has status "Ready":"False"
	I0116 02:35:58.219534  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:35:58.552498  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:35:58.613815  476192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:35:58.639015  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:35:58.716370  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:35:59.053471  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:35:59.110792  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:35:59.134341  476192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:35:59.206674  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:35:59.548602  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:35:59.613551  476192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:35:59.615818  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:35:59.711836  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:36:00.045052  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:36:00.097944  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:36:00.103322  476192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:36:00.300662  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:36:00.624195  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:36:00.627681  476192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:36:00.631954  476192 pod_ready.go:102] pod "metrics-server-7c66d45ddc-9nqgd" in "kube-system" namespace has status "Ready":"False"
	I0116 02:36:00.636310  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:36:00.704722  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:36:01.044497  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:36:01.100974  476192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:36:01.113212  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:36:01.205355  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:36:01.543942  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:36:01.604296  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:36:01.625378  476192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:36:01.705328  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:36:02.047095  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:36:02.100070  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:36:02.103149  476192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:36:02.205233  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:36:02.787643  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:36:02.788265  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:36:02.788960  476192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:36:02.792508  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:36:02.809384  476192 pod_ready.go:102] pod "metrics-server-7c66d45ddc-9nqgd" in "kube-system" namespace has status "Ready":"False"
	I0116 02:36:03.050937  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:36:03.101364  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:36:03.102505  476192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:36:03.204422  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:36:03.547769  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:36:03.598295  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:36:03.609754  476192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:36:03.705913  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:36:04.044000  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:36:04.099358  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:36:04.102374  476192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:36:04.204769  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:36:04.548023  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:36:04.611491  476192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:36:04.613205  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:36:04.712601  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:36:05.044503  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:36:05.097302  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:36:05.106677  476192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:36:05.115124  476192 pod_ready.go:102] pod "metrics-server-7c66d45ddc-9nqgd" in "kube-system" namespace has status "Ready":"False"
	I0116 02:36:05.207277  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:36:05.543772  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:36:05.598511  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:36:05.601499  476192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:36:05.704388  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:36:06.043362  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:36:06.097736  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:36:06.100809  476192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:36:06.204464  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:36:06.543459  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:36:06.598932  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:36:06.603346  476192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:36:06.704448  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:36:07.044611  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:36:07.099771  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:36:07.103350  476192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:36:07.206967  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:36:07.543593  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:36:07.597183  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:36:07.602898  476192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:36:07.609316  476192 pod_ready.go:102] pod "metrics-server-7c66d45ddc-9nqgd" in "kube-system" namespace has status "Ready":"False"
	I0116 02:36:07.716897  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:36:08.044369  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:36:08.101719  476192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:36:08.105821  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:36:08.206421  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:36:08.547397  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:36:08.596337  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:36:08.600585  476192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:36:08.704410  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:36:09.044271  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:36:09.097154  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:36:09.102640  476192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:36:09.205372  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:36:09.543634  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:36:09.597881  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:36:09.600538  476192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:36:09.704364  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:36:10.047786  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:36:10.109112  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:36:10.110145  476192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:36:10.116276  476192 pod_ready.go:102] pod "metrics-server-7c66d45ddc-9nqgd" in "kube-system" namespace has status "Ready":"False"
	I0116 02:36:10.206688  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:36:10.543214  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:36:10.597173  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:36:10.602865  476192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:36:10.703911  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:36:11.047425  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:36:11.096705  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:36:11.100568  476192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:36:11.209116  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:36:11.545041  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:36:11.597621  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:36:11.603689  476192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:36:11.704871  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:36:12.045014  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:36:12.098960  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:36:12.103964  476192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:36:12.211943  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:36:12.544476  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:36:12.597707  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:36:12.606360  476192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:36:12.609243  476192 pod_ready.go:102] pod "metrics-server-7c66d45ddc-9nqgd" in "kube-system" namespace has status "Ready":"False"
	I0116 02:36:12.736150  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:36:13.044225  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:36:13.097157  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:36:13.117605  476192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:36:13.205146  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:36:13.543998  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:36:13.597318  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:36:13.602517  476192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:36:13.706717  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:36:14.045035  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:36:14.104419  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:36:14.105142  476192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:36:14.108110  476192 pod_ready.go:92] pod "metrics-server-7c66d45ddc-9nqgd" in "kube-system" namespace has status "Ready":"True"
	I0116 02:36:14.108137  476192 pod_ready.go:81] duration metric: took 18.013524607s waiting for pod "metrics-server-7c66d45ddc-9nqgd" in "kube-system" namespace to be "Ready" ...
	I0116 02:36:14.108150  476192 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-p8gsf" in "kube-system" namespace to be "Ready" ...
	I0116 02:36:14.207767  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:36:14.545375  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:36:14.602822  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:36:14.605938  476192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:36:14.705432  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:36:15.047025  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:36:15.097716  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:36:15.103465  476192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:36:15.206388  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:36:15.543839  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:36:15.599646  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:36:15.601434  476192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:36:15.705488  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:36:16.043658  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:36:16.113577  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:36:16.113891  476192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:36:16.157937  476192 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-p8gsf" in "kube-system" namespace has status "Ready":"False"
	I0116 02:36:16.204892  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:36:16.545749  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:36:16.596707  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:36:16.603922  476192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:36:16.704993  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:36:17.044431  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:36:17.098959  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:36:17.101393  476192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:36:17.204546  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:36:17.543989  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:36:17.597972  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:36:17.601964  476192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:36:17.705280  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:36:18.043782  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:36:18.099385  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:36:18.102358  476192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:36:18.205144  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:36:18.543857  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:36:18.597826  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:36:18.600934  476192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:36:18.614396  476192 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-p8gsf" in "kube-system" namespace has status "Ready":"False"
	I0116 02:36:18.704005  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:36:19.046179  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:36:19.097761  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:36:19.104440  476192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:36:19.204393  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:36:19.543617  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:36:19.596815  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:36:19.599795  476192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:36:19.704258  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:36:20.047484  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:36:20.481976  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:36:20.482300  476192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:36:20.482419  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:36:20.544549  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:36:20.599560  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:36:20.603997  476192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:36:20.626418  476192 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-p8gsf" in "kube-system" namespace has status "Ready":"False"
	I0116 02:36:20.704832  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:36:21.044113  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:36:21.097640  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:36:21.101590  476192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:36:21.206010  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:36:21.546035  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:36:21.597689  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:36:21.603767  476192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:36:21.713089  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:36:22.043187  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:36:22.099181  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:36:22.101724  476192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:36:22.205612  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:36:22.545000  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:36:22.600620  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:36:22.612263  476192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:36:22.713933  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:36:23.043588  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:36:23.099206  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:36:23.131291  476192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:36:23.154127  476192 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-p8gsf" in "kube-system" namespace has status "Ready":"False"
	I0116 02:36:23.204995  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:36:23.544612  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:36:23.597100  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:36:23.601199  476192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:36:23.704304  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:36:24.047678  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:36:24.143409  476192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:36:24.164819  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:36:24.206326  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:36:24.543646  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:36:24.596721  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:36:24.600008  476192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:36:24.705563  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:36:25.044538  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:36:25.100367  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:36:25.124226  476192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:36:25.272412  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:36:25.543461  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:36:25.600640  476192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:36:25.601589  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:36:25.614933  476192 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-p8gsf" in "kube-system" namespace has status "Ready":"False"
	I0116 02:36:25.704636  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:36:26.044634  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:36:26.106155  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:36:26.109589  476192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:36:26.556561  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:36:26.588861  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:36:26.620599  476192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:36:26.638034  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:36:26.704185  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:36:27.043536  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:36:27.104715  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:36:27.105030  476192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:36:27.204614  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:36:27.543842  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:36:27.597298  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:36:27.600660  476192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:36:27.617237  476192 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-p8gsf" in "kube-system" namespace has status "Ready":"False"
	I0116 02:36:27.704724  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:36:28.043701  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:36:28.103295  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:36:28.105480  476192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:36:28.226162  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:36:28.548298  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:36:28.598460  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:36:28.601341  476192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:36:28.704298  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:36:29.044880  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:36:29.097379  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:36:29.100790  476192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:36:29.204586  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:36:29.543804  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:36:29.597806  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:36:29.602567  476192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:36:29.704362  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:36:30.048703  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:36:30.098355  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:36:30.101295  476192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:36:30.119799  476192 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-p8gsf" in "kube-system" namespace has status "Ready":"False"
	I0116 02:36:30.220257  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:36:30.545043  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:36:30.600760  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:36:30.601701  476192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:36:30.704869  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:36:31.044266  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:36:31.097833  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:36:31.102418  476192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:36:31.204411  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:36:31.542660  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:36:31.600466  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:36:31.602262  476192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:36:31.705412  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:36:32.043374  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:36:32.097721  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:36:32.101030  476192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:36:32.207258  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:36:32.543362  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:36:32.599661  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:36:32.601876  476192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:36:32.615569  476192 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-p8gsf" in "kube-system" namespace has status "Ready":"False"
	I0116 02:36:32.704013  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:36:33.046091  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:36:33.097664  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:36:33.101140  476192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:36:33.207231  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:36:33.547338  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:36:33.599192  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:36:33.601651  476192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:36:33.705012  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:36:34.048050  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:36:34.100881  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:36:34.103662  476192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:36:34.205602  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:36:34.545025  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:36:34.597688  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:36:34.605804  476192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:36:34.620826  476192 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-p8gsf" in "kube-system" namespace has status "Ready":"False"
	I0116 02:36:34.705222  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:36:35.044488  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:36:35.096439  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:36:35.100010  476192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:36:35.204390  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:36:35.544847  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:36:35.599820  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:36:35.604361  476192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:36:35.705722  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:36:36.043338  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:36:36.097799  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:36:36.103798  476192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:36:36.204837  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:36:36.544835  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:36:36.598771  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:36:36.600614  476192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:36:36.621454  476192 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-p8gsf" in "kube-system" namespace has status "Ready":"False"
	I0116 02:36:36.703961  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:36:37.045286  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:36:37.097664  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:36:37.102948  476192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:36:37.204335  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:36:37.543147  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:36:37.597347  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:36:37.600449  476192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:36:37.704683  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:36:38.204255  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:36:38.204349  476192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:36:38.204557  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:36:38.302352  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:36:38.544319  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:36:38.601261  476192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:36:38.602795  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:36:38.704227  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:36:39.044500  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:36:39.097369  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:36:39.100275  476192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:36:39.115045  476192 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-p8gsf" in "kube-system" namespace has status "Ready":"False"
	I0116 02:36:39.205358  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:36:39.547862  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:36:39.601996  476192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:36:39.602441  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:36:39.704839  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:36:40.043327  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:36:40.098944  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:36:40.100741  476192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:36:40.204491  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:36:40.543264  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:36:40.601493  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:36:40.601857  476192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:36:40.706810  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:36:41.045013  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:36:41.104537  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:36:41.110648  476192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:36:41.122484  476192 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-p8gsf" in "kube-system" namespace has status "Ready":"False"
	I0116 02:36:41.204383  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:36:41.611588  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:36:41.638117  476192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:36:41.642913  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:36:41.704683  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:36:42.049719  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:36:42.099731  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:36:42.101779  476192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:36:42.204587  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:36:42.552749  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:36:42.602524  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:36:42.605849  476192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:36:42.618375  476192 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-p8gsf" in "kube-system" namespace has status "Ready":"True"
	I0116 02:36:42.618398  476192 pod_ready.go:81] duration metric: took 28.51024014s waiting for pod "nvidia-device-plugin-daemonset-p8gsf" in "kube-system" namespace to be "Ready" ...
	I0116 02:36:42.618416  476192 pod_ready.go:38] duration metric: took 1m0.47659772s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 02:36:42.618434  476192 api_server.go:52] waiting for apiserver process to appear ...
	I0116 02:36:42.618466  476192 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0116 02:36:42.618552  476192 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0116 02:36:42.689722  476192 cri.go:89] found id: "842fe1c35d9878a3736740b28b8dfddb4c69474338c89befd7e7bd1550543ff5"
	I0116 02:36:42.689759  476192 cri.go:89] found id: ""
	I0116 02:36:42.689769  476192 logs.go:284] 1 containers: [842fe1c35d9878a3736740b28b8dfddb4c69474338c89befd7e7bd1550543ff5]
	I0116 02:36:42.689827  476192 ssh_runner.go:195] Run: which crictl
	I0116 02:36:42.694814  476192 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0116 02:36:42.694876  476192 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0116 02:36:42.705426  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:36:42.754769  476192 cri.go:89] found id: "ae1a246302684304eb4b0325a78d21d75927c673f5bf6dab42f5f6a7e13a1724"
	I0116 02:36:42.754789  476192 cri.go:89] found id: ""
	I0116 02:36:42.754796  476192 logs.go:284] 1 containers: [ae1a246302684304eb4b0325a78d21d75927c673f5bf6dab42f5f6a7e13a1724]
	I0116 02:36:42.754852  476192 ssh_runner.go:195] Run: which crictl
	I0116 02:36:42.759228  476192 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0116 02:36:42.759307  476192 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0116 02:36:42.808384  476192 cri.go:89] found id: "6f7f3c925be8f812052f6b159e0958056ba30f6cc1a7b57d33b9d33f4f537bab"
	I0116 02:36:42.808415  476192 cri.go:89] found id: ""
	I0116 02:36:42.808427  476192 logs.go:284] 1 containers: [6f7f3c925be8f812052f6b159e0958056ba30f6cc1a7b57d33b9d33f4f537bab]
	I0116 02:36:42.808497  476192 ssh_runner.go:195] Run: which crictl
	I0116 02:36:42.813173  476192 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0116 02:36:42.813250  476192 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0116 02:36:42.856118  476192 cri.go:89] found id: "2018f9809e3cb61e90992f22bec6870c1dd349160485490c8f8b562369ccd92b"
	I0116 02:36:42.856151  476192 cri.go:89] found id: ""
	I0116 02:36:42.856164  476192 logs.go:284] 1 containers: [2018f9809e3cb61e90992f22bec6870c1dd349160485490c8f8b562369ccd92b]
	I0116 02:36:42.856233  476192 ssh_runner.go:195] Run: which crictl
	I0116 02:36:42.860714  476192 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0116 02:36:42.860806  476192 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0116 02:36:42.909303  476192 cri.go:89] found id: "184c590322abeab5df31eaa66f8a2c1dae362cbb1a6a48269c17dd14ce24bb80"
	I0116 02:36:42.909330  476192 cri.go:89] found id: ""
	I0116 02:36:42.909339  476192 logs.go:284] 1 containers: [184c590322abeab5df31eaa66f8a2c1dae362cbb1a6a48269c17dd14ce24bb80]
	I0116 02:36:42.909391  476192 ssh_runner.go:195] Run: which crictl
	I0116 02:36:42.914977  476192 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0116 02:36:42.915044  476192 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0116 02:36:42.970344  476192 cri.go:89] found id: "3ad9676ac4a1b651ca9f614f61a4e788cd41ef1a58cb447881e85401e9e9065d"
	I0116 02:36:42.970369  476192 cri.go:89] found id: ""
	I0116 02:36:42.970379  476192 logs.go:284] 1 containers: [3ad9676ac4a1b651ca9f614f61a4e788cd41ef1a58cb447881e85401e9e9065d]
	I0116 02:36:42.970448  476192 ssh_runner.go:195] Run: which crictl
	I0116 02:36:42.974999  476192 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0116 02:36:42.975063  476192 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0116 02:36:43.024427  476192 cri.go:89] found id: ""
	I0116 02:36:43.024457  476192 logs.go:284] 0 containers: []
	W0116 02:36:43.024466  476192 logs.go:286] No container was found matching "kindnet"
	I0116 02:36:43.024478  476192 logs.go:123] Gathering logs for dmesg ...
	I0116 02:36:43.024539  476192 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0116 02:36:43.044779  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:36:43.048612  476192 logs.go:123] Gathering logs for kube-apiserver [842fe1c35d9878a3736740b28b8dfddb4c69474338c89befd7e7bd1550543ff5] ...
	I0116 02:36:43.048647  476192 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 842fe1c35d9878a3736740b28b8dfddb4c69474338c89befd7e7bd1550543ff5"
	I0116 02:36:43.102198  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:36:43.107808  476192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:36:43.155021  476192 logs.go:123] Gathering logs for etcd [ae1a246302684304eb4b0325a78d21d75927c673f5bf6dab42f5f6a7e13a1724] ...
	I0116 02:36:43.155084  476192 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ae1a246302684304eb4b0325a78d21d75927c673f5bf6dab42f5f6a7e13a1724"
	I0116 02:36:43.205125  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:36:43.233637  476192 logs.go:123] Gathering logs for kube-proxy [184c590322abeab5df31eaa66f8a2c1dae362cbb1a6a48269c17dd14ce24bb80] ...
	I0116 02:36:43.233682  476192 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 184c590322abeab5df31eaa66f8a2c1dae362cbb1a6a48269c17dd14ce24bb80"
	I0116 02:36:43.278780  476192 logs.go:123] Gathering logs for kube-controller-manager [3ad9676ac4a1b651ca9f614f61a4e788cd41ef1a58cb447881e85401e9e9065d] ...
	I0116 02:36:43.278814  476192 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3ad9676ac4a1b651ca9f614f61a4e788cd41ef1a58cb447881e85401e9e9065d"
	I0116 02:36:43.342659  476192 logs.go:123] Gathering logs for CRI-O ...
	I0116 02:36:43.342711  476192 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0116 02:36:43.544288  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:36:43.597319  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:36:43.603011  476192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:36:43.635332  476192 logs.go:123] Gathering logs for container status ...
	I0116 02:36:43.635388  476192 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0116 02:36:43.704282  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:36:43.741714  476192 logs.go:123] Gathering logs for kubelet ...
	I0116 02:36:43.741758  476192 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0116 02:36:43.795265  476192 logs.go:138] Found kubelet problem: Jan 16 02:35:33 addons-690916 kubelet[1250]: W0116 02:35:33.555525    1250 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-690916" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-690916' and this object
	W0116 02:36:43.795431  476192 logs.go:138] Found kubelet problem: Jan 16 02:35:33 addons-690916 kubelet[1250]: E0116 02:35:33.555577    1250 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-690916" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-690916' and this object
	W0116 02:36:43.812028  476192 logs.go:138] Found kubelet problem: Jan 16 02:35:45 addons-690916 kubelet[1250]: W0116 02:35:45.700056    1250 reflector.go:535] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-690916" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-690916' and this object
	W0116 02:36:43.812212  476192 logs.go:138] Found kubelet problem: Jan 16 02:35:45 addons-690916 kubelet[1250]: E0116 02:35:45.700091    1250 reflector.go:147] object-"gcp-auth"/"gcp-auth-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-690916" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-690916' and this object
	I0116 02:36:43.827531  476192 logs.go:123] Gathering logs for describe nodes ...
	I0116 02:36:43.827558  476192 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0116 02:36:44.000827  476192 logs.go:123] Gathering logs for coredns [6f7f3c925be8f812052f6b159e0958056ba30f6cc1a7b57d33b9d33f4f537bab] ...
	I0116 02:36:44.000857  476192 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6f7f3c925be8f812052f6b159e0958056ba30f6cc1a7b57d33b9d33f4f537bab"
	I0116 02:36:44.041849  476192 logs.go:123] Gathering logs for kube-scheduler [2018f9809e3cb61e90992f22bec6870c1dd349160485490c8f8b562369ccd92b] ...
	I0116 02:36:44.041891  476192 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2018f9809e3cb61e90992f22bec6870c1dd349160485490c8f8b562369ccd92b"
	I0116 02:36:44.076569  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:36:44.107449  476192 out.go:309] Setting ErrFile to fd 2...
	I0116 02:36:44.107489  476192 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0116 02:36:44.107594  476192 out.go:239] X Problems detected in kubelet:
	W0116 02:36:44.107612  476192 out.go:239]   Jan 16 02:35:33 addons-690916 kubelet[1250]: W0116 02:35:33.555525    1250 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-690916" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-690916' and this object
	W0116 02:36:44.107625  476192 out.go:239]   Jan 16 02:35:33 addons-690916 kubelet[1250]: E0116 02:35:33.555577    1250 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-690916" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-690916' and this object
	W0116 02:36:44.107640  476192 out.go:239]   Jan 16 02:35:45 addons-690916 kubelet[1250]: W0116 02:35:45.700056    1250 reflector.go:535] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-690916" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-690916' and this object
	W0116 02:36:44.107655  476192 out.go:239]   Jan 16 02:35:45 addons-690916 kubelet[1250]: E0116 02:35:45.700091    1250 reflector.go:147] object-"gcp-auth"/"gcp-auth-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-690916" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-690916' and this object
	I0116 02:36:44.107663  476192 out.go:309] Setting ErrFile to fd 2...
	I0116 02:36:44.107671  476192 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 02:36:44.123117  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:36:44.123880  476192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:36:44.218358  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:36:44.544338  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:36:44.598356  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:36:44.600891  476192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:36:44.704514  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:36:45.043452  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:36:45.096295  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:36:45.102924  476192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:36:45.204335  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:36:45.545016  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:36:45.597214  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:36:45.602350  476192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:36:45.705410  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:36:46.044029  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:36:46.097608  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:36:46.101543  476192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:36:46.204881  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:36:46.543751  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:36:46.601301  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:36:46.602503  476192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:36:46.706238  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:36:47.218253  476192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:36:47.221898  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:36:47.226891  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:36:47.270449  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:36:47.543388  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:36:47.597685  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:36:47.602208  476192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:36:47.704532  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:36:48.044274  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:36:48.097245  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:36:48.105821  476192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:36:48.206696  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:36:48.544272  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:36:48.597338  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:36:48.600544  476192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:36:48.705889  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:36:49.046396  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:36:49.097248  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:36:49.102063  476192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:36:49.204883  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:36:49.545103  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:36:49.597609  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:36:49.601399  476192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:36:49.734570  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:36:50.068573  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:36:50.096681  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 02:36:50.104757  476192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:36:50.210527  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:36:50.545008  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:36:50.601616  476192 kapi.go:107] duration metric: took 1m8.512117798s to wait for kubernetes.io/minikube-addons=registry ...
	I0116 02:36:50.605188  476192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:36:50.704566  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:36:51.044824  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:36:51.101298  476192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:36:51.204481  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:36:51.546825  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:36:51.601032  476192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:36:51.704929  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:36:52.046356  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:36:52.101685  476192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:36:52.204013  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:36:52.546230  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:36:52.614903  476192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:36:52.708356  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:36:53.043138  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:36:53.107724  476192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:36:53.214683  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:36:53.542731  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:36:53.600345  476192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:36:53.704817  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:36:54.044767  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:36:54.100697  476192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:36:54.108775  476192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 02:36:54.178218  476192 api_server.go:72] duration metric: took 1m21.775745859s to wait for apiserver process to appear ...
	I0116 02:36:54.178306  476192 api_server.go:88] waiting for apiserver healthz status ...
	I0116 02:36:54.178364  476192 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0116 02:36:54.178518  476192 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0116 02:36:54.207267  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:36:54.345608  476192 cri.go:89] found id: "842fe1c35d9878a3736740b28b8dfddb4c69474338c89befd7e7bd1550543ff5"
	I0116 02:36:54.345631  476192 cri.go:89] found id: ""
	I0116 02:36:54.345640  476192 logs.go:284] 1 containers: [842fe1c35d9878a3736740b28b8dfddb4c69474338c89befd7e7bd1550543ff5]
	I0116 02:36:54.345710  476192 ssh_runner.go:195] Run: which crictl
	I0116 02:36:54.360068  476192 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0116 02:36:54.360151  476192 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0116 02:36:54.463276  476192 cri.go:89] found id: "ae1a246302684304eb4b0325a78d21d75927c673f5bf6dab42f5f6a7e13a1724"
	I0116 02:36:54.463310  476192 cri.go:89] found id: ""
	I0116 02:36:54.463318  476192 logs.go:284] 1 containers: [ae1a246302684304eb4b0325a78d21d75927c673f5bf6dab42f5f6a7e13a1724]
	I0116 02:36:54.463373  476192 ssh_runner.go:195] Run: which crictl
	I0116 02:36:54.469571  476192 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0116 02:36:54.469647  476192 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0116 02:36:54.544241  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:36:54.601052  476192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:36:54.705144  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:36:54.721959  476192 cri.go:89] found id: "6f7f3c925be8f812052f6b159e0958056ba30f6cc1a7b57d33b9d33f4f537bab"
	I0116 02:36:54.721990  476192 cri.go:89] found id: ""
	I0116 02:36:54.722003  476192 logs.go:284] 1 containers: [6f7f3c925be8f812052f6b159e0958056ba30f6cc1a7b57d33b9d33f4f537bab]
	I0116 02:36:54.722070  476192 ssh_runner.go:195] Run: which crictl
	I0116 02:36:54.736209  476192 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0116 02:36:54.736309  476192 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0116 02:36:54.893706  476192 cri.go:89] found id: "2018f9809e3cb61e90992f22bec6870c1dd349160485490c8f8b562369ccd92b"
	I0116 02:36:54.893737  476192 cri.go:89] found id: ""
	I0116 02:36:54.893748  476192 logs.go:284] 1 containers: [2018f9809e3cb61e90992f22bec6870c1dd349160485490c8f8b562369ccd92b]
	I0116 02:36:54.893817  476192 ssh_runner.go:195] Run: which crictl
	I0116 02:36:54.906530  476192 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0116 02:36:54.906614  476192 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0116 02:36:55.045113  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:36:55.100723  476192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:36:55.188479  476192 cri.go:89] found id: "184c590322abeab5df31eaa66f8a2c1dae362cbb1a6a48269c17dd14ce24bb80"
	I0116 02:36:55.188503  476192 cri.go:89] found id: ""
	I0116 02:36:55.188511  476192 logs.go:284] 1 containers: [184c590322abeab5df31eaa66f8a2c1dae362cbb1a6a48269c17dd14ce24bb80]
	I0116 02:36:55.188562  476192 ssh_runner.go:195] Run: which crictl
	I0116 02:36:55.199798  476192 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0116 02:36:55.199895  476192 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0116 02:36:55.205173  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:36:55.390903  476192 cri.go:89] found id: "3ad9676ac4a1b651ca9f614f61a4e788cd41ef1a58cb447881e85401e9e9065d"
	I0116 02:36:55.390932  476192 cri.go:89] found id: ""
	I0116 02:36:55.390943  476192 logs.go:284] 1 containers: [3ad9676ac4a1b651ca9f614f61a4e788cd41ef1a58cb447881e85401e9e9065d]
	I0116 02:36:55.391009  476192 ssh_runner.go:195] Run: which crictl
	I0116 02:36:55.403282  476192 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0116 02:36:55.403373  476192 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0116 02:36:55.505949  476192 cri.go:89] found id: ""
	I0116 02:36:55.505983  476192 logs.go:284] 0 containers: []
	W0116 02:36:55.505992  476192 logs.go:286] No container was found matching "kindnet"
	I0116 02:36:55.506003  476192 logs.go:123] Gathering logs for kube-apiserver [842fe1c35d9878a3736740b28b8dfddb4c69474338c89befd7e7bd1550543ff5] ...
	I0116 02:36:55.506022  476192 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 842fe1c35d9878a3736740b28b8dfddb4c69474338c89befd7e7bd1550543ff5"
	I0116 02:36:55.578985  476192 logs.go:123] Gathering logs for kube-proxy [184c590322abeab5df31eaa66f8a2c1dae362cbb1a6a48269c17dd14ce24bb80] ...
	I0116 02:36:55.579034  476192 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 184c590322abeab5df31eaa66f8a2c1dae362cbb1a6a48269c17dd14ce24bb80"
	I0116 02:36:55.615721  476192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:36:55.617372  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:36:55.690569  476192 logs.go:123] Gathering logs for container status ...
	I0116 02:36:55.690615  476192 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0116 02:36:55.705073  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:36:55.843646  476192 logs.go:123] Gathering logs for dmesg ...
	I0116 02:36:55.843685  476192 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0116 02:36:55.889177  476192 logs.go:123] Gathering logs for describe nodes ...
	I0116 02:36:55.889214  476192 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0116 02:36:56.046131  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:36:56.116949  476192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:36:56.187099  476192 logs.go:123] Gathering logs for etcd [ae1a246302684304eb4b0325a78d21d75927c673f5bf6dab42f5f6a7e13a1724] ...
	I0116 02:36:56.187135  476192 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ae1a246302684304eb4b0325a78d21d75927c673f5bf6dab42f5f6a7e13a1724"
	I0116 02:36:56.204059  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:36:56.305198  476192 logs.go:123] Gathering logs for coredns [6f7f3c925be8f812052f6b159e0958056ba30f6cc1a7b57d33b9d33f4f537bab] ...
	I0116 02:36:56.305245  476192 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6f7f3c925be8f812052f6b159e0958056ba30f6cc1a7b57d33b9d33f4f537bab"
	I0116 02:36:56.378410  476192 logs.go:123] Gathering logs for kube-scheduler [2018f9809e3cb61e90992f22bec6870c1dd349160485490c8f8b562369ccd92b] ...
	I0116 02:36:56.378440  476192 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2018f9809e3cb61e90992f22bec6870c1dd349160485490c8f8b562369ccd92b"
	I0116 02:36:56.441134  476192 logs.go:123] Gathering logs for kube-controller-manager [3ad9676ac4a1b651ca9f614f61a4e788cd41ef1a58cb447881e85401e9e9065d] ...
	I0116 02:36:56.441171  476192 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3ad9676ac4a1b651ca9f614f61a4e788cd41ef1a58cb447881e85401e9e9065d"
	I0116 02:36:56.512509  476192 logs.go:123] Gathering logs for CRI-O ...
	I0116 02:36:56.512554  476192 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0116 02:36:56.544351  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:36:56.600940  476192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:36:56.707372  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:36:56.957487  476192 logs.go:123] Gathering logs for kubelet ...
	I0116 02:36:56.957535  476192 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0116 02:36:57.024130  476192 logs.go:138] Found kubelet problem: Jan 16 02:35:33 addons-690916 kubelet[1250]: W0116 02:35:33.555525    1250 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-690916" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-690916' and this object
	W0116 02:36:57.024297  476192 logs.go:138] Found kubelet problem: Jan 16 02:35:33 addons-690916 kubelet[1250]: E0116 02:35:33.555577    1250 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-690916" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-690916' and this object
	W0116 02:36:57.040812  476192 logs.go:138] Found kubelet problem: Jan 16 02:35:45 addons-690916 kubelet[1250]: W0116 02:35:45.700056    1250 reflector.go:535] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-690916" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-690916' and this object
	W0116 02:36:57.040974  476192 logs.go:138] Found kubelet problem: Jan 16 02:35:45 addons-690916 kubelet[1250]: E0116 02:35:45.700091    1250 reflector.go:147] object-"gcp-auth"/"gcp-auth-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-690916" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-690916' and this object
	I0116 02:36:57.042850  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:36:57.061034  476192 out.go:309] Setting ErrFile to fd 2...
	I0116 02:36:57.061073  476192 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0116 02:36:57.061156  476192 out.go:239] X Problems detected in kubelet:
	W0116 02:36:57.061164  476192 out.go:239]   Jan 16 02:35:33 addons-690916 kubelet[1250]: W0116 02:35:33.555525    1250 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-690916" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-690916' and this object
	W0116 02:36:57.061174  476192 out.go:239]   Jan 16 02:35:33 addons-690916 kubelet[1250]: E0116 02:35:33.555577    1250 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-690916" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-690916' and this object
	W0116 02:36:57.061186  476192 out.go:239]   Jan 16 02:35:45 addons-690916 kubelet[1250]: W0116 02:35:45.700056    1250 reflector.go:535] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-690916" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-690916' and this object
	W0116 02:36:57.061192  476192 out.go:239]   Jan 16 02:35:45 addons-690916 kubelet[1250]: E0116 02:35:45.700091    1250 reflector.go:147] object-"gcp-auth"/"gcp-auth-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-690916" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-690916' and this object
	I0116 02:36:57.061198  476192 out.go:309] Setting ErrFile to fd 2...
	I0116 02:36:57.061207  476192 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 02:36:57.101410  476192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:36:57.205384  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:36:57.550993  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:36:57.605751  476192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:36:57.708437  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:36:58.049512  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:36:58.101406  476192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:36:58.205153  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:36:58.544641  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:36:58.603420  476192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:36:58.704735  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:36:59.189520  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:36:59.192802  476192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:36:59.209100  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:36:59.543435  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:36:59.601837  476192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:36:59.706947  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:37:00.059587  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:37:00.101336  476192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 02:37:00.209537  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:37:00.561905  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:37:00.602815  476192 kapi.go:107] duration metric: took 1m18.507018603s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0116 02:37:00.704221  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:37:01.045169  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:37:01.209476  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:37:01.545565  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:37:01.704947  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:37:02.053843  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:37:02.206037  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:37:02.544752  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:37:02.705028  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:37:03.047695  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:37:03.209300  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 02:37:03.566566  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:37:03.704704  476192 kapi.go:107] duration metric: took 1m18.004559789s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0116 02:37:03.706792  476192 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-690916 cluster.
	I0116 02:37:03.708379  476192 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0116 02:37:03.709818  476192 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0116 02:37:04.043942  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:37:04.543842  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:37:05.043976  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:37:05.550054  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:37:06.048370  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:37:06.545315  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:37:07.045011  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:37:07.062336  476192 api_server.go:253] Checking apiserver healthz at https://192.168.39.234:8443/healthz ...
	I0116 02:37:07.069146  476192 api_server.go:279] https://192.168.39.234:8443/healthz returned 200:
	ok
	I0116 02:37:07.071893  476192 api_server.go:141] control plane version: v1.28.4
	I0116 02:37:07.071918  476192 api_server.go:131] duration metric: took 12.893600638s to wait for apiserver health ...
	I0116 02:37:07.071927  476192 system_pods.go:43] waiting for kube-system pods to appear ...
	I0116 02:37:07.071959  476192 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0116 02:37:07.072022  476192 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0116 02:37:07.179382  476192 cri.go:89] found id: "842fe1c35d9878a3736740b28b8dfddb4c69474338c89befd7e7bd1550543ff5"
	I0116 02:37:07.179405  476192 cri.go:89] found id: ""
	I0116 02:37:07.179415  476192 logs.go:284] 1 containers: [842fe1c35d9878a3736740b28b8dfddb4c69474338c89befd7e7bd1550543ff5]
	I0116 02:37:07.179489  476192 ssh_runner.go:195] Run: which crictl
	I0116 02:37:07.194709  476192 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0116 02:37:07.194797  476192 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0116 02:37:07.259980  476192 cri.go:89] found id: "ae1a246302684304eb4b0325a78d21d75927c673f5bf6dab42f5f6a7e13a1724"
	I0116 02:37:07.260010  476192 cri.go:89] found id: ""
	I0116 02:37:07.260021  476192 logs.go:284] 1 containers: [ae1a246302684304eb4b0325a78d21d75927c673f5bf6dab42f5f6a7e13a1724]
	I0116 02:37:07.260091  476192 ssh_runner.go:195] Run: which crictl
	I0116 02:37:07.268920  476192 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0116 02:37:07.269014  476192 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0116 02:37:07.335996  476192 cri.go:89] found id: "6f7f3c925be8f812052f6b159e0958056ba30f6cc1a7b57d33b9d33f4f537bab"
	I0116 02:37:07.336027  476192 cri.go:89] found id: ""
	I0116 02:37:07.336049  476192 logs.go:284] 1 containers: [6f7f3c925be8f812052f6b159e0958056ba30f6cc1a7b57d33b9d33f4f537bab]
	I0116 02:37:07.336114  476192 ssh_runner.go:195] Run: which crictl
	I0116 02:37:07.343751  476192 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0116 02:37:07.343830  476192 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0116 02:37:07.446553  476192 cri.go:89] found id: "2018f9809e3cb61e90992f22bec6870c1dd349160485490c8f8b562369ccd92b"
	I0116 02:37:07.446583  476192 cri.go:89] found id: ""
	I0116 02:37:07.446594  476192 logs.go:284] 1 containers: [2018f9809e3cb61e90992f22bec6870c1dd349160485490c8f8b562369ccd92b]
	I0116 02:37:07.446664  476192 ssh_runner.go:195] Run: which crictl
	I0116 02:37:07.458526  476192 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0116 02:37:07.458682  476192 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0116 02:37:07.514765  476192 cri.go:89] found id: "184c590322abeab5df31eaa66f8a2c1dae362cbb1a6a48269c17dd14ce24bb80"
	I0116 02:37:07.514796  476192 cri.go:89] found id: ""
	I0116 02:37:07.514808  476192 logs.go:284] 1 containers: [184c590322abeab5df31eaa66f8a2c1dae362cbb1a6a48269c17dd14ce24bb80]
	I0116 02:37:07.514875  476192 ssh_runner.go:195] Run: which crictl
	I0116 02:37:07.521118  476192 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0116 02:37:07.521219  476192 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0116 02:37:07.546461  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:37:07.607886  476192 cri.go:89] found id: "3ad9676ac4a1b651ca9f614f61a4e788cd41ef1a58cb447881e85401e9e9065d"
	I0116 02:37:07.607917  476192 cri.go:89] found id: ""
	I0116 02:37:07.607927  476192 logs.go:284] 1 containers: [3ad9676ac4a1b651ca9f614f61a4e788cd41ef1a58cb447881e85401e9e9065d]
	I0116 02:37:07.608017  476192 ssh_runner.go:195] Run: which crictl
	I0116 02:37:07.618936  476192 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0116 02:37:07.619036  476192 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0116 02:37:07.662817  476192 cri.go:89] found id: ""
	I0116 02:37:07.662852  476192 logs.go:284] 0 containers: []
	W0116 02:37:07.662862  476192 logs.go:286] No container was found matching "kindnet"
	I0116 02:37:07.662879  476192 logs.go:123] Gathering logs for kube-controller-manager [3ad9676ac4a1b651ca9f614f61a4e788cd41ef1a58cb447881e85401e9e9065d] ...
	I0116 02:37:07.662898  476192 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3ad9676ac4a1b651ca9f614f61a4e788cd41ef1a58cb447881e85401e9e9065d"
	I0116 02:37:07.758588  476192 logs.go:123] Gathering logs for kubelet ...
	I0116 02:37:07.758634  476192 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0116 02:37:07.802559  476192 logs.go:138] Found kubelet problem: Jan 16 02:35:33 addons-690916 kubelet[1250]: W0116 02:35:33.555525    1250 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-690916" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-690916' and this object
	W0116 02:37:07.802758  476192 logs.go:138] Found kubelet problem: Jan 16 02:35:33 addons-690916 kubelet[1250]: E0116 02:35:33.555577    1250 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-690916" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-690916' and this object
	W0116 02:37:07.819522  476192 logs.go:138] Found kubelet problem: Jan 16 02:35:45 addons-690916 kubelet[1250]: W0116 02:35:45.700056    1250 reflector.go:535] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-690916" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-690916' and this object
	W0116 02:37:07.819722  476192 logs.go:138] Found kubelet problem: Jan 16 02:35:45 addons-690916 kubelet[1250]: E0116 02:35:45.700091    1250 reflector.go:147] object-"gcp-auth"/"gcp-auth-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-690916" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-690916' and this object
	I0116 02:37:07.839921  476192 logs.go:123] Gathering logs for dmesg ...
	I0116 02:37:07.839957  476192 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0116 02:37:07.856082  476192 logs.go:123] Gathering logs for describe nodes ...
	I0116 02:37:07.856118  476192 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0116 02:37:08.055534  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:37:08.132688  476192 logs.go:123] Gathering logs for kube-apiserver [842fe1c35d9878a3736740b28b8dfddb4c69474338c89befd7e7bd1550543ff5] ...
	I0116 02:37:08.132724  476192 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 842fe1c35d9878a3736740b28b8dfddb4c69474338c89befd7e7bd1550543ff5"
	I0116 02:37:08.233441  476192 logs.go:123] Gathering logs for coredns [6f7f3c925be8f812052f6b159e0958056ba30f6cc1a7b57d33b9d33f4f537bab] ...
	I0116 02:37:08.233483  476192 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6f7f3c925be8f812052f6b159e0958056ba30f6cc1a7b57d33b9d33f4f537bab"
	I0116 02:37:08.307244  476192 logs.go:123] Gathering logs for kube-scheduler [2018f9809e3cb61e90992f22bec6870c1dd349160485490c8f8b562369ccd92b] ...
	I0116 02:37:08.307289  476192 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2018f9809e3cb61e90992f22bec6870c1dd349160485490c8f8b562369ccd92b"
	I0116 02:37:08.366764  476192 logs.go:123] Gathering logs for kube-proxy [184c590322abeab5df31eaa66f8a2c1dae362cbb1a6a48269c17dd14ce24bb80] ...
	I0116 02:37:08.366800  476192 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 184c590322abeab5df31eaa66f8a2c1dae362cbb1a6a48269c17dd14ce24bb80"
	I0116 02:37:08.427016  476192 logs.go:123] Gathering logs for container status ...
	I0116 02:37:08.427061  476192 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0116 02:37:08.505864  476192 logs.go:123] Gathering logs for etcd [ae1a246302684304eb4b0325a78d21d75927c673f5bf6dab42f5f6a7e13a1724] ...
	I0116 02:37:08.505908  476192 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ae1a246302684304eb4b0325a78d21d75927c673f5bf6dab42f5f6a7e13a1724"
	I0116 02:37:08.544875  476192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 02:37:08.588986  476192 logs.go:123] Gathering logs for CRI-O ...
	I0116 02:37:08.589029  476192 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0116 02:37:09.043887  476192 kapi.go:107] duration metric: took 1m26.00666962s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0116 02:37:09.046954  476192 out.go:177] * Enabled addons: nvidia-device-plugin, cloud-spanner, storage-provisioner, storage-provisioner-rancher, inspektor-gadget, metrics-server, yakd, ingress-dns, helm-tiller, default-storageclass, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0116 02:37:09.048683  476192 addons.go:505] enable addons completed in 1m37.344527527s: enabled=[nvidia-device-plugin cloud-spanner storage-provisioner storage-provisioner-rancher inspektor-gadget metrics-server yakd ingress-dns helm-tiller default-storageclass volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0116 02:37:09.132576  476192 out.go:309] Setting ErrFile to fd 2...
	I0116 02:37:09.132619  476192 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0116 02:37:09.132702  476192 out.go:239] X Problems detected in kubelet:
	W0116 02:37:09.132719  476192 out.go:239]   Jan 16 02:35:33 addons-690916 kubelet[1250]: W0116 02:35:33.555525    1250 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-690916" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-690916' and this object
	W0116 02:37:09.132734  476192 out.go:239]   Jan 16 02:35:33 addons-690916 kubelet[1250]: E0116 02:35:33.555577    1250 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-690916" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-690916' and this object
	W0116 02:37:09.132746  476192 out.go:239]   Jan 16 02:35:45 addons-690916 kubelet[1250]: W0116 02:35:45.700056    1250 reflector.go:535] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-690916" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-690916' and this object
	W0116 02:37:09.132759  476192 out.go:239]   Jan 16 02:35:45 addons-690916 kubelet[1250]: E0116 02:35:45.700091    1250 reflector.go:147] object-"gcp-auth"/"gcp-auth-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-690916" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-690916' and this object
	I0116 02:37:09.132776  476192 out.go:309] Setting ErrFile to fd 2...
	I0116 02:37:09.132786  476192 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 02:37:19.146109  476192 system_pods.go:59] 18 kube-system pods found
	I0116 02:37:19.146141  476192 system_pods.go:61] "coredns-5dd5756b68-sx897" [3f050772-f775-4ee1-8b1e-7db7e2c83fb5] Running
	I0116 02:37:19.146146  476192 system_pods.go:61] "csi-hostpath-attacher-0" [43337e69-b615-4e07-80a9-98029d2345fc] Running
	I0116 02:37:19.146150  476192 system_pods.go:61] "csi-hostpath-resizer-0" [fec5c91e-27c7-46b9-8eb8-09d114509590] Running
	I0116 02:37:19.146154  476192 system_pods.go:61] "csi-hostpathplugin-8wqjr" [8943cbaa-9281-49b8-bad6-0eea46d0016c] Running
	I0116 02:37:19.146158  476192 system_pods.go:61] "etcd-addons-690916" [ce096995-f035-4e89-92b7-57a2dede9b7e] Running
	I0116 02:37:19.146162  476192 system_pods.go:61] "kube-apiserver-addons-690916" [894fe80e-cc60-4138-9871-2b960baa265b] Running
	I0116 02:37:19.146166  476192 system_pods.go:61] "kube-controller-manager-addons-690916" [feda9b89-6139-4e06-b4dc-ccb9a0641aa7] Running
	I0116 02:37:19.146171  476192 system_pods.go:61] "kube-ingress-dns-minikube" [f9b4803f-094e-4494-8a75-581074b26c99] Running
	I0116 02:37:19.146174  476192 system_pods.go:61] "kube-proxy-xmxx2" [4f4aecbb-0f00-4675-af12-a390c45121da] Running
	I0116 02:37:19.146178  476192 system_pods.go:61] "kube-scheduler-addons-690916" [0cc0d990-54b4-4270-83da-52373e9ec0f4] Running
	I0116 02:37:19.146182  476192 system_pods.go:61] "metrics-server-7c66d45ddc-9nqgd" [cc5a4e79-918b-4fdc-934b-8f301e03f744] Running
	I0116 02:37:19.146186  476192 system_pods.go:61] "nvidia-device-plugin-daemonset-p8gsf" [3df66016-9e6d-4756-b004-b80a4bab9fad] Running
	I0116 02:37:19.146190  476192 system_pods.go:61] "registry-lc5z7" [1f13401c-40a4-41b7-978b-4946e00babb5] Running
	I0116 02:37:19.146194  476192 system_pods.go:61] "registry-proxy-9db6c" [0e95c7bb-1c5d-4a03-9ca4-1c48c5270c4d] Running
	I0116 02:37:19.146201  476192 system_pods.go:61] "snapshot-controller-58dbcc7b99-c2wbc" [73cd3fbc-d4a3-4b3a-b32e-81966b930798] Running
	I0116 02:37:19.146205  476192 system_pods.go:61] "snapshot-controller-58dbcc7b99-ctbjm" [fbb35ebd-f264-4345-8d33-7fe8ebd8551c] Running
	I0116 02:37:19.146209  476192 system_pods.go:61] "storage-provisioner" [229f9497-4d92-4b62-8c3c-0010caf4a418] Running
	I0116 02:37:19.146212  476192 system_pods.go:61] "tiller-deploy-7b677967b9-8gmxx" [32fe0688-1256-49e9-a768-99e587db34c8] Running
	I0116 02:37:19.146218  476192 system_pods.go:74] duration metric: took 12.07428606s to wait for pod list to return data ...
	I0116 02:37:19.146233  476192 default_sa.go:34] waiting for default service account to be created ...
	I0116 02:37:19.148845  476192 default_sa.go:45] found service account: "default"
	I0116 02:37:19.148873  476192 default_sa.go:55] duration metric: took 2.633915ms for default service account to be created ...
	I0116 02:37:19.148882  476192 system_pods.go:116] waiting for k8s-apps to be running ...
	I0116 02:37:19.159873  476192 system_pods.go:86] 18 kube-system pods found
	I0116 02:37:19.159904  476192 system_pods.go:89] "coredns-5dd5756b68-sx897" [3f050772-f775-4ee1-8b1e-7db7e2c83fb5] Running
	I0116 02:37:19.159910  476192 system_pods.go:89] "csi-hostpath-attacher-0" [43337e69-b615-4e07-80a9-98029d2345fc] Running
	I0116 02:37:19.159914  476192 system_pods.go:89] "csi-hostpath-resizer-0" [fec5c91e-27c7-46b9-8eb8-09d114509590] Running
	I0116 02:37:19.159919  476192 system_pods.go:89] "csi-hostpathplugin-8wqjr" [8943cbaa-9281-49b8-bad6-0eea46d0016c] Running
	I0116 02:37:19.159923  476192 system_pods.go:89] "etcd-addons-690916" [ce096995-f035-4e89-92b7-57a2dede9b7e] Running
	I0116 02:37:19.159927  476192 system_pods.go:89] "kube-apiserver-addons-690916" [894fe80e-cc60-4138-9871-2b960baa265b] Running
	I0116 02:37:19.159931  476192 system_pods.go:89] "kube-controller-manager-addons-690916" [feda9b89-6139-4e06-b4dc-ccb9a0641aa7] Running
	I0116 02:37:19.159935  476192 system_pods.go:89] "kube-ingress-dns-minikube" [f9b4803f-094e-4494-8a75-581074b26c99] Running
	I0116 02:37:19.159940  476192 system_pods.go:89] "kube-proxy-xmxx2" [4f4aecbb-0f00-4675-af12-a390c45121da] Running
	I0116 02:37:19.159944  476192 system_pods.go:89] "kube-scheduler-addons-690916" [0cc0d990-54b4-4270-83da-52373e9ec0f4] Running
	I0116 02:37:19.159948  476192 system_pods.go:89] "metrics-server-7c66d45ddc-9nqgd" [cc5a4e79-918b-4fdc-934b-8f301e03f744] Running
	I0116 02:37:19.159952  476192 system_pods.go:89] "nvidia-device-plugin-daemonset-p8gsf" [3df66016-9e6d-4756-b004-b80a4bab9fad] Running
	I0116 02:37:19.159956  476192 system_pods.go:89] "registry-lc5z7" [1f13401c-40a4-41b7-978b-4946e00babb5] Running
	I0116 02:37:19.159960  476192 system_pods.go:89] "registry-proxy-9db6c" [0e95c7bb-1c5d-4a03-9ca4-1c48c5270c4d] Running
	I0116 02:37:19.159964  476192 system_pods.go:89] "snapshot-controller-58dbcc7b99-c2wbc" [73cd3fbc-d4a3-4b3a-b32e-81966b930798] Running
	I0116 02:37:19.159967  476192 system_pods.go:89] "snapshot-controller-58dbcc7b99-ctbjm" [fbb35ebd-f264-4345-8d33-7fe8ebd8551c] Running
	I0116 02:37:19.159971  476192 system_pods.go:89] "storage-provisioner" [229f9497-4d92-4b62-8c3c-0010caf4a418] Running
	I0116 02:37:19.159974  476192 system_pods.go:89] "tiller-deploy-7b677967b9-8gmxx" [32fe0688-1256-49e9-a768-99e587db34c8] Running
	I0116 02:37:19.159981  476192 system_pods.go:126] duration metric: took 11.092715ms to wait for k8s-apps to be running ...
	I0116 02:37:19.159989  476192 system_svc.go:44] waiting for kubelet service to be running ....
	I0116 02:37:19.160060  476192 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 02:37:19.173867  476192 system_svc.go:56] duration metric: took 13.867632ms WaitForService to wait for kubelet.
	I0116 02:37:19.173894  476192 kubeadm.go:581] duration metric: took 1m46.771430912s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0116 02:37:19.173913  476192 node_conditions.go:102] verifying NodePressure condition ...
	I0116 02:37:19.177672  476192 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 02:37:19.177699  476192 node_conditions.go:123] node cpu capacity is 2
	I0116 02:37:19.177714  476192 node_conditions.go:105] duration metric: took 3.793904ms to run NodePressure ...
	I0116 02:37:19.177726  476192 start.go:228] waiting for startup goroutines ...
	I0116 02:37:19.177732  476192 start.go:233] waiting for cluster config update ...
	I0116 02:37:19.177746  476192 start.go:242] writing updated cluster config ...
	I0116 02:37:19.178021  476192 ssh_runner.go:195] Run: rm -f paused
	I0116 02:37:19.230366  476192 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I0116 02:37:19.232863  476192 out.go:177] * Done! kubectl is now configured to use "addons-690916" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	-- Journal begins at Tue 2024-01-16 02:34:47 UTC, ends at Tue 2024-01-16 02:40:18 UTC. --
	Jan 16 02:40:18 addons-690916 crio[717]: time="2024-01-16 02:40:18.073370706Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705372818073354081,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:575388,},InodesUsed:&UInt64Value{Value:233,},},},}" file="go-grpc-middleware/chain.go:25" id=0a30e217-b69b-4f38-ad1d-be80caad3e24 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 02:40:18 addons-690916 crio[717]: time="2024-01-16 02:40:18.074291973Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=8f258249-150c-42f3-b653-614b61fd29c4 name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 02:40:18 addons-690916 crio[717]: time="2024-01-16 02:40:18.074345705Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=8f258249-150c-42f3-b653-614b61fd29c4 name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 02:40:18 addons-690916 crio[717]: time="2024-01-16 02:40:18.074721105Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f34675bcfb69e4ce55b8076891d5695522ea67c3dca248d0cf18a574600c73a4,PodSandboxId:fd6df6ae76a51ce056eb95b9db8797322df4fb1f2b426f7647127c781935970f,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1705372809924826277,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-j29kv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d4710159-b7b2-4773-aa2d-e9085b2ebf12,},Annotations:map[string]string{io.kubernetes.container.hash: 74d8c879,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:897d271fb9c2106feac7b23ef1defceb9c7a2f8311477e3d664063be779d4444,PodSandboxId:b88285b9921d74320138c1f094d34e8faaaf00d16e415b35321235a241d308d1,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:3c6da859a989f285b2fd2ac2f4763d1884d54a51e4405301e5324e0b2b70bd67,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:3c6da859a989f285b2fd2ac2f4763d1884d54a51e4405301e5324e0b2b70bd67,State:CONTAINER_RUNNING,CreatedAt:1705372690128589164,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7ddfbb94ff-d8hns,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: 2821060b-5918-451a-a1f1-1be30e4dc855,},An
notations:map[string]string{io.kubernetes.container.hash: af51c193,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:127a3674d6905f9573943cdf4a675504b24f64d399eedb6b8bf1b2d316f79ab2,PodSandboxId:c33ff6bb556c4525e55a63cb534de3412e01b61b75ecf3b68d985605200b5460,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686,State:CONTAINER_RUNNING,CreatedAt:1705372670014843645,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default
,io.kubernetes.pod.uid: ad3129e2-2f54-4da9-9249-7a6219249f7b,},Annotations:map[string]string{io.kubernetes.container.hash: f73320f9,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89607d06f0194d0e9efb048623d649f868077215ed0c3918be9a2aed11568fb9,PodSandboxId:e3a53adb6d2d8a7b84a482582838740ff25d16f2230fd986fe88769f329fd7b3,Metadata:&ContainerMetadata{Name:patch,Attempt:3,},Image:&ImageSpec{Image:1ebff0f9671bc015dc340b12c5bf6f3dbda7d0a8b5332bd095f21bd52e1b30fb,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1705372631713445163,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: in
gress-nginx-admission-patch-xfgtj,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 0d6efebd-a4d9-4f7c-973f-a8a6dd365451,},Annotations:map[string]string{io.kubernetes.container.hash: 8f3fa968,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:133d4cee334004618d510c0d82149f879f9ccd1275d540e4c68844f838d16820,PodSandboxId:671d083b2e08245f28c380c66fcd25f5849c60f93c428ea9a42c58423115689b,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1705372623227383531,Labels:map[string]string{io.kubernetes.container.name:
gcp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-dgnrl,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 002d4fea-92dd-4d9a-b165-26ca685eacc2,},Annotations:map[string]string{io.kubernetes.container.hash: 59dbc75b,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8281a718fc1d2bac136d45b7551b9b0104e9827513aae6dc0c23c9d6044c4d0,PodSandboxId:92a110943caafdd21513773bd1ea7014a71b063c244b9ed60da8dec81e3c2d7f,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c59
65b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1705372601618469827,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-fhp4x,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 3c52a4d4-efa0-492b-b36c-e6ed01377234,},Annotations:map[string]string{io.kubernetes.container.hash: 70f997d5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6c6ec0b0a36392e0cae46d27911d6ed0b0a09f7332d1b4bd9bfe04efc75108f,PodSandboxId:9228b7ae80ab9c5e4c84c6c1cc189520f515e6431f8ef2833f59c1b21e9423d3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c44
1c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705372555603754738,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 229f9497-4d92-4b62-8c3c-0010caf4a418,},Annotations:map[string]string{io.kubernetes.container.hash: 7eeb28f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02908854f641e139d91042021648626681a433cc41f649de3ca264c847c56efa,PodSandboxId:4cb056c2e1d514612099291ab28347c6f41c6d8660d831355d546e16aa15c2e8,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},},ImageRef:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e
727bccc0e39f9329310,State:CONTAINER_RUNNING,CreatedAt:1705372553537789247,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-9947fc6bf-7d5hp,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 751a0dad-e5ce-44e0-888c-bf7e74f9e70e,},Annotations:map[string]string{io.kubernetes.container.hash: 3a103379,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:184c590322abeab5df31eaa66f8a2c1dae362cbb1a6a48269c17dd14ce24bb80,PodSandboxId:99c2aa13adbcb9b540a5e07debfe23199dbff3fda884e3cb17b9051e9b730553,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:regist
ry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1705372536611227131,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xmxx2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f4aecbb-0f00-4675-af12-a390c45121da,},Annotations:map[string]string{io.kubernetes.container.hash: 64e553ed,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f7f3c925be8f812052f6b159e0958056ba30f6cc1a7b57d33b9d33f4f537bab,PodSandboxId:aeccc8d10670f70c2887484be37f4bd5f42599bd50c6e35d5dacd0c2ff0b2d5f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead
06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1705372540648848184,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-sx897,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f050772-f775-4ee1-8b1e-7db7e2c83fb5,},Annotations:map[string]string{io.kubernetes.container.hash: ec971d3f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2018f9809e3cb61e90992f22bec6870c1dd349160485490c8f8b562369ccd92b,PodSandboxId:4379de7428d6effd872f6687a2c34d72abb128a6d527a25617b5874e31ce066f,Metadata:&ContainerMetadata{Na
me:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1705372512299713021,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-690916,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e15ab68460df4a8f909e15fc9444f5d,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae1a246302684304eb4b0325a78d21d75927c673f5bf6dab42f5f6a7e13a1724,PodSandboxId:07b5500785688484bd134edf62a126129c2e223bbd907666bc6dfa098040afcf,Metadata:&ContainerMetadata{Name:etcd,Attempt
:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1705372512033043449,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-690916,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57d725b3b32a4da55c4be812e89b7538,},Annotations:map[string]string{io.kubernetes.container.hash: 8bc03e9f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:842fe1c35d9878a3736740b28b8dfddb4c69474338c89befd7e7bd1550543ff5,PodSandboxId:38d3271e610a53aab3b98cb972e39049b17c0424e608d2067162209024c75cec,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db334647
25e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1705372511895260337,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-690916,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12d4a91283a80cf232fb877d6172eb0c,},Annotations:map[string]string{io.kubernetes.container.hash: a61142ae,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ad9676ac4a1b651ca9f614f61a4e788cd41ef1a58cb447881e85401e9e9065d,PodSandboxId:bf4e57c68e4b9daba89b32f5ac057ef5df8582177bb49ebf846b02617af9df13,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7
188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1705372511760096258,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-690916,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a065c91fb4801a64cc2cf7907c77ca2,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=8f258249-150c-42f3-b653-614b61fd29c4 name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 02:40:18 addons-690916 crio[717]: time="2024-01-16 02:40:18.115726591Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=a1de996c-2ad1-412a-b2c9-56d43802f868 name=/runtime.v1.RuntimeService/Version
	Jan 16 02:40:18 addons-690916 crio[717]: time="2024-01-16 02:40:18.115810409Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=a1de996c-2ad1-412a-b2c9-56d43802f868 name=/runtime.v1.RuntimeService/Version
	Jan 16 02:40:18 addons-690916 crio[717]: time="2024-01-16 02:40:18.117443977Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=71eae4e0-0fd5-418a-a9cb-a51b10fb08e0 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 02:40:18 addons-690916 crio[717]: time="2024-01-16 02:40:18.118751870Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705372818118734640,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:575388,},InodesUsed:&UInt64Value{Value:233,},},},}" file="go-grpc-middleware/chain.go:25" id=71eae4e0-0fd5-418a-a9cb-a51b10fb08e0 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 02:40:18 addons-690916 crio[717]: time="2024-01-16 02:40:18.119534978Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=bb955226-4a6d-4f95-bee6-9529321dc094 name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 02:40:18 addons-690916 crio[717]: time="2024-01-16 02:40:18.119582211Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=bb955226-4a6d-4f95-bee6-9529321dc094 name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 02:40:18 addons-690916 crio[717]: time="2024-01-16 02:40:18.119945555Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f34675bcfb69e4ce55b8076891d5695522ea67c3dca248d0cf18a574600c73a4,PodSandboxId:fd6df6ae76a51ce056eb95b9db8797322df4fb1f2b426f7647127c781935970f,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1705372809924826277,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-j29kv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d4710159-b7b2-4773-aa2d-e9085b2ebf12,},Annotations:map[string]string{io.kubernetes.container.hash: 74d8c879,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:897d271fb9c2106feac7b23ef1defceb9c7a2f8311477e3d664063be779d4444,PodSandboxId:b88285b9921d74320138c1f094d34e8faaaf00d16e415b35321235a241d308d1,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:3c6da859a989f285b2fd2ac2f4763d1884d54a51e4405301e5324e0b2b70bd67,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:3c6da859a989f285b2fd2ac2f4763d1884d54a51e4405301e5324e0b2b70bd67,State:CONTAINER_RUNNING,CreatedAt:1705372690128589164,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7ddfbb94ff-d8hns,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: 2821060b-5918-451a-a1f1-1be30e4dc855,},An
notations:map[string]string{io.kubernetes.container.hash: af51c193,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:127a3674d6905f9573943cdf4a675504b24f64d399eedb6b8bf1b2d316f79ab2,PodSandboxId:c33ff6bb556c4525e55a63cb534de3412e01b61b75ecf3b68d985605200b5460,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686,State:CONTAINER_RUNNING,CreatedAt:1705372670014843645,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default
,io.kubernetes.pod.uid: ad3129e2-2f54-4da9-9249-7a6219249f7b,},Annotations:map[string]string{io.kubernetes.container.hash: f73320f9,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89607d06f0194d0e9efb048623d649f868077215ed0c3918be9a2aed11568fb9,PodSandboxId:e3a53adb6d2d8a7b84a482582838740ff25d16f2230fd986fe88769f329fd7b3,Metadata:&ContainerMetadata{Name:patch,Attempt:3,},Image:&ImageSpec{Image:1ebff0f9671bc015dc340b12c5bf6f3dbda7d0a8b5332bd095f21bd52e1b30fb,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1705372631713445163,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: in
gress-nginx-admission-patch-xfgtj,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 0d6efebd-a4d9-4f7c-973f-a8a6dd365451,},Annotations:map[string]string{io.kubernetes.container.hash: 8f3fa968,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:133d4cee334004618d510c0d82149f879f9ccd1275d540e4c68844f838d16820,PodSandboxId:671d083b2e08245f28c380c66fcd25f5849c60f93c428ea9a42c58423115689b,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1705372623227383531,Labels:map[string]string{io.kubernetes.container.name:
gcp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-dgnrl,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 002d4fea-92dd-4d9a-b165-26ca685eacc2,},Annotations:map[string]string{io.kubernetes.container.hash: 59dbc75b,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8281a718fc1d2bac136d45b7551b9b0104e9827513aae6dc0c23c9d6044c4d0,PodSandboxId:92a110943caafdd21513773bd1ea7014a71b063c244b9ed60da8dec81e3c2d7f,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c59
65b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1705372601618469827,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-fhp4x,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 3c52a4d4-efa0-492b-b36c-e6ed01377234,},Annotations:map[string]string{io.kubernetes.container.hash: 70f997d5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6c6ec0b0a36392e0cae46d27911d6ed0b0a09f7332d1b4bd9bfe04efc75108f,PodSandboxId:9228b7ae80ab9c5e4c84c6c1cc189520f515e6431f8ef2833f59c1b21e9423d3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c44
1c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705372555603754738,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 229f9497-4d92-4b62-8c3c-0010caf4a418,},Annotations:map[string]string{io.kubernetes.container.hash: 7eeb28f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02908854f641e139d91042021648626681a433cc41f649de3ca264c847c56efa,PodSandboxId:4cb056c2e1d514612099291ab28347c6f41c6d8660d831355d546e16aa15c2e8,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},},ImageRef:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e
727bccc0e39f9329310,State:CONTAINER_RUNNING,CreatedAt:1705372553537789247,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-9947fc6bf-7d5hp,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 751a0dad-e5ce-44e0-888c-bf7e74f9e70e,},Annotations:map[string]string{io.kubernetes.container.hash: 3a103379,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:184c590322abeab5df31eaa66f8a2c1dae362cbb1a6a48269c17dd14ce24bb80,PodSandboxId:99c2aa13adbcb9b540a5e07debfe23199dbff3fda884e3cb17b9051e9b730553,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:regist
ry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1705372536611227131,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xmxx2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f4aecbb-0f00-4675-af12-a390c45121da,},Annotations:map[string]string{io.kubernetes.container.hash: 64e553ed,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f7f3c925be8f812052f6b159e0958056ba30f6cc1a7b57d33b9d33f4f537bab,PodSandboxId:aeccc8d10670f70c2887484be37f4bd5f42599bd50c6e35d5dacd0c2ff0b2d5f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead
06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1705372540648848184,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-sx897,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f050772-f775-4ee1-8b1e-7db7e2c83fb5,},Annotations:map[string]string{io.kubernetes.container.hash: ec971d3f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2018f9809e3cb61e90992f22bec6870c1dd349160485490c8f8b562369ccd92b,PodSandboxId:4379de7428d6effd872f6687a2c34d72abb128a6d527a25617b5874e31ce066f,Metadata:&ContainerMetadata{Na
me:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1705372512299713021,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-690916,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e15ab68460df4a8f909e15fc9444f5d,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae1a246302684304eb4b0325a78d21d75927c673f5bf6dab42f5f6a7e13a1724,PodSandboxId:07b5500785688484bd134edf62a126129c2e223bbd907666bc6dfa098040afcf,Metadata:&ContainerMetadata{Name:etcd,Attempt
:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1705372512033043449,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-690916,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57d725b3b32a4da55c4be812e89b7538,},Annotations:map[string]string{io.kubernetes.container.hash: 8bc03e9f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:842fe1c35d9878a3736740b28b8dfddb4c69474338c89befd7e7bd1550543ff5,PodSandboxId:38d3271e610a53aab3b98cb972e39049b17c0424e608d2067162209024c75cec,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db334647
25e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1705372511895260337,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-690916,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12d4a91283a80cf232fb877d6172eb0c,},Annotations:map[string]string{io.kubernetes.container.hash: a61142ae,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ad9676ac4a1b651ca9f614f61a4e788cd41ef1a58cb447881e85401e9e9065d,PodSandboxId:bf4e57c68e4b9daba89b32f5ac057ef5df8582177bb49ebf846b02617af9df13,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7
188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1705372511760096258,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-690916,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a065c91fb4801a64cc2cf7907c77ca2,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=bb955226-4a6d-4f95-bee6-9529321dc094 name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 02:40:18 addons-690916 crio[717]: time="2024-01-16 02:40:18.159561361Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=16e56f44-2c64-435d-aa53-ae51793920ca name=/runtime.v1.RuntimeService/Version
	Jan 16 02:40:18 addons-690916 crio[717]: time="2024-01-16 02:40:18.159623663Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=16e56f44-2c64-435d-aa53-ae51793920ca name=/runtime.v1.RuntimeService/Version
	Jan 16 02:40:18 addons-690916 crio[717]: time="2024-01-16 02:40:18.161490966Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=e57ee714-969f-425a-9b4a-8cbf3926544b name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 02:40:18 addons-690916 crio[717]: time="2024-01-16 02:40:18.162786763Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705372818162765653,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:575388,},InodesUsed:&UInt64Value{Value:233,},},},}" file="go-grpc-middleware/chain.go:25" id=e57ee714-969f-425a-9b4a-8cbf3926544b name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 02:40:18 addons-690916 crio[717]: time="2024-01-16 02:40:18.163844006Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=fbfd3166-0648-41e6-a26a-e1d119e79d16 name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 02:40:18 addons-690916 crio[717]: time="2024-01-16 02:40:18.163924321Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=fbfd3166-0648-41e6-a26a-e1d119e79d16 name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 02:40:18 addons-690916 crio[717]: time="2024-01-16 02:40:18.167154102Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f34675bcfb69e4ce55b8076891d5695522ea67c3dca248d0cf18a574600c73a4,PodSandboxId:fd6df6ae76a51ce056eb95b9db8797322df4fb1f2b426f7647127c781935970f,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1705372809924826277,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-j29kv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d4710159-b7b2-4773-aa2d-e9085b2ebf12,},Annotations:map[string]string{io.kubernetes.container.hash: 74d8c879,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:897d271fb9c2106feac7b23ef1defceb9c7a2f8311477e3d664063be779d4444,PodSandboxId:b88285b9921d74320138c1f094d34e8faaaf00d16e415b35321235a241d308d1,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:3c6da859a989f285b2fd2ac2f4763d1884d54a51e4405301e5324e0b2b70bd67,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:3c6da859a989f285b2fd2ac2f4763d1884d54a51e4405301e5324e0b2b70bd67,State:CONTAINER_RUNNING,CreatedAt:1705372690128589164,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7ddfbb94ff-d8hns,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: 2821060b-5918-451a-a1f1-1be30e4dc855,},An
notations:map[string]string{io.kubernetes.container.hash: af51c193,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:127a3674d6905f9573943cdf4a675504b24f64d399eedb6b8bf1b2d316f79ab2,PodSandboxId:c33ff6bb556c4525e55a63cb534de3412e01b61b75ecf3b68d985605200b5460,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686,State:CONTAINER_RUNNING,CreatedAt:1705372670014843645,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default
,io.kubernetes.pod.uid: ad3129e2-2f54-4da9-9249-7a6219249f7b,},Annotations:map[string]string{io.kubernetes.container.hash: f73320f9,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89607d06f0194d0e9efb048623d649f868077215ed0c3918be9a2aed11568fb9,PodSandboxId:e3a53adb6d2d8a7b84a482582838740ff25d16f2230fd986fe88769f329fd7b3,Metadata:&ContainerMetadata{Name:patch,Attempt:3,},Image:&ImageSpec{Image:1ebff0f9671bc015dc340b12c5bf6f3dbda7d0a8b5332bd095f21bd52e1b30fb,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1705372631713445163,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: in
gress-nginx-admission-patch-xfgtj,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 0d6efebd-a4d9-4f7c-973f-a8a6dd365451,},Annotations:map[string]string{io.kubernetes.container.hash: 8f3fa968,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:133d4cee334004618d510c0d82149f879f9ccd1275d540e4c68844f838d16820,PodSandboxId:671d083b2e08245f28c380c66fcd25f5849c60f93c428ea9a42c58423115689b,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1705372623227383531,Labels:map[string]string{io.kubernetes.container.name:
gcp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-dgnrl,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 002d4fea-92dd-4d9a-b165-26ca685eacc2,},Annotations:map[string]string{io.kubernetes.container.hash: 59dbc75b,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8281a718fc1d2bac136d45b7551b9b0104e9827513aae6dc0c23c9d6044c4d0,PodSandboxId:92a110943caafdd21513773bd1ea7014a71b063c244b9ed60da8dec81e3c2d7f,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c59
65b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1705372601618469827,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-fhp4x,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 3c52a4d4-efa0-492b-b36c-e6ed01377234,},Annotations:map[string]string{io.kubernetes.container.hash: 70f997d5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6c6ec0b0a36392e0cae46d27911d6ed0b0a09f7332d1b4bd9bfe04efc75108f,PodSandboxId:9228b7ae80ab9c5e4c84c6c1cc189520f515e6431f8ef2833f59c1b21e9423d3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c44
1c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705372555603754738,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 229f9497-4d92-4b62-8c3c-0010caf4a418,},Annotations:map[string]string{io.kubernetes.container.hash: 7eeb28f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02908854f641e139d91042021648626681a433cc41f649de3ca264c847c56efa,PodSandboxId:4cb056c2e1d514612099291ab28347c6f41c6d8660d831355d546e16aa15c2e8,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},},ImageRef:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e
727bccc0e39f9329310,State:CONTAINER_RUNNING,CreatedAt:1705372553537789247,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-9947fc6bf-7d5hp,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 751a0dad-e5ce-44e0-888c-bf7e74f9e70e,},Annotations:map[string]string{io.kubernetes.container.hash: 3a103379,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:184c590322abeab5df31eaa66f8a2c1dae362cbb1a6a48269c17dd14ce24bb80,PodSandboxId:99c2aa13adbcb9b540a5e07debfe23199dbff3fda884e3cb17b9051e9b730553,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:regist
ry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1705372536611227131,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xmxx2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f4aecbb-0f00-4675-af12-a390c45121da,},Annotations:map[string]string{io.kubernetes.container.hash: 64e553ed,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f7f3c925be8f812052f6b159e0958056ba30f6cc1a7b57d33b9d33f4f537bab,PodSandboxId:aeccc8d10670f70c2887484be37f4bd5f42599bd50c6e35d5dacd0c2ff0b2d5f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead
06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1705372540648848184,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-sx897,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f050772-f775-4ee1-8b1e-7db7e2c83fb5,},Annotations:map[string]string{io.kubernetes.container.hash: ec971d3f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2018f9809e3cb61e90992f22bec6870c1dd349160485490c8f8b562369ccd92b,PodSandboxId:4379de7428d6effd872f6687a2c34d72abb128a6d527a25617b5874e31ce066f,Metadata:&ContainerMetadata{Na
me:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1705372512299713021,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-690916,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e15ab68460df4a8f909e15fc9444f5d,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae1a246302684304eb4b0325a78d21d75927c673f5bf6dab42f5f6a7e13a1724,PodSandboxId:07b5500785688484bd134edf62a126129c2e223bbd907666bc6dfa098040afcf,Metadata:&ContainerMetadata{Name:etcd,Attempt
:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1705372512033043449,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-690916,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57d725b3b32a4da55c4be812e89b7538,},Annotations:map[string]string{io.kubernetes.container.hash: 8bc03e9f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:842fe1c35d9878a3736740b28b8dfddb4c69474338c89befd7e7bd1550543ff5,PodSandboxId:38d3271e610a53aab3b98cb972e39049b17c0424e608d2067162209024c75cec,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db334647
25e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1705372511895260337,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-690916,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12d4a91283a80cf232fb877d6172eb0c,},Annotations:map[string]string{io.kubernetes.container.hash: a61142ae,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ad9676ac4a1b651ca9f614f61a4e788cd41ef1a58cb447881e85401e9e9065d,PodSandboxId:bf4e57c68e4b9daba89b32f5ac057ef5df8582177bb49ebf846b02617af9df13,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7
188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1705372511760096258,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-690916,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a065c91fb4801a64cc2cf7907c77ca2,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=fbfd3166-0648-41e6-a26a-e1d119e79d16 name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 02:40:18 addons-690916 crio[717]: time="2024-01-16 02:40:18.216603080Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=d1d5e305-00c5-4c84-9a82-a83c593f5426 name=/runtime.v1.RuntimeService/Version
	Jan 16 02:40:18 addons-690916 crio[717]: time="2024-01-16 02:40:18.216718078Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=d1d5e305-00c5-4c84-9a82-a83c593f5426 name=/runtime.v1.RuntimeService/Version
	Jan 16 02:40:18 addons-690916 crio[717]: time="2024-01-16 02:40:18.217673790Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=4af103ea-d3e3-4e20-97c5-e8ecedc4984d name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 02:40:18 addons-690916 crio[717]: time="2024-01-16 02:40:18.218928888Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705372818218910200,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:575388,},InodesUsed:&UInt64Value{Value:233,},},},}" file="go-grpc-middleware/chain.go:25" id=4af103ea-d3e3-4e20-97c5-e8ecedc4984d name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 02:40:18 addons-690916 crio[717]: time="2024-01-16 02:40:18.219736848Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=8d1f5a89-b003-4826-97c2-bd400f346b82 name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 02:40:18 addons-690916 crio[717]: time="2024-01-16 02:40:18.219860061Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=8d1f5a89-b003-4826-97c2-bd400f346b82 name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 02:40:18 addons-690916 crio[717]: time="2024-01-16 02:40:18.220334191Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f34675bcfb69e4ce55b8076891d5695522ea67c3dca248d0cf18a574600c73a4,PodSandboxId:fd6df6ae76a51ce056eb95b9db8797322df4fb1f2b426f7647127c781935970f,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1705372809924826277,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-j29kv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d4710159-b7b2-4773-aa2d-e9085b2ebf12,},Annotations:map[string]string{io.kubernetes.container.hash: 74d8c879,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:897d271fb9c2106feac7b23ef1defceb9c7a2f8311477e3d664063be779d4444,PodSandboxId:b88285b9921d74320138c1f094d34e8faaaf00d16e415b35321235a241d308d1,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:3c6da859a989f285b2fd2ac2f4763d1884d54a51e4405301e5324e0b2b70bd67,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:3c6da859a989f285b2fd2ac2f4763d1884d54a51e4405301e5324e0b2b70bd67,State:CONTAINER_RUNNING,CreatedAt:1705372690128589164,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7ddfbb94ff-d8hns,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: 2821060b-5918-451a-a1f1-1be30e4dc855,},An
notations:map[string]string{io.kubernetes.container.hash: af51c193,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:127a3674d6905f9573943cdf4a675504b24f64d399eedb6b8bf1b2d316f79ab2,PodSandboxId:c33ff6bb556c4525e55a63cb534de3412e01b61b75ecf3b68d985605200b5460,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686,State:CONTAINER_RUNNING,CreatedAt:1705372670014843645,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default
,io.kubernetes.pod.uid: ad3129e2-2f54-4da9-9249-7a6219249f7b,},Annotations:map[string]string{io.kubernetes.container.hash: f73320f9,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89607d06f0194d0e9efb048623d649f868077215ed0c3918be9a2aed11568fb9,PodSandboxId:e3a53adb6d2d8a7b84a482582838740ff25d16f2230fd986fe88769f329fd7b3,Metadata:&ContainerMetadata{Name:patch,Attempt:3,},Image:&ImageSpec{Image:1ebff0f9671bc015dc340b12c5bf6f3dbda7d0a8b5332bd095f21bd52e1b30fb,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1705372631713445163,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: in
gress-nginx-admission-patch-xfgtj,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 0d6efebd-a4d9-4f7c-973f-a8a6dd365451,},Annotations:map[string]string{io.kubernetes.container.hash: 8f3fa968,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:133d4cee334004618d510c0d82149f879f9ccd1275d540e4c68844f838d16820,PodSandboxId:671d083b2e08245f28c380c66fcd25f5849c60f93c428ea9a42c58423115689b,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1705372623227383531,Labels:map[string]string{io.kubernetes.container.name:
gcp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-dgnrl,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 002d4fea-92dd-4d9a-b165-26ca685eacc2,},Annotations:map[string]string{io.kubernetes.container.hash: 59dbc75b,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8281a718fc1d2bac136d45b7551b9b0104e9827513aae6dc0c23c9d6044c4d0,PodSandboxId:92a110943caafdd21513773bd1ea7014a71b063c244b9ed60da8dec81e3c2d7f,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c59
65b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1705372601618469827,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-fhp4x,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 3c52a4d4-efa0-492b-b36c-e6ed01377234,},Annotations:map[string]string{io.kubernetes.container.hash: 70f997d5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6c6ec0b0a36392e0cae46d27911d6ed0b0a09f7332d1b4bd9bfe04efc75108f,PodSandboxId:9228b7ae80ab9c5e4c84c6c1cc189520f515e6431f8ef2833f59c1b21e9423d3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c44
1c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705372555603754738,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 229f9497-4d92-4b62-8c3c-0010caf4a418,},Annotations:map[string]string{io.kubernetes.container.hash: 7eeb28f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02908854f641e139d91042021648626681a433cc41f649de3ca264c847c56efa,PodSandboxId:4cb056c2e1d514612099291ab28347c6f41c6d8660d831355d546e16aa15c2e8,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},},ImageRef:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e
727bccc0e39f9329310,State:CONTAINER_RUNNING,CreatedAt:1705372553537789247,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-9947fc6bf-7d5hp,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 751a0dad-e5ce-44e0-888c-bf7e74f9e70e,},Annotations:map[string]string{io.kubernetes.container.hash: 3a103379,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:184c590322abeab5df31eaa66f8a2c1dae362cbb1a6a48269c17dd14ce24bb80,PodSandboxId:99c2aa13adbcb9b540a5e07debfe23199dbff3fda884e3cb17b9051e9b730553,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:regist
ry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1705372536611227131,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xmxx2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f4aecbb-0f00-4675-af12-a390c45121da,},Annotations:map[string]string{io.kubernetes.container.hash: 64e553ed,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f7f3c925be8f812052f6b159e0958056ba30f6cc1a7b57d33b9d33f4f537bab,PodSandboxId:aeccc8d10670f70c2887484be37f4bd5f42599bd50c6e35d5dacd0c2ff0b2d5f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead
06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1705372540648848184,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-sx897,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f050772-f775-4ee1-8b1e-7db7e2c83fb5,},Annotations:map[string]string{io.kubernetes.container.hash: ec971d3f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2018f9809e3cb61e90992f22bec6870c1dd349160485490c8f8b562369ccd92b,PodSandboxId:4379de7428d6effd872f6687a2c34d72abb128a6d527a25617b5874e31ce066f,Metadata:&ContainerMetadata{Na
me:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1705372512299713021,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-690916,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e15ab68460df4a8f909e15fc9444f5d,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae1a246302684304eb4b0325a78d21d75927c673f5bf6dab42f5f6a7e13a1724,PodSandboxId:07b5500785688484bd134edf62a126129c2e223bbd907666bc6dfa098040afcf,Metadata:&ContainerMetadata{Name:etcd,Attempt
:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1705372512033043449,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-690916,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57d725b3b32a4da55c4be812e89b7538,},Annotations:map[string]string{io.kubernetes.container.hash: 8bc03e9f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:842fe1c35d9878a3736740b28b8dfddb4c69474338c89befd7e7bd1550543ff5,PodSandboxId:38d3271e610a53aab3b98cb972e39049b17c0424e608d2067162209024c75cec,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db334647
25e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1705372511895260337,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-690916,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12d4a91283a80cf232fb877d6172eb0c,},Annotations:map[string]string{io.kubernetes.container.hash: a61142ae,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ad9676ac4a1b651ca9f614f61a4e788cd41ef1a58cb447881e85401e9e9065d,PodSandboxId:bf4e57c68e4b9daba89b32f5ac057ef5df8582177bb49ebf846b02617af9df13,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7
188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1705372511760096258,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-690916,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a065c91fb4801a64cc2cf7907c77ca2,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=8d1f5a89-b003-4826-97c2-bd400f346b82 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f34675bcfb69e       gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7                      8 seconds ago       Running             hello-world-app           0                   fd6df6ae76a51       hello-world-app-5d77478584-j29kv
	897d271fb9c21       ghcr.io/headlamp-k8s/headlamp@sha256:3c6da859a989f285b2fd2ac2f4763d1884d54a51e4405301e5324e0b2b70bd67                        2 minutes ago       Running             headlamp                  0                   b88285b9921d7       headlamp-7ddfbb94ff-d8hns
	127a3674d6905       docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686                              2 minutes ago       Running             nginx                     0                   c33ff6bb556c4       nginx
	89607d06f0194       1ebff0f9671bc015dc340b12c5bf6f3dbda7d0a8b5332bd095f21bd52e1b30fb                                                             3 minutes ago       Exited              patch                     3                   e3a53adb6d2d8       ingress-nginx-admission-patch-xfgtj
	133d4cee33400       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06                 3 minutes ago       Running             gcp-auth                  0                   671d083b2e082       gcp-auth-d4c87556c-dgnrl
	a8281a718fc1d       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385   3 minutes ago       Exited              create                    0                   92a110943caaf       ingress-nginx-admission-create-fhp4x
	e6c6ec0b0a363       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago       Running             storage-provisioner       0                   9228b7ae80ab9       storage-provisioner
	02908854f641e       docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310                              4 minutes ago       Running             yakd                      0                   4cb056c2e1d51       yakd-dashboard-9947fc6bf-7d5hp
	6f7f3c925be8f       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                                             4 minutes ago       Running             coredns                   0                   aeccc8d10670f       coredns-5dd5756b68-sx897
	184c590322abe       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                                             4 minutes ago       Running             kube-proxy                0                   99c2aa13adbcb       kube-proxy-xmxx2
	2018f9809e3cb       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                                             5 minutes ago       Running             kube-scheduler            0                   4379de7428d6e       kube-scheduler-addons-690916
	ae1a246302684       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                                             5 minutes ago       Running             etcd                      0                   07b5500785688       etcd-addons-690916
	842fe1c35d987       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                                             5 minutes ago       Running             kube-apiserver            0                   38d3271e610a5       kube-apiserver-addons-690916
	3ad9676ac4a1b       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                                             5 minutes ago       Running             kube-controller-manager   0                   bf4e57c68e4b9       kube-controller-manager-addons-690916
	
	
	==> coredns [6f7f3c925be8f812052f6b159e0958056ba30f6cc1a7b57d33b9d33f4f537bab] <==
	[INFO] 10.244.0.7:32907 - 65452 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000112922s
	[INFO] 10.244.0.7:44906 - 39980 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000127772s
	[INFO] 10.244.0.7:44906 - 49454 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000091408s
	[INFO] 10.244.0.7:37092 - 21701 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000146896s
	[INFO] 10.244.0.7:37092 - 33731 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000146031s
	[INFO] 10.244.0.7:41183 - 61386 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000207535s
	[INFO] 10.244.0.7:41183 - 15305 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000096819s
	[INFO] 10.244.0.7:40903 - 27704 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000215407s
	[INFO] 10.244.0.7:40903 - 44093 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000046709s
	[INFO] 10.244.0.7:59222 - 54350 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000113754s
	[INFO] 10.244.0.7:59222 - 23873 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00040173s
	[INFO] 10.244.0.7:46062 - 14013 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000105584s
	[INFO] 10.244.0.7:46062 - 59064 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000050968s
	[INFO] 10.244.0.7:48049 - 16569 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000098189s
	[INFO] 10.244.0.7:48049 - 5831 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000205837s
	[INFO] 10.244.0.21:39537 - 3698 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.001253988s
	[INFO] 10.244.0.21:48365 - 27208 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.004279226s
	[INFO] 10.244.0.21:45943 - 4523 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00019454s
	[INFO] 10.244.0.21:36022 - 44830 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000240315s
	[INFO] 10.244.0.21:47425 - 58554 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00016223s
	[INFO] 10.244.0.21:51208 - 903 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000164841s
	[INFO] 10.244.0.21:40407 - 64103 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000883035s
	[INFO] 10.244.0.21:40864 - 6673 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 344 0.000540406s
	[INFO] 10.244.0.26:49199 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.001773569s
	[INFO] 10.244.0.26:43264 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000579287s
	
	
	==> describe nodes <==
	Name:               addons-690916
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-690916
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6e8fa5f64d0e7272be43ff25ed3826261f0a2578
	                    minikube.k8s.io/name=addons-690916
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_16T02_35_19_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-690916
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Jan 2024 02:35:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-690916
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Jan 2024 02:40:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Jan 2024 02:38:23 +0000   Tue, 16 Jan 2024 02:35:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Jan 2024 02:38:23 +0000   Tue, 16 Jan 2024 02:35:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Jan 2024 02:38:23 +0000   Tue, 16 Jan 2024 02:35:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Jan 2024 02:38:23 +0000   Tue, 16 Jan 2024 02:35:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.234
	  Hostname:    addons-690916
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             3914496Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             3914496Ki
	  pods:               110
	System Info:
	  Machine ID:                 a3d36b7529ca4c47b9019dae4cbc1e75
	  System UUID:                a3d36b75-29ca-4c47-b901-9dae4cbc1e75
	  Boot ID:                    22600748-2e93-4f59-ac28-56f962fab411
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5d77478584-j29kv         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m35s
	  gcp-auth                    gcp-auth-d4c87556c-dgnrl                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m33s
	  headlamp                    headlamp-7ddfbb94ff-d8hns                0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m14s
	  kube-system                 coredns-5dd5756b68-sx897                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m45s
	  kube-system                 etcd-addons-690916                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m59s
	  kube-system                 kube-apiserver-addons-690916             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m59s
	  kube-system                 kube-controller-manager-addons-690916    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m
	  kube-system                 kube-proxy-xmxx2                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m46s
	  kube-system                 kube-scheduler-addons-690916             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m59s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m39s
	  yakd-dashboard              yakd-dashboard-9947fc6bf-7d5hp           0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     4m38s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             298Mi (7%)  426Mi (11%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m29s  kube-proxy       
	  Normal  Starting                 4m59s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m59s  kubelet          Node addons-690916 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m59s  kubelet          Node addons-690916 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m59s  kubelet          Node addons-690916 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m59s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                4m58s  kubelet          Node addons-690916 status is now: NodeReady
	  Normal  RegisteredNode           4m47s  node-controller  Node addons-690916 event: Registered Node addons-690916 in Controller
	
	
	==> dmesg <==
	[  +5.049549] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.439417] systemd-fstab-generator[643]: Ignoring "noauto" for root device
	[  +0.117791] systemd-fstab-generator[654]: Ignoring "noauto" for root device
	[  +0.150699] systemd-fstab-generator[667]: Ignoring "noauto" for root device
	[  +0.121795] systemd-fstab-generator[678]: Ignoring "noauto" for root device
	[  +0.235190] systemd-fstab-generator[703]: Ignoring "noauto" for root device
	[Jan16 02:35] systemd-fstab-generator[910]: Ignoring "noauto" for root device
	[  +9.286169] systemd-fstab-generator[1243]: Ignoring "noauto" for root device
	[ +21.396173] kauditd_printk_skb: 30 callbacks suppressed
	[  +6.152637] kauditd_printk_skb: 34 callbacks suppressed
	[  +5.041772] kauditd_printk_skb: 12 callbacks suppressed
	[Jan16 02:36] kauditd_printk_skb: 6 callbacks suppressed
	[ +24.749272] kauditd_printk_skb: 22 callbacks suppressed
	[Jan16 02:37] kauditd_printk_skb: 30 callbacks suppressed
	[ +15.463138] kauditd_printk_skb: 22 callbacks suppressed
	[ +18.891536] kauditd_printk_skb: 13 callbacks suppressed
	[  +5.955695] kauditd_printk_skb: 24 callbacks suppressed
	[  +5.844483] kauditd_printk_skb: 24 callbacks suppressed
	[  +9.256613] kauditd_printk_skb: 1 callbacks suppressed
	[Jan16 02:38] kauditd_printk_skb: 6 callbacks suppressed
	[  +6.934740] kauditd_printk_skb: 3 callbacks suppressed
	[ +16.409163] kauditd_printk_skb: 12 callbacks suppressed
	[Jan16 02:40] kauditd_printk_skb: 5 callbacks suppressed
	
	
	==> etcd [ae1a246302684304eb4b0325a78d21d75927c673f5bf6dab42f5f6a7e13a1724] <==
	{"level":"info","ts":"2024-01-16T02:36:51.379594Z","caller":"traceutil/trace.go:171","msg":"trace[58455760] transaction","detail":"{read_only:false; response_revision:1089; number_of_response:1; }","duration":"148.119514ms","start":"2024-01-16T02:36:51.231467Z","end":"2024-01-16T02:36:51.379586Z","steps":["trace[58455760] 'process raft request'  (duration: 147.639631ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-16T02:36:55.605729Z","caller":"traceutil/trace.go:171","msg":"trace[57172500] transaction","detail":"{read_only:false; response_revision:1113; number_of_response:1; }","duration":"179.78006ms","start":"2024-01-16T02:36:55.425932Z","end":"2024-01-16T02:36:55.605712Z","steps":["trace[57172500] 'process raft request'  (duration: 179.497798ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-16T02:36:59.182603Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"225.166808ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-01-16T02:36:59.182713Z","caller":"traceutil/trace.go:171","msg":"trace[1938315752] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1125; }","duration":"225.28771ms","start":"2024-01-16T02:36:58.957415Z","end":"2024-01-16T02:36:59.182703Z","steps":["trace[1938315752] 'range keys from in-memory index tree'  (duration: 224.997677ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-16T02:36:59.182949Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"145.14979ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:82382"}
	{"level":"info","ts":"2024-01-16T02:36:59.183131Z","caller":"traceutil/trace.go:171","msg":"trace[1196154243] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1125; }","duration":"145.333811ms","start":"2024-01-16T02:36:59.03779Z","end":"2024-01-16T02:36:59.183123Z","steps":["trace[1196154243] 'range keys from in-memory index tree'  (duration: 144.81325ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-16T02:37:33.254356Z","caller":"traceutil/trace.go:171","msg":"trace[650771584] transaction","detail":"{read_only:false; response_revision:1317; number_of_response:1; }","duration":"338.004943ms","start":"2024-01-16T02:37:32.916317Z","end":"2024-01-16T02:37:33.254322Z","steps":["trace[650771584] 'process raft request'  (duration: 337.869484ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-16T02:37:33.254819Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-16T02:37:32.916223Z","time spent":"338.409046ms","remote":"127.0.0.1:59906","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4259,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/local-path-storage/helper-pod-delete-pvc-e5d0b07b-ee14-47a2-bd87-8d60dd23d5f0\" mod_revision:1314 > success:<request_put:<key:\"/registry/pods/local-path-storage/helper-pod-delete-pvc-e5d0b07b-ee14-47a2-bd87-8d60dd23d5f0\" value_size:4159 >> failure:<request_range:<key:\"/registry/pods/local-path-storage/helper-pod-delete-pvc-e5d0b07b-ee14-47a2-bd87-8d60dd23d5f0\" > >"}
	{"level":"info","ts":"2024-01-16T02:37:46.200865Z","caller":"traceutil/trace.go:171","msg":"trace[1462845379] linearizableReadLoop","detail":"{readStateIndex:1494; appliedIndex:1493; }","duration":"244.487173ms","start":"2024-01-16T02:37:45.956355Z","end":"2024-01-16T02:37:46.200842Z","steps":["trace[1462845379] 'read index received'  (duration: 244.354838ms)","trace[1462845379] 'applied index is now lower than readState.Index'  (duration: 131.69µs)"],"step_count":2}
	{"level":"info","ts":"2024-01-16T02:37:46.201222Z","caller":"traceutil/trace.go:171","msg":"trace[350795863] transaction","detail":"{read_only:false; response_revision:1445; number_of_response:1; }","duration":"305.67954ms","start":"2024-01-16T02:37:45.895532Z","end":"2024-01-16T02:37:46.201211Z","steps":["trace[350795863] 'process raft request'  (duration: 305.223981ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-16T02:37:46.201305Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"158.361139ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/ingress-nginx/ingress-nginx-controller-69cff4fd79-xpn74.17aab35d3ad68c0c\" ","response":"range_response_count:1 size:797"}
	{"level":"info","ts":"2024-01-16T02:37:46.201387Z","caller":"traceutil/trace.go:171","msg":"trace[1061812641] range","detail":"{range_begin:/registry/events/ingress-nginx/ingress-nginx-controller-69cff4fd79-xpn74.17aab35d3ad68c0c; range_end:; response_count:1; response_revision:1445; }","duration":"158.443973ms","start":"2024-01-16T02:37:46.042928Z","end":"2024-01-16T02:37:46.201372Z","steps":["trace[1061812641] 'agreement among raft nodes before linearized reading'  (duration: 158.319178ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-16T02:37:46.201595Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"127.861243ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" ","response":"range_response_count:1 size:499"}
	{"level":"info","ts":"2024-01-16T02:37:46.201644Z","caller":"traceutil/trace.go:171","msg":"trace[1511015965] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1445; }","duration":"127.912824ms","start":"2024-01-16T02:37:46.073725Z","end":"2024-01-16T02:37:46.201638Z","steps":["trace[1511015965] 'agreement among raft nodes before linearized reading'  (duration: 127.833949ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-16T02:37:46.20161Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"245.271654ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-01-16T02:37:46.201734Z","caller":"traceutil/trace.go:171","msg":"trace[566534863] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1445; }","duration":"245.39535ms","start":"2024-01-16T02:37:45.95633Z","end":"2024-01-16T02:37:46.201726Z","steps":["trace[566534863] 'agreement among raft nodes before linearized reading'  (duration: 245.258741ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-16T02:37:46.201773Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"155.979951ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:3 size:8631"}
	{"level":"info","ts":"2024-01-16T02:37:46.20182Z","caller":"traceutil/trace.go:171","msg":"trace[369310229] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:3; response_revision:1445; }","duration":"156.027896ms","start":"2024-01-16T02:37:46.045786Z","end":"2024-01-16T02:37:46.201814Z","steps":["trace[369310229] 'agreement among raft nodes before linearized reading'  (duration: 155.947713ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-16T02:37:46.201328Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-16T02:37:45.895515Z","time spent":"305.759393ms","remote":"127.0.0.1:59890","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":573,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/namespaces/gadget\" mod_revision:406 > success:<request_put:<key:\"/registry/namespaces/gadget\" value_size:538 >> failure:<request_range:<key:\"/registry/namespaces/gadget\" > >"}
	{"level":"info","ts":"2024-01-16T02:37:50.688355Z","caller":"traceutil/trace.go:171","msg":"trace[48682353] transaction","detail":"{read_only:false; response_revision:1481; number_of_response:1; }","duration":"150.866095ms","start":"2024-01-16T02:37:50.537458Z","end":"2024-01-16T02:37:50.688324Z","steps":["trace[48682353] 'process raft request'  (duration: 150.773871ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-16T02:38:08.515206Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"307.375268ms","expected-duration":"100ms","prefix":"","request":"header:<ID:41813396940032710 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.39.234\" mod_revision:1557 > success:<request_put:<key:\"/registry/masterleases/192.168.39.234\" value_size:67 lease:41813396940032708 >> failure:<request_range:<key:\"/registry/masterleases/192.168.39.234\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-01-16T02:38:08.515644Z","caller":"traceutil/trace.go:171","msg":"trace[684385293] transaction","detail":"{read_only:false; response_revision:1625; number_of_response:1; }","duration":"424.542025ms","start":"2024-01-16T02:38:08.091079Z","end":"2024-01-16T02:38:08.515621Z","steps":["trace[684385293] 'process raft request'  (duration: 116.659652ms)","trace[684385293] 'compare'  (duration: 307.08136ms)"],"step_count":2}
	{"level":"warn","ts":"2024-01-16T02:38:08.515736Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-16T02:38:08.091062Z","time spent":"424.640258ms","remote":"127.0.0.1:59870","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":119,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/masterleases/192.168.39.234\" mod_revision:1557 > success:<request_put:<key:\"/registry/masterleases/192.168.39.234\" value_size:67 lease:41813396940032708 >> failure:<request_range:<key:\"/registry/masterleases/192.168.39.234\" > >"}
	{"level":"info","ts":"2024-01-16T02:38:39.658611Z","caller":"traceutil/trace.go:171","msg":"trace[1742334770] transaction","detail":"{read_only:false; response_revision:1793; number_of_response:1; }","duration":"133.197051ms","start":"2024-01-16T02:38:39.525392Z","end":"2024-01-16T02:38:39.658589Z","steps":["trace[1742334770] 'process raft request'  (duration: 132.831745ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-16T02:38:44.719758Z","caller":"traceutil/trace.go:171","msg":"trace[722586807] transaction","detail":"{read_only:false; response_revision:1797; number_of_response:1; }","duration":"227.051162ms","start":"2024-01-16T02:38:44.49269Z","end":"2024-01-16T02:38:44.719741Z","steps":["trace[722586807] 'process raft request'  (duration: 226.894108ms)"],"step_count":1}
	
	
	==> gcp-auth [133d4cee334004618d510c0d82149f879f9ccd1275d540e4c68844f838d16820] <==
	2024/01/16 02:37:03 GCP Auth Webhook started!
	2024/01/16 02:37:19 Ready to marshal response ...
	2024/01/16 02:37:19 Ready to write response ...
	2024/01/16 02:37:19 Ready to marshal response ...
	2024/01/16 02:37:19 Ready to write response ...
	2024/01/16 02:37:29 Ready to marshal response ...
	2024/01/16 02:37:29 Ready to write response ...
	2024/01/16 02:37:30 Ready to marshal response ...
	2024/01/16 02:37:30 Ready to write response ...
	2024/01/16 02:37:30 Ready to marshal response ...
	2024/01/16 02:37:30 Ready to write response ...
	2024/01/16 02:37:38 Ready to marshal response ...
	2024/01/16 02:37:38 Ready to write response ...
	2024/01/16 02:37:43 Ready to marshal response ...
	2024/01/16 02:37:43 Ready to write response ...
	2024/01/16 02:38:04 Ready to marshal response ...
	2024/01/16 02:38:04 Ready to write response ...
	2024/01/16 02:38:04 Ready to marshal response ...
	2024/01/16 02:38:04 Ready to write response ...
	2024/01/16 02:38:04 Ready to marshal response ...
	2024/01/16 02:38:04 Ready to write response ...
	2024/01/16 02:38:07 Ready to marshal response ...
	2024/01/16 02:38:07 Ready to write response ...
	2024/01/16 02:40:07 Ready to marshal response ...
	2024/01/16 02:40:07 Ready to write response ...
	
	
	==> kernel <==
	 02:40:18 up 5 min,  0 users,  load average: 1.17, 1.86, 1.01
	Linux addons-690916 5.10.57 #1 SMP Thu Dec 28 22:04:21 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [842fe1c35d9878a3736740b28b8dfddb4c69474338c89befd7e7bd1550543ff5] <==
	I0116 02:37:46.496659       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0116 02:37:47.556694       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0116 02:37:54.693867       1 controller.go:624] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0116 02:38:04.441652       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.106.169.198"}
	I0116 02:38:14.767299       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0116 02:38:26.002181       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0116 02:38:26.007255       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0116 02:38:26.019808       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0116 02:38:26.020027       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0116 02:38:26.038230       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0116 02:38:26.038338       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0116 02:38:26.057906       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0116 02:38:26.058764       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0116 02:38:26.059727       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0116 02:38:26.059783       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0116 02:38:26.060520       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0116 02:38:26.060587       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0116 02:38:26.087108       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0116 02:38:26.087220       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0116 02:38:26.131393       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0116 02:38:26.131513       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0116 02:38:27.060749       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0116 02:38:27.132818       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0116 02:38:27.172384       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0116 02:40:07.522641       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.104.152.191"}
	
	
	==> kube-controller-manager [3ad9676ac4a1b651ca9f614f61a4e788cd41ef1a58cb447881e85401e9e9065d] <==
	W0116 02:39:09.580263       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0116 02:39:09.580388       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0116 02:39:12.321559       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0116 02:39:12.321615       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0116 02:39:15.386108       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0116 02:39:15.386410       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0116 02:39:49.050095       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0116 02:39:49.050267       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0116 02:39:49.282583       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0116 02:39:49.282731       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0116 02:39:51.747932       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0116 02:39:51.748119       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0116 02:39:55.318191       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0116 02:39:55.318317       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0116 02:40:07.265136       1 event.go:307] "Event occurred" object="default/hello-world-app" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-world-app-5d77478584 to 1"
	I0116 02:40:07.327903       1 event.go:307] "Event occurred" object="default/hello-world-app-5d77478584" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-world-app-5d77478584-j29kv"
	I0116 02:40:07.340364       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="75.880046ms"
	I0116 02:40:07.380521       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="40.053336ms"
	I0116 02:40:07.380624       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="49.995µs"
	I0116 02:40:07.381126       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="49.03µs"
	I0116 02:40:10.152241       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-69cff4fd79" duration="8.36µs"
	I0116 02:40:10.155582       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-create"
	I0116 02:40:10.170573       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	I0116 02:40:10.855034       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="15.255521ms"
	I0116 02:40:10.855274       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="108.938µs"
	
	
	==> kube-proxy [184c590322abeab5df31eaa66f8a2c1dae362cbb1a6a48269c17dd14ce24bb80] <==
	I0116 02:35:46.632651       1 server_others.go:69] "Using iptables proxy"
	I0116 02:35:46.790916       1 node.go:141] Successfully retrieved node IP: 192.168.39.234
	I0116 02:35:49.374568       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0116 02:35:49.374616       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0116 02:35:49.424073       1 server_others.go:152] "Using iptables Proxier"
	I0116 02:35:49.424144       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0116 02:35:49.425226       1 server.go:846] "Version info" version="v1.28.4"
	I0116 02:35:49.425241       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0116 02:35:49.606936       1 config.go:188] "Starting service config controller"
	I0116 02:35:49.615848       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0116 02:35:49.626254       1 config.go:97] "Starting endpoint slice config controller"
	I0116 02:35:49.626555       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0116 02:35:49.627752       1 config.go:315] "Starting node config controller"
	I0116 02:35:49.627762       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0116 02:35:49.728917       1 shared_informer.go:318] Caches are synced for node config
	I0116 02:35:49.729036       1 shared_informer.go:318] Caches are synced for service config
	I0116 02:35:49.729059       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [2018f9809e3cb61e90992f22bec6870c1dd349160485490c8f8b562369ccd92b] <==
	W0116 02:35:16.128190       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0116 02:35:16.128241       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0116 02:35:17.079075       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0116 02:35:17.079126       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0116 02:35:17.093699       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0116 02:35:17.093757       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0116 02:35:17.094845       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0116 02:35:17.094892       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0116 02:35:17.109489       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0116 02:35:17.109559       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0116 02:35:17.197253       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0116 02:35:17.197311       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0116 02:35:17.215911       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0116 02:35:17.216090       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0116 02:35:17.242418       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0116 02:35:17.242484       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0116 02:35:17.276125       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0116 02:35:17.276220       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0116 02:35:17.331761       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0116 02:35:17.331863       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0116 02:35:17.454836       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0116 02:35:17.454938       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0116 02:35:17.605201       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0116 02:35:17.605296       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0116 02:35:20.307300       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Tue 2024-01-16 02:34:47 UTC, ends at Tue 2024-01-16 02:40:18 UTC. --
	Jan 16 02:40:07 addons-690916 kubelet[1250]: I0116 02:40:07.346208    1250 memory_manager.go:346] "RemoveStaleState removing state" podUID="8943cbaa-9281-49b8-bad6-0eea46d0016c" containerName="liveness-probe"
	Jan 16 02:40:07 addons-690916 kubelet[1250]: I0116 02:40:07.390818    1250 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vpl9\" (UniqueName: \"kubernetes.io/projected/d4710159-b7b2-4773-aa2d-e9085b2ebf12-kube-api-access-4vpl9\") pod \"hello-world-app-5d77478584-j29kv\" (UID: \"d4710159-b7b2-4773-aa2d-e9085b2ebf12\") " pod="default/hello-world-app-5d77478584-j29kv"
	Jan 16 02:40:07 addons-690916 kubelet[1250]: I0116 02:40:07.390870    1250 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/d4710159-b7b2-4773-aa2d-e9085b2ebf12-gcp-creds\") pod \"hello-world-app-5d77478584-j29kv\" (UID: \"d4710159-b7b2-4773-aa2d-e9085b2ebf12\") " pod="default/hello-world-app-5d77478584-j29kv"
	Jan 16 02:40:08 addons-690916 kubelet[1250]: I0116 02:40:08.802314    1250 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k2cmd\" (UniqueName: \"kubernetes.io/projected/f9b4803f-094e-4494-8a75-581074b26c99-kube-api-access-k2cmd\") pod \"f9b4803f-094e-4494-8a75-581074b26c99\" (UID: \"f9b4803f-094e-4494-8a75-581074b26c99\") "
	Jan 16 02:40:08 addons-690916 kubelet[1250]: I0116 02:40:08.806329    1250 scope.go:117] "RemoveContainer" containerID="9a92943701fcf4a143dbcbcde55159cb15eb5496cb813169dac7a31eb302d5d6"
	Jan 16 02:40:08 addons-690916 kubelet[1250]: I0116 02:40:08.813296    1250 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9b4803f-094e-4494-8a75-581074b26c99-kube-api-access-k2cmd" (OuterVolumeSpecName: "kube-api-access-k2cmd") pod "f9b4803f-094e-4494-8a75-581074b26c99" (UID: "f9b4803f-094e-4494-8a75-581074b26c99"). InnerVolumeSpecName "kube-api-access-k2cmd". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jan 16 02:40:08 addons-690916 kubelet[1250]: I0116 02:40:08.903640    1250 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-k2cmd\" (UniqueName: \"kubernetes.io/projected/f9b4803f-094e-4494-8a75-581074b26c99-kube-api-access-k2cmd\") on node \"addons-690916\" DevicePath \"\""
	Jan 16 02:40:09 addons-690916 kubelet[1250]: I0116 02:40:09.016228    1250 scope.go:117] "RemoveContainer" containerID="9a92943701fcf4a143dbcbcde55159cb15eb5496cb813169dac7a31eb302d5d6"
	Jan 16 02:40:09 addons-690916 kubelet[1250]: E0116 02:40:09.028434    1250 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a92943701fcf4a143dbcbcde55159cb15eb5496cb813169dac7a31eb302d5d6\": container with ID starting with 9a92943701fcf4a143dbcbcde55159cb15eb5496cb813169dac7a31eb302d5d6 not found: ID does not exist" containerID="9a92943701fcf4a143dbcbcde55159cb15eb5496cb813169dac7a31eb302d5d6"
	Jan 16 02:40:09 addons-690916 kubelet[1250]: I0116 02:40:09.028576    1250 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a92943701fcf4a143dbcbcde55159cb15eb5496cb813169dac7a31eb302d5d6"} err="failed to get container status \"9a92943701fcf4a143dbcbcde55159cb15eb5496cb813169dac7a31eb302d5d6\": rpc error: code = NotFound desc = could not find container \"9a92943701fcf4a143dbcbcde55159cb15eb5496cb813169dac7a31eb302d5d6\": container with ID starting with 9a92943701fcf4a143dbcbcde55159cb15eb5496cb813169dac7a31eb302d5d6 not found: ID does not exist"
	Jan 16 02:40:09 addons-690916 kubelet[1250]: I0116 02:40:09.683941    1250 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="f9b4803f-094e-4494-8a75-581074b26c99" path="/var/lib/kubelet/pods/f9b4803f-094e-4494-8a75-581074b26c99/volumes"
	Jan 16 02:40:10 addons-690916 kubelet[1250]: I0116 02:40:10.842859    1250 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/hello-world-app-5d77478584-j29kv" podStartSLOduration=2.399747482 podCreationTimestamp="2024-01-16 02:40:07 +0000 UTC" firstStartedPulling="2024-01-16 02:40:08.452217502 +0000 UTC m=+288.920459265" lastFinishedPulling="2024-01-16 02:40:09.895254063 +0000 UTC m=+290.363495817" observedRunningTime="2024-01-16 02:40:10.838551639 +0000 UTC m=+291.306793409" watchObservedRunningTime="2024-01-16 02:40:10.842784034 +0000 UTC m=+291.311025805"
	Jan 16 02:40:11 addons-690916 kubelet[1250]: I0116 02:40:11.680467    1250 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="0d6efebd-a4d9-4f7c-973f-a8a6dd365451" path="/var/lib/kubelet/pods/0d6efebd-a4d9-4f7c-973f-a8a6dd365451/volumes"
	Jan 16 02:40:11 addons-690916 kubelet[1250]: I0116 02:40:11.681120    1250 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="3c52a4d4-efa0-492b-b36c-e6ed01377234" path="/var/lib/kubelet/pods/3c52a4d4-efa0-492b-b36c-e6ed01377234/volumes"
	Jan 16 02:40:13 addons-690916 kubelet[1250]: I0116 02:40:13.538319    1250 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pgj79\" (UniqueName: \"kubernetes.io/projected/b438e718-40a6-4848-af5d-e033441e70db-kube-api-access-pgj79\") pod \"b438e718-40a6-4848-af5d-e033441e70db\" (UID: \"b438e718-40a6-4848-af5d-e033441e70db\") "
	Jan 16 02:40:13 addons-690916 kubelet[1250]: I0116 02:40:13.538428    1250 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b438e718-40a6-4848-af5d-e033441e70db-webhook-cert\") pod \"b438e718-40a6-4848-af5d-e033441e70db\" (UID: \"b438e718-40a6-4848-af5d-e033441e70db\") "
	Jan 16 02:40:13 addons-690916 kubelet[1250]: I0116 02:40:13.543451    1250 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b438e718-40a6-4848-af5d-e033441e70db-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "b438e718-40a6-4848-af5d-e033441e70db" (UID: "b438e718-40a6-4848-af5d-e033441e70db"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jan 16 02:40:13 addons-690916 kubelet[1250]: I0116 02:40:13.545210    1250 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b438e718-40a6-4848-af5d-e033441e70db-kube-api-access-pgj79" (OuterVolumeSpecName: "kube-api-access-pgj79") pod "b438e718-40a6-4848-af5d-e033441e70db" (UID: "b438e718-40a6-4848-af5d-e033441e70db"). InnerVolumeSpecName "kube-api-access-pgj79". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jan 16 02:40:13 addons-690916 kubelet[1250]: I0116 02:40:13.639176    1250 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-pgj79\" (UniqueName: \"kubernetes.io/projected/b438e718-40a6-4848-af5d-e033441e70db-kube-api-access-pgj79\") on node \"addons-690916\" DevicePath \"\""
	Jan 16 02:40:13 addons-690916 kubelet[1250]: I0116 02:40:13.639218    1250 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b438e718-40a6-4848-af5d-e033441e70db-webhook-cert\") on node \"addons-690916\" DevicePath \"\""
	Jan 16 02:40:13 addons-690916 kubelet[1250]: I0116 02:40:13.680286    1250 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="b438e718-40a6-4848-af5d-e033441e70db" path="/var/lib/kubelet/pods/b438e718-40a6-4848-af5d-e033441e70db/volumes"
	Jan 16 02:40:13 addons-690916 kubelet[1250]: I0116 02:40:13.839574    1250 scope.go:117] "RemoveContainer" containerID="58f1102e51c10f54322f74b18594145d5134bbdde94644256b9ae7f448ded2c6"
	Jan 16 02:40:13 addons-690916 kubelet[1250]: I0116 02:40:13.862639    1250 scope.go:117] "RemoveContainer" containerID="58f1102e51c10f54322f74b18594145d5134bbdde94644256b9ae7f448ded2c6"
	Jan 16 02:40:13 addons-690916 kubelet[1250]: E0116 02:40:13.863533    1250 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"58f1102e51c10f54322f74b18594145d5134bbdde94644256b9ae7f448ded2c6\": container with ID starting with 58f1102e51c10f54322f74b18594145d5134bbdde94644256b9ae7f448ded2c6 not found: ID does not exist" containerID="58f1102e51c10f54322f74b18594145d5134bbdde94644256b9ae7f448ded2c6"
	Jan 16 02:40:13 addons-690916 kubelet[1250]: I0116 02:40:13.863580    1250 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58f1102e51c10f54322f74b18594145d5134bbdde94644256b9ae7f448ded2c6"} err="failed to get container status \"58f1102e51c10f54322f74b18594145d5134bbdde94644256b9ae7f448ded2c6\": rpc error: code = NotFound desc = could not find container \"58f1102e51c10f54322f74b18594145d5134bbdde94644256b9ae7f448ded2c6\": container with ID starting with 58f1102e51c10f54322f74b18594145d5134bbdde94644256b9ae7f448ded2c6 not found: ID does not exist"
	
	
	==> storage-provisioner [e6c6ec0b0a36392e0cae46d27911d6ed0b0a09f7332d1b4bd9bfe04efc75108f] <==
	I0116 02:35:57.310640       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0116 02:35:57.364062       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0116 02:35:57.364137       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0116 02:35:57.414249       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0116 02:35:57.426483       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2128540a-a65b-4f94-bf92-8021408d0166", APIVersion:"v1", ResourceVersion:"860", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-690916_86a594c8-0437-4caa-8dfc-bb9b0737e19a became leader
	I0116 02:35:57.427904       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-690916_86a594c8-0437-4caa-8dfc-bb9b0737e19a!
	I0116 02:35:57.635604       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-690916_86a594c8-0437-4caa-8dfc-bb9b0737e19a!
	E0116 02:38:18.327138       1 controller.go:1050] claim "64f05e98-a575-4f45-9e7a-99061317ddc6" in work queue no longer exists
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-690916 -n addons-690916
helpers_test.go:261: (dbg) Run:  kubectl --context addons-690916 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (156.08s)
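
The step that fails above is the curl probe run inside the VM: ssh exit status 28 is curl's exit code for an operation timeout, so the request to http://127.0.0.1/ with the nginx.example.com Host header never got a response within the test's window. A minimal sketch of how the same probe could be repeated by hand against this profile (hypothetical follow-up commands, assuming the addons-690916 cluster and the nginx ingress and pod from testdata are still deployed):

	# check that the ingress controller and the test workload are actually running
	kubectl --context addons-690916 -n ingress-nginx get pods -l app.kubernetes.io/component=controller
	kubectl --context addons-690916 get ingress,svc,pods -n default
	# repeat the test's probe verbosely from inside the VM, with an explicit timeout
	out/minikube-linux-amd64 -p addons-690916 ssh "curl -sv --max-time 30 http://127.0.0.1/ -H 'Host: nginx.example.com'"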

                                                
                                    
TestAddons/StoppedEnableDisable (155.3s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-690916
addons_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p addons-690916: exit status 82 (2m1.504704494s)

                                                
                                                
-- stdout --
	* Stopping node "addons-690916"  ...
	* Stopping node "addons-690916"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:174: failed to stop minikube. args "out/minikube-linux-amd64 stop -p addons-690916" : exit status 82
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-690916
addons_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-690916: exit status 11 (21.503652379s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.234:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:178: failed to enable dashboard addon: args "out/minikube-linux-amd64 addons enable dashboard -p addons-690916" : exit status 11
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-690916
addons_test.go:180: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-690916: exit status 11 (6.145030826s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.234:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_7b2045b3edf32de99b3c34afdc43bfaabe8aa3c2_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:182: failed to disable dashboard addon: args "out/minikube-linux-amd64 addons disable dashboard -p addons-690916" : exit status 11
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-690916
addons_test.go:185: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable gvisor -p addons-690916: exit status 11 (6.142749141s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.234:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_8dd43b2cee45a94e37dbac1dd983966d1c97e7d4_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:187: failed to disable non-enabled addon: args "out/minikube-linux-amd64 addons disable gvisor -p addons-690916" : exit status 11
--- FAIL: TestAddons/StoppedEnableDisable (155.30s)
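
The primary failure here is the stop path: minikube stop gives up after roughly two minutes with GUEST_STOP_TIMEOUT while the driver still reports the guest as "Running", and the later addon enable/disable calls fail only as a consequence, because SSH to 192.168.39.234 no longer has a route to the half-stopped machine. A rough sketch of how the machine state could be inspected and the profile cleared before retrying (hypothetical commands, assuming the kvm2 driver and that the libvirt domain is named after the profile):

	# compare minikube's view of the profile with libvirt's view of the guest
	out/minikube-linux-amd64 status -p addons-690916
	virsh list --all | grep addons-690916    # may need sudo or membership in the libvirt group
	# if the guest is wedged, force it off and delete the profile so the next run starts clean
	virsh destroy addons-690916
	out/minikube-linux-amd64 delete -p addons-690916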

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddons (166.94s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:207: (dbg) Run:  kubectl --context ingress-addon-legacy-873808 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:207: (dbg) Done: kubectl --context ingress-addon-legacy-873808 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (9.622255604s)
addons_test.go:232: (dbg) Run:  kubectl --context ingress-addon-legacy-873808 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context ingress-addon-legacy-873808 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [6170b55d-0a92-4ef9-8ac8-5b5318785c81] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [6170b55d-0a92-4ef9-8ac8-5b5318785c81] Running
addons_test.go:250: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 9.13825706s
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-873808 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
E0116 02:50:03.090781  475478 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/addons-690916/client.crt: no such file or directory
E0116 02:51:49.160885  475478 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/functional-193417/client.crt: no such file or directory
E0116 02:51:49.166200  475478 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/functional-193417/client.crt: no such file or directory
E0116 02:51:49.176516  475478 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/functional-193417/client.crt: no such file or directory
E0116 02:51:49.196868  475478 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/functional-193417/client.crt: no such file or directory
E0116 02:51:49.237225  475478 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/functional-193417/client.crt: no such file or directory
E0116 02:51:49.317838  475478 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/functional-193417/client.crt: no such file or directory
E0116 02:51:49.478230  475478 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/functional-193417/client.crt: no such file or directory
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ingress-addon-legacy-873808 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m12.15077503s)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 28

                                                
                                                
** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:286: (dbg) Run:  kubectl --context ingress-addon-legacy-873808 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
E0116 02:51:49.798817  475478 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/functional-193417/client.crt: no such file or directory
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-873808 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.242
addons_test.go:306: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-873808 addons disable ingress-dns --alsologtostderr -v=1
E0116 02:51:50.439986  475478 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/functional-193417/client.crt: no such file or directory
E0116 02:51:51.720224  475478 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/functional-193417/client.crt: no such file or directory
E0116 02:51:54.281094  475478 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/functional-193417/client.crt: no such file or directory
addons_test.go:306: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-873808 addons disable ingress-dns --alsologtostderr -v=1: (5.294759438s)
addons_test.go:311: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-873808 addons disable ingress --alsologtostderr -v=1
E0116 02:51:59.401925  475478 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/functional-193417/client.crt: no such file or directory
addons_test.go:311: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-873808 addons disable ingress --alsologtostderr -v=1: (7.592412311s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ingress-addon-legacy-873808 -n ingress-addon-legacy-873808
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-873808 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-873808 logs -n 25: (1.22922097s)
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddons logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| Command |                                   Args                                    |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| image   | functional-193417 image load --daemon                                     | functional-193417           | jenkins | v1.32.0 | 16 Jan 24 02:47 UTC | 16 Jan 24 02:47 UTC |
	|         | gcr.io/google-containers/addon-resizer:functional-193417                  |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                         |                             |         |         |                     |                     |
	| image   | functional-193417 image ls                                                | functional-193417           | jenkins | v1.32.0 | 16 Jan 24 02:47 UTC | 16 Jan 24 02:47 UTC |
	| image   | functional-193417 image load --daemon                                     | functional-193417           | jenkins | v1.32.0 | 16 Jan 24 02:47 UTC | 16 Jan 24 02:47 UTC |
	|         | gcr.io/google-containers/addon-resizer:functional-193417                  |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                         |                             |         |         |                     |                     |
	| image   | functional-193417 image ls                                                | functional-193417           | jenkins | v1.32.0 | 16 Jan 24 02:47 UTC | 16 Jan 24 02:47 UTC |
	| image   | functional-193417 image save                                              | functional-193417           | jenkins | v1.32.0 | 16 Jan 24 02:47 UTC | 16 Jan 24 02:47 UTC |
	|         | gcr.io/google-containers/addon-resizer:functional-193417                  |                             |         |         |                     |                     |
	|         | /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                         |                             |         |         |                     |                     |
	| image   | functional-193417 image rm                                                | functional-193417           | jenkins | v1.32.0 | 16 Jan 24 02:47 UTC | 16 Jan 24 02:47 UTC |
	|         | gcr.io/google-containers/addon-resizer:functional-193417                  |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                         |                             |         |         |                     |                     |
	| image   | functional-193417 image ls                                                | functional-193417           | jenkins | v1.32.0 | 16 Jan 24 02:47 UTC | 16 Jan 24 02:47 UTC |
	| image   | functional-193417 image load                                              | functional-193417           | jenkins | v1.32.0 | 16 Jan 24 02:47 UTC | 16 Jan 24 02:47 UTC |
	|         | /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                         |                             |         |         |                     |                     |
	| image   | functional-193417 image ls                                                | functional-193417           | jenkins | v1.32.0 | 16 Jan 24 02:47 UTC | 16 Jan 24 02:47 UTC |
	| image   | functional-193417 image save --daemon                                     | functional-193417           | jenkins | v1.32.0 | 16 Jan 24 02:47 UTC | 16 Jan 24 02:47 UTC |
	|         | gcr.io/google-containers/addon-resizer:functional-193417                  |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                         |                             |         |         |                     |                     |
	| ssh     | functional-193417 ssh pgrep                                               | functional-193417           | jenkins | v1.32.0 | 16 Jan 24 02:47 UTC |                     |
	|         | buildkitd                                                                 |                             |         |         |                     |                     |
	| image   | functional-193417                                                         | functional-193417           | jenkins | v1.32.0 | 16 Jan 24 02:47 UTC | 16 Jan 24 02:47 UTC |
	|         | image ls --format yaml                                                    |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                         |                             |         |         |                     |                     |
	| image   | functional-193417                                                         | functional-193417           | jenkins | v1.32.0 | 16 Jan 24 02:47 UTC | 16 Jan 24 02:47 UTC |
	|         | image ls --format short                                                   |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                         |                             |         |         |                     |                     |
	| image   | functional-193417 image build -t                                          | functional-193417           | jenkins | v1.32.0 | 16 Jan 24 02:47 UTC | 16 Jan 24 02:47 UTC |
	|         | localhost/my-image:functional-193417                                      |                             |         |         |                     |                     |
	|         | testdata/build --alsologtostderr                                          |                             |         |         |                     |                     |
	| image   | functional-193417                                                         | functional-193417           | jenkins | v1.32.0 | 16 Jan 24 02:47 UTC | 16 Jan 24 02:47 UTC |
	|         | image ls --format json                                                    |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                         |                             |         |         |                     |                     |
	| image   | functional-193417                                                         | functional-193417           | jenkins | v1.32.0 | 16 Jan 24 02:47 UTC | 16 Jan 24 02:47 UTC |
	|         | image ls --format table                                                   |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                         |                             |         |         |                     |                     |
	| image   | functional-193417 image ls                                                | functional-193417           | jenkins | v1.32.0 | 16 Jan 24 02:47 UTC | 16 Jan 24 02:47 UTC |
	| delete  | -p functional-193417                                                      | functional-193417           | jenkins | v1.32.0 | 16 Jan 24 02:47 UTC | 16 Jan 24 02:47 UTC |
	| start   | -p ingress-addon-legacy-873808                                            | ingress-addon-legacy-873808 | jenkins | v1.32.0 | 16 Jan 24 02:47 UTC | 16 Jan 24 02:49 UTC |
	|         | --kubernetes-version=v1.18.20                                             |                             |         |         |                     |                     |
	|         | --memory=4096 --wait=true                                                 |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                         |                             |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                                                        |                             |         |         |                     |                     |
	|         | --container-runtime=crio                                                  |                             |         |         |                     |                     |
	| addons  | ingress-addon-legacy-873808                                               | ingress-addon-legacy-873808 | jenkins | v1.32.0 | 16 Jan 24 02:49 UTC | 16 Jan 24 02:49 UTC |
	|         | addons enable ingress                                                     |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=5                                                    |                             |         |         |                     |                     |
	| addons  | ingress-addon-legacy-873808                                               | ingress-addon-legacy-873808 | jenkins | v1.32.0 | 16 Jan 24 02:49 UTC | 16 Jan 24 02:49 UTC |
	|         | addons enable ingress-dns                                                 |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=5                                                    |                             |         |         |                     |                     |
	| ssh     | ingress-addon-legacy-873808                                               | ingress-addon-legacy-873808 | jenkins | v1.32.0 | 16 Jan 24 02:49 UTC |                     |
	|         | ssh curl -s http://127.0.0.1/                                             |                             |         |         |                     |                     |
	|         | -H 'Host: nginx.example.com'                                              |                             |         |         |                     |                     |
	| ip      | ingress-addon-legacy-873808 ip                                            | ingress-addon-legacy-873808 | jenkins | v1.32.0 | 16 Jan 24 02:51 UTC | 16 Jan 24 02:51 UTC |
	| addons  | ingress-addon-legacy-873808                                               | ingress-addon-legacy-873808 | jenkins | v1.32.0 | 16 Jan 24 02:51 UTC | 16 Jan 24 02:51 UTC |
	|         | addons disable ingress-dns                                                |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                    |                             |         |         |                     |                     |
	| addons  | ingress-addon-legacy-873808                                               | ingress-addon-legacy-873808 | jenkins | v1.32.0 | 16 Jan 24 02:51 UTC | 16 Jan 24 02:52 UTC |
	|         | addons disable ingress                                                    |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                    |                             |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/16 02:47:43
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0116 02:47:43.180342  484156 out.go:296] Setting OutFile to fd 1 ...
	I0116 02:47:43.180500  484156 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 02:47:43.180514  484156 out.go:309] Setting ErrFile to fd 2...
	I0116 02:47:43.180521  484156 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 02:47:43.180742  484156 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17965-468241/.minikube/bin
	I0116 02:47:43.181349  484156 out.go:303] Setting JSON to false
	I0116 02:47:43.182334  484156 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":12615,"bootTime":1705360648,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1048-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0116 02:47:43.182407  484156 start.go:138] virtualization: kvm guest
	I0116 02:47:43.184729  484156 out.go:177] * [ingress-addon-legacy-873808] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0116 02:47:43.186317  484156 out.go:177]   - MINIKUBE_LOCATION=17965
	I0116 02:47:43.186359  484156 notify.go:220] Checking for updates...
	I0116 02:47:43.187927  484156 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0116 02:47:43.190003  484156 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17965-468241/kubeconfig
	I0116 02:47:43.191688  484156 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17965-468241/.minikube
	I0116 02:47:43.193449  484156 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0116 02:47:43.194974  484156 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0116 02:47:43.196792  484156 driver.go:392] Setting default libvirt URI to qemu:///system
	I0116 02:47:43.233230  484156 out.go:177] * Using the kvm2 driver based on user configuration
	I0116 02:47:43.234820  484156 start.go:298] selected driver: kvm2
	I0116 02:47:43.234841  484156 start.go:902] validating driver "kvm2" against <nil>
	I0116 02:47:43.234868  484156 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0116 02:47:43.235604  484156 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0116 02:47:43.235693  484156 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17965-468241/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0116 02:47:43.251022  484156 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0116 02:47:43.251121  484156 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0116 02:47:43.251368  484156 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0116 02:47:43.251419  484156 cni.go:84] Creating CNI manager for ""
	I0116 02:47:43.251432  484156 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 02:47:43.251441  484156 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0116 02:47:43.251449  484156 start_flags.go:321] config:
	{Name:ingress-addon-legacy-873808 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-873808 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 02:47:43.251593  484156 iso.go:125] acquiring lock: {Name:mk2f3231f0eeeb23816ac363851489181398d1c6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0116 02:47:43.253896  484156 out.go:177] * Starting control plane node ingress-addon-legacy-873808 in cluster ingress-addon-legacy-873808
	I0116 02:47:43.255636  484156 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0116 02:47:43.275514  484156 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4
	I0116 02:47:43.275547  484156 cache.go:56] Caching tarball of preloaded images
	I0116 02:47:43.275729  484156 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0116 02:47:43.277901  484156 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I0116 02:47:43.279673  484156 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I0116 02:47:43.304770  484156 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4?checksum=md5:0d02e096853189c5b37812b400898e14 -> /home/jenkins/minikube-integration/17965-468241/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4
	I0116 02:47:46.753143  484156 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I0116 02:47:46.753272  484156 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17965-468241/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I0116 02:47:47.765908  484156 cache.go:59] Finished verifying existence of preloaded tar for  v1.18.20 on crio
	I0116 02:47:47.766362  484156 profile.go:148] Saving config to /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/ingress-addon-legacy-873808/config.json ...
	I0116 02:47:47.766410  484156 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/ingress-addon-legacy-873808/config.json: {Name:mk15917da0bcc9845fd8456d86772518237484e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:47:47.766631  484156 start.go:365] acquiring machines lock for ingress-addon-legacy-873808: {Name:mk901e5fae8c1a578d989b520053c709bdbdcf06 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0116 02:47:47.766682  484156 start.go:369] acquired machines lock for "ingress-addon-legacy-873808" in 28.962µs
	I0116 02:47:47.766706  484156 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-873808 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-873808 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0116 02:47:47.766789  484156 start.go:125] createHost starting for "" (driver="kvm2")
	I0116 02:47:47.769156  484156 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I0116 02:47:47.769375  484156 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 02:47:47.769423  484156 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:47:47.784105  484156 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33437
	I0116 02:47:47.784666  484156 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:47:47.785333  484156 main.go:141] libmachine: Using API Version  1
	I0116 02:47:47.785355  484156 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:47:47.785709  484156 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:47:47.785918  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) Calling .GetMachineName
	I0116 02:47:47.786047  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) Calling .DriverName
	I0116 02:47:47.786196  484156 start.go:159] libmachine.API.Create for "ingress-addon-legacy-873808" (driver="kvm2")
	I0116 02:47:47.786237  484156 client.go:168] LocalClient.Create starting
	I0116 02:47:47.786276  484156 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem
	I0116 02:47:47.786313  484156 main.go:141] libmachine: Decoding PEM data...
	I0116 02:47:47.786337  484156 main.go:141] libmachine: Parsing certificate...
	I0116 02:47:47.786413  484156 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17965-468241/.minikube/certs/cert.pem
	I0116 02:47:47.786442  484156 main.go:141] libmachine: Decoding PEM data...
	I0116 02:47:47.786461  484156 main.go:141] libmachine: Parsing certificate...
	I0116 02:47:47.786490  484156 main.go:141] libmachine: Running pre-create checks...
	I0116 02:47:47.786504  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) Calling .PreCreateCheck
	I0116 02:47:47.786814  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) Calling .GetConfigRaw
	I0116 02:47:47.787278  484156 main.go:141] libmachine: Creating machine...
	I0116 02:47:47.787301  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) Calling .Create
	I0116 02:47:47.787472  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) Creating KVM machine...
	I0116 02:47:47.788822  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) DBG | found existing default KVM network
	I0116 02:47:47.789569  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) DBG | I0116 02:47:47.789423  484179 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00010f340}
	I0116 02:47:47.798343  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) DBG | trying to create private KVM network mk-ingress-addon-legacy-873808 192.168.39.0/24...
	I0116 02:47:47.872540  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) DBG | private KVM network mk-ingress-addon-legacy-873808 192.168.39.0/24 created
	I0116 02:47:47.872601  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) Setting up store path in /home/jenkins/minikube-integration/17965-468241/.minikube/machines/ingress-addon-legacy-873808 ...
	I0116 02:47:47.872634  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) DBG | I0116 02:47:47.872523  484179 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17965-468241/.minikube
	I0116 02:47:47.872654  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) Building disk image from file:///home/jenkins/minikube-integration/17965-468241/.minikube/cache/iso/amd64/minikube-v1.32.1-1703784139-17866-amd64.iso
	I0116 02:47:47.872685  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) Downloading /home/jenkins/minikube-integration/17965-468241/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17965-468241/.minikube/cache/iso/amd64/minikube-v1.32.1-1703784139-17866-amd64.iso...
	I0116 02:47:48.111444  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) DBG | I0116 02:47:48.111317  484179 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17965-468241/.minikube/machines/ingress-addon-legacy-873808/id_rsa...
	I0116 02:47:48.368364  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) DBG | I0116 02:47:48.368164  484179 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17965-468241/.minikube/machines/ingress-addon-legacy-873808/ingress-addon-legacy-873808.rawdisk...
	I0116 02:47:48.368415  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) DBG | Writing magic tar header
	I0116 02:47:48.368440  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) DBG | Writing SSH key tar header
	I0116 02:47:48.368455  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) DBG | I0116 02:47:48.368329  484179 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17965-468241/.minikube/machines/ingress-addon-legacy-873808 ...
	I0116 02:47:48.368480  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17965-468241/.minikube/machines/ingress-addon-legacy-873808
	I0116 02:47:48.368573  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) Setting executable bit set on /home/jenkins/minikube-integration/17965-468241/.minikube/machines/ingress-addon-legacy-873808 (perms=drwx------)
	I0116 02:47:48.368638  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17965-468241/.minikube/machines
	I0116 02:47:48.368663  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) Setting executable bit set on /home/jenkins/minikube-integration/17965-468241/.minikube/machines (perms=drwxr-xr-x)
	I0116 02:47:48.368690  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) Setting executable bit set on /home/jenkins/minikube-integration/17965-468241/.minikube (perms=drwxr-xr-x)
	I0116 02:47:48.368704  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) Setting executable bit set on /home/jenkins/minikube-integration/17965-468241 (perms=drwxrwxr-x)
	I0116 02:47:48.368717  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17965-468241/.minikube
	I0116 02:47:48.368734  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17965-468241
	I0116 02:47:48.368750  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0116 02:47:48.368767  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) DBG | Checking permissions on dir: /home/jenkins
	I0116 02:47:48.368777  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) DBG | Checking permissions on dir: /home
	I0116 02:47:48.368805  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0116 02:47:48.368835  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) DBG | Skipping /home - not owner
	I0116 02:47:48.368846  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0116 02:47:48.368863  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) Creating domain...
	I0116 02:47:48.369854  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) define libvirt domain using xml: 
	I0116 02:47:48.369878  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) <domain type='kvm'>
	I0116 02:47:48.369887  484156 main.go:141] libmachine: (ingress-addon-legacy-873808)   <name>ingress-addon-legacy-873808</name>
	I0116 02:47:48.369893  484156 main.go:141] libmachine: (ingress-addon-legacy-873808)   <memory unit='MiB'>4096</memory>
	I0116 02:47:48.369900  484156 main.go:141] libmachine: (ingress-addon-legacy-873808)   <vcpu>2</vcpu>
	I0116 02:47:48.369905  484156 main.go:141] libmachine: (ingress-addon-legacy-873808)   <features>
	I0116 02:47:48.369912  484156 main.go:141] libmachine: (ingress-addon-legacy-873808)     <acpi/>
	I0116 02:47:48.369922  484156 main.go:141] libmachine: (ingress-addon-legacy-873808)     <apic/>
	I0116 02:47:48.369935  484156 main.go:141] libmachine: (ingress-addon-legacy-873808)     <pae/>
	I0116 02:47:48.369947  484156 main.go:141] libmachine: (ingress-addon-legacy-873808)     
	I0116 02:47:48.369960  484156 main.go:141] libmachine: (ingress-addon-legacy-873808)   </features>
	I0116 02:47:48.369969  484156 main.go:141] libmachine: (ingress-addon-legacy-873808)   <cpu mode='host-passthrough'>
	I0116 02:47:48.369977  484156 main.go:141] libmachine: (ingress-addon-legacy-873808)   
	I0116 02:47:48.369985  484156 main.go:141] libmachine: (ingress-addon-legacy-873808)   </cpu>
	I0116 02:47:48.369992  484156 main.go:141] libmachine: (ingress-addon-legacy-873808)   <os>
	I0116 02:47:48.370003  484156 main.go:141] libmachine: (ingress-addon-legacy-873808)     <type>hvm</type>
	I0116 02:47:48.370035  484156 main.go:141] libmachine: (ingress-addon-legacy-873808)     <boot dev='cdrom'/>
	I0116 02:47:48.370062  484156 main.go:141] libmachine: (ingress-addon-legacy-873808)     <boot dev='hd'/>
	I0116 02:47:48.370078  484156 main.go:141] libmachine: (ingress-addon-legacy-873808)     <bootmenu enable='no'/>
	I0116 02:47:48.370091  484156 main.go:141] libmachine: (ingress-addon-legacy-873808)   </os>
	I0116 02:47:48.370104  484156 main.go:141] libmachine: (ingress-addon-legacy-873808)   <devices>
	I0116 02:47:48.370118  484156 main.go:141] libmachine: (ingress-addon-legacy-873808)     <disk type='file' device='cdrom'>
	I0116 02:47:48.370137  484156 main.go:141] libmachine: (ingress-addon-legacy-873808)       <source file='/home/jenkins/minikube-integration/17965-468241/.minikube/machines/ingress-addon-legacy-873808/boot2docker.iso'/>
	I0116 02:47:48.370154  484156 main.go:141] libmachine: (ingress-addon-legacy-873808)       <target dev='hdc' bus='scsi'/>
	I0116 02:47:48.370167  484156 main.go:141] libmachine: (ingress-addon-legacy-873808)       <readonly/>
	I0116 02:47:48.370195  484156 main.go:141] libmachine: (ingress-addon-legacy-873808)     </disk>
	I0116 02:47:48.370218  484156 main.go:141] libmachine: (ingress-addon-legacy-873808)     <disk type='file' device='disk'>
	I0116 02:47:48.370230  484156 main.go:141] libmachine: (ingress-addon-legacy-873808)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0116 02:47:48.370272  484156 main.go:141] libmachine: (ingress-addon-legacy-873808)       <source file='/home/jenkins/minikube-integration/17965-468241/.minikube/machines/ingress-addon-legacy-873808/ingress-addon-legacy-873808.rawdisk'/>
	I0116 02:47:48.370305  484156 main.go:141] libmachine: (ingress-addon-legacy-873808)       <target dev='hda' bus='virtio'/>
	I0116 02:47:48.370321  484156 main.go:141] libmachine: (ingress-addon-legacy-873808)     </disk>
	I0116 02:47:48.370339  484156 main.go:141] libmachine: (ingress-addon-legacy-873808)     <interface type='network'>
	I0116 02:47:48.370358  484156 main.go:141] libmachine: (ingress-addon-legacy-873808)       <source network='mk-ingress-addon-legacy-873808'/>
	I0116 02:47:48.370371  484156 main.go:141] libmachine: (ingress-addon-legacy-873808)       <model type='virtio'/>
	I0116 02:47:48.370385  484156 main.go:141] libmachine: (ingress-addon-legacy-873808)     </interface>
	I0116 02:47:48.370397  484156 main.go:141] libmachine: (ingress-addon-legacy-873808)     <interface type='network'>
	I0116 02:47:48.370406  484156 main.go:141] libmachine: (ingress-addon-legacy-873808)       <source network='default'/>
	I0116 02:47:48.370423  484156 main.go:141] libmachine: (ingress-addon-legacy-873808)       <model type='virtio'/>
	I0116 02:47:48.370438  484156 main.go:141] libmachine: (ingress-addon-legacy-873808)     </interface>
	I0116 02:47:48.370452  484156 main.go:141] libmachine: (ingress-addon-legacy-873808)     <serial type='pty'>
	I0116 02:47:48.370465  484156 main.go:141] libmachine: (ingress-addon-legacy-873808)       <target port='0'/>
	I0116 02:47:48.370478  484156 main.go:141] libmachine: (ingress-addon-legacy-873808)     </serial>
	I0116 02:47:48.370492  484156 main.go:141] libmachine: (ingress-addon-legacy-873808)     <console type='pty'>
	I0116 02:47:48.370515  484156 main.go:141] libmachine: (ingress-addon-legacy-873808)       <target type='serial' port='0'/>
	I0116 02:47:48.370529  484156 main.go:141] libmachine: (ingress-addon-legacy-873808)     </console>
	I0116 02:47:48.370543  484156 main.go:141] libmachine: (ingress-addon-legacy-873808)     <rng model='virtio'>
	I0116 02:47:48.370558  484156 main.go:141] libmachine: (ingress-addon-legacy-873808)       <backend model='random'>/dev/random</backend>
	I0116 02:47:48.370570  484156 main.go:141] libmachine: (ingress-addon-legacy-873808)     </rng>
	I0116 02:47:48.370583  484156 main.go:141] libmachine: (ingress-addon-legacy-873808)     
	I0116 02:47:48.370595  484156 main.go:141] libmachine: (ingress-addon-legacy-873808)     
	I0116 02:47:48.370614  484156 main.go:141] libmachine: (ingress-addon-legacy-873808)   </devices>
	I0116 02:47:48.370631  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) </domain>
	I0116 02:47:48.370648  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) 
	I0116 02:47:48.375045  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) DBG | domain ingress-addon-legacy-873808 has defined MAC address 52:54:00:e0:1d:c0 in network default
	I0116 02:47:48.375575  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) DBG | domain ingress-addon-legacy-873808 has defined MAC address 52:54:00:f3:e8:86 in network mk-ingress-addon-legacy-873808
	I0116 02:47:48.375597  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) Ensuring networks are active...
	I0116 02:47:48.376282  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) Ensuring network default is active
	I0116 02:47:48.376506  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) Ensuring network mk-ingress-addon-legacy-873808 is active
	I0116 02:47:48.376961  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) Getting domain xml...
	I0116 02:47:48.377770  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) Creating domain...
	I0116 02:47:48.701789  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) Waiting to get IP...
	I0116 02:47:48.702455  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) DBG | domain ingress-addon-legacy-873808 has defined MAC address 52:54:00:f3:e8:86 in network mk-ingress-addon-legacy-873808
	I0116 02:47:48.702930  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) DBG | unable to find current IP address of domain ingress-addon-legacy-873808 in network mk-ingress-addon-legacy-873808
	I0116 02:47:48.702968  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) DBG | I0116 02:47:48.702898  484179 retry.go:31] will retry after 262.778077ms: waiting for machine to come up
	I0116 02:47:48.967659  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) DBG | domain ingress-addon-legacy-873808 has defined MAC address 52:54:00:f3:e8:86 in network mk-ingress-addon-legacy-873808
	I0116 02:47:48.968082  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) DBG | unable to find current IP address of domain ingress-addon-legacy-873808 in network mk-ingress-addon-legacy-873808
	I0116 02:47:48.968109  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) DBG | I0116 02:47:48.968019  484179 retry.go:31] will retry after 243.122154ms: waiting for machine to come up
	I0116 02:47:49.212612  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) DBG | domain ingress-addon-legacy-873808 has defined MAC address 52:54:00:f3:e8:86 in network mk-ingress-addon-legacy-873808
	I0116 02:47:49.213056  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) DBG | unable to find current IP address of domain ingress-addon-legacy-873808 in network mk-ingress-addon-legacy-873808
	I0116 02:47:49.213085  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) DBG | I0116 02:47:49.213005  484179 retry.go:31] will retry after 383.279795ms: waiting for machine to come up
	I0116 02:47:49.598184  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) DBG | domain ingress-addon-legacy-873808 has defined MAC address 52:54:00:f3:e8:86 in network mk-ingress-addon-legacy-873808
	I0116 02:47:49.598599  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) DBG | unable to find current IP address of domain ingress-addon-legacy-873808 in network mk-ingress-addon-legacy-873808
	I0116 02:47:49.598658  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) DBG | I0116 02:47:49.598562  484179 retry.go:31] will retry after 437.110966ms: waiting for machine to come up
	I0116 02:47:50.037195  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) DBG | domain ingress-addon-legacy-873808 has defined MAC address 52:54:00:f3:e8:86 in network mk-ingress-addon-legacy-873808
	I0116 02:47:50.037690  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) DBG | unable to find current IP address of domain ingress-addon-legacy-873808 in network mk-ingress-addon-legacy-873808
	I0116 02:47:50.037717  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) DBG | I0116 02:47:50.037637  484179 retry.go:31] will retry after 479.162126ms: waiting for machine to come up
	I0116 02:47:50.518259  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) DBG | domain ingress-addon-legacy-873808 has defined MAC address 52:54:00:f3:e8:86 in network mk-ingress-addon-legacy-873808
	I0116 02:47:50.518799  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) DBG | unable to find current IP address of domain ingress-addon-legacy-873808 in network mk-ingress-addon-legacy-873808
	I0116 02:47:50.518830  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) DBG | I0116 02:47:50.518752  484179 retry.go:31] will retry after 907.340097ms: waiting for machine to come up
	I0116 02:47:51.427565  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) DBG | domain ingress-addon-legacy-873808 has defined MAC address 52:54:00:f3:e8:86 in network mk-ingress-addon-legacy-873808
	I0116 02:47:51.428026  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) DBG | unable to find current IP address of domain ingress-addon-legacy-873808 in network mk-ingress-addon-legacy-873808
	I0116 02:47:51.428071  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) DBG | I0116 02:47:51.427987  484179 retry.go:31] will retry after 767.746311ms: waiting for machine to come up
	I0116 02:47:52.197023  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) DBG | domain ingress-addon-legacy-873808 has defined MAC address 52:54:00:f3:e8:86 in network mk-ingress-addon-legacy-873808
	I0116 02:47:52.197419  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) DBG | unable to find current IP address of domain ingress-addon-legacy-873808 in network mk-ingress-addon-legacy-873808
	I0116 02:47:52.197451  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) DBG | I0116 02:47:52.197357  484179 retry.go:31] will retry after 1.112666185s: waiting for machine to come up
	I0116 02:47:53.311209  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) DBG | domain ingress-addon-legacy-873808 has defined MAC address 52:54:00:f3:e8:86 in network mk-ingress-addon-legacy-873808
	I0116 02:47:53.311691  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) DBG | unable to find current IP address of domain ingress-addon-legacy-873808 in network mk-ingress-addon-legacy-873808
	I0116 02:47:53.311722  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) DBG | I0116 02:47:53.311647  484179 retry.go:31] will retry after 1.467672763s: waiting for machine to come up
	I0116 02:47:54.781351  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) DBG | domain ingress-addon-legacy-873808 has defined MAC address 52:54:00:f3:e8:86 in network mk-ingress-addon-legacy-873808
	I0116 02:47:54.781730  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) DBG | unable to find current IP address of domain ingress-addon-legacy-873808 in network mk-ingress-addon-legacy-873808
	I0116 02:47:54.781769  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) DBG | I0116 02:47:54.781680  484179 retry.go:31] will retry after 2.256160288s: waiting for machine to come up
	I0116 02:47:57.041101  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) DBG | domain ingress-addon-legacy-873808 has defined MAC address 52:54:00:f3:e8:86 in network mk-ingress-addon-legacy-873808
	I0116 02:47:57.041529  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) DBG | unable to find current IP address of domain ingress-addon-legacy-873808 in network mk-ingress-addon-legacy-873808
	I0116 02:47:57.041563  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) DBG | I0116 02:47:57.041474  484179 retry.go:31] will retry after 2.289434317s: waiting for machine to come up
	I0116 02:47:59.333263  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) DBG | domain ingress-addon-legacy-873808 has defined MAC address 52:54:00:f3:e8:86 in network mk-ingress-addon-legacy-873808
	I0116 02:47:59.333641  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) DBG | unable to find current IP address of domain ingress-addon-legacy-873808 in network mk-ingress-addon-legacy-873808
	I0116 02:47:59.333671  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) DBG | I0116 02:47:59.333589  484179 retry.go:31] will retry after 3.197513404s: waiting for machine to come up
	I0116 02:48:02.535034  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) DBG | domain ingress-addon-legacy-873808 has defined MAC address 52:54:00:f3:e8:86 in network mk-ingress-addon-legacy-873808
	I0116 02:48:02.535428  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) DBG | unable to find current IP address of domain ingress-addon-legacy-873808 in network mk-ingress-addon-legacy-873808
	I0116 02:48:02.535453  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) DBG | I0116 02:48:02.535375  484179 retry.go:31] will retry after 4.211911743s: waiting for machine to come up
	I0116 02:48:06.752196  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) DBG | domain ingress-addon-legacy-873808 has defined MAC address 52:54:00:f3:e8:86 in network mk-ingress-addon-legacy-873808
	I0116 02:48:06.752621  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) DBG | unable to find current IP address of domain ingress-addon-legacy-873808 in network mk-ingress-addon-legacy-873808
	I0116 02:48:06.752649  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) DBG | I0116 02:48:06.752569  484179 retry.go:31] will retry after 4.758557414s: waiting for machine to come up
	I0116 02:48:11.515876  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) DBG | domain ingress-addon-legacy-873808 has defined MAC address 52:54:00:f3:e8:86 in network mk-ingress-addon-legacy-873808
	I0116 02:48:11.516330  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) Found IP for machine: 192.168.39.242
	I0116 02:48:11.516363  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) Reserving static IP address...
	I0116 02:48:11.516380  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) DBG | domain ingress-addon-legacy-873808 has current primary IP address 192.168.39.242 and MAC address 52:54:00:f3:e8:86 in network mk-ingress-addon-legacy-873808
	I0116 02:48:11.516749  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) DBG | unable to find host DHCP lease matching {name: "ingress-addon-legacy-873808", mac: "52:54:00:f3:e8:86", ip: "192.168.39.242"} in network mk-ingress-addon-legacy-873808
	I0116 02:48:11.599687  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) DBG | Getting to WaitForSSH function...
	I0116 02:48:11.599727  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) Reserved static IP address: 192.168.39.242
	I0116 02:48:11.599740  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) Waiting for SSH to be available...
	I0116 02:48:11.602521  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) DBG | domain ingress-addon-legacy-873808 has defined MAC address 52:54:00:f3:e8:86 in network mk-ingress-addon-legacy-873808
	I0116 02:48:11.602985  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:e8:86", ip: ""} in network mk-ingress-addon-legacy-873808: {Iface:virbr1 ExpiryTime:2024-01-16 03:48:03 +0000 UTC Type:0 Mac:52:54:00:f3:e8:86 Iaid: IPaddr:192.168.39.242 Prefix:24 Hostname:minikube Clientid:01:52:54:00:f3:e8:86}
	I0116 02:48:11.603046  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) DBG | domain ingress-addon-legacy-873808 has defined IP address 192.168.39.242 and MAC address 52:54:00:f3:e8:86 in network mk-ingress-addon-legacy-873808
	I0116 02:48:11.603073  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) DBG | Using SSH client type: external
	I0116 02:48:11.603093  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) DBG | Using SSH private key: /home/jenkins/minikube-integration/17965-468241/.minikube/machines/ingress-addon-legacy-873808/id_rsa (-rw-------)
	I0116 02:48:11.603127  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.242 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17965-468241/.minikube/machines/ingress-addon-legacy-873808/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0116 02:48:11.603162  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) DBG | About to run SSH command:
	I0116 02:48:11.603231  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) DBG | exit 0
	I0116 02:48:11.696769  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) DBG | SSH cmd err, output: <nil>: 
	I0116 02:48:11.697151  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) KVM machine creation complete!
	I0116 02:48:11.697481  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) Calling .GetConfigRaw
	I0116 02:48:11.698119  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) Calling .DriverName
	I0116 02:48:11.698372  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) Calling .DriverName
	I0116 02:48:11.698563  484156 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0116 02:48:11.698585  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) Calling .GetState
	I0116 02:48:11.700263  484156 main.go:141] libmachine: Detecting operating system of created instance...
	I0116 02:48:11.700283  484156 main.go:141] libmachine: Waiting for SSH to be available...
	I0116 02:48:11.700293  484156 main.go:141] libmachine: Getting to WaitForSSH function...
	I0116 02:48:11.700304  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) Calling .GetSSHHostname
	I0116 02:48:11.703286  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) DBG | domain ingress-addon-legacy-873808 has defined MAC address 52:54:00:f3:e8:86 in network mk-ingress-addon-legacy-873808
	I0116 02:48:11.703666  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:e8:86", ip: ""} in network mk-ingress-addon-legacy-873808: {Iface:virbr1 ExpiryTime:2024-01-16 03:48:03 +0000 UTC Type:0 Mac:52:54:00:f3:e8:86 Iaid: IPaddr:192.168.39.242 Prefix:24 Hostname:ingress-addon-legacy-873808 Clientid:01:52:54:00:f3:e8:86}
	I0116 02:48:11.703699  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) DBG | domain ingress-addon-legacy-873808 has defined IP address 192.168.39.242 and MAC address 52:54:00:f3:e8:86 in network mk-ingress-addon-legacy-873808
	I0116 02:48:11.703822  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) Calling .GetSSHPort
	I0116 02:48:11.704055  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) Calling .GetSSHKeyPath
	I0116 02:48:11.704215  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) Calling .GetSSHKeyPath
	I0116 02:48:11.704387  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) Calling .GetSSHUsername
	I0116 02:48:11.704567  484156 main.go:141] libmachine: Using SSH client type: native
	I0116 02:48:11.704953  484156 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.242 22 <nil> <nil>}
	I0116 02:48:11.704975  484156 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0116 02:48:11.831606  484156 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0116 02:48:11.831639  484156 main.go:141] libmachine: Detecting the provisioner...
	I0116 02:48:11.831653  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) Calling .GetSSHHostname
	I0116 02:48:11.834609  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) DBG | domain ingress-addon-legacy-873808 has defined MAC address 52:54:00:f3:e8:86 in network mk-ingress-addon-legacy-873808
	I0116 02:48:11.835115  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:e8:86", ip: ""} in network mk-ingress-addon-legacy-873808: {Iface:virbr1 ExpiryTime:2024-01-16 03:48:03 +0000 UTC Type:0 Mac:52:54:00:f3:e8:86 Iaid: IPaddr:192.168.39.242 Prefix:24 Hostname:ingress-addon-legacy-873808 Clientid:01:52:54:00:f3:e8:86}
	I0116 02:48:11.835151  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) DBG | domain ingress-addon-legacy-873808 has defined IP address 192.168.39.242 and MAC address 52:54:00:f3:e8:86 in network mk-ingress-addon-legacy-873808
	I0116 02:48:11.835302  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) Calling .GetSSHPort
	I0116 02:48:11.835512  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) Calling .GetSSHKeyPath
	I0116 02:48:11.835815  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) Calling .GetSSHKeyPath
	I0116 02:48:11.836030  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) Calling .GetSSHUsername
	I0116 02:48:11.836253  484156 main.go:141] libmachine: Using SSH client type: native
	I0116 02:48:11.836565  484156 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.242 22 <nil> <nil>}
	I0116 02:48:11.836577  484156 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0116 02:48:11.961133  484156 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g19d536a-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I0116 02:48:11.961207  484156 main.go:141] libmachine: found compatible host: buildroot
	I0116 02:48:11.961215  484156 main.go:141] libmachine: Provisioning with buildroot...
	I0116 02:48:11.961224  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) Calling .GetMachineName
	I0116 02:48:11.961548  484156 buildroot.go:166] provisioning hostname "ingress-addon-legacy-873808"
	I0116 02:48:11.961606  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) Calling .GetMachineName
	I0116 02:48:11.961805  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) Calling .GetSSHHostname
	I0116 02:48:11.964530  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) DBG | domain ingress-addon-legacy-873808 has defined MAC address 52:54:00:f3:e8:86 in network mk-ingress-addon-legacy-873808
	I0116 02:48:11.964939  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:e8:86", ip: ""} in network mk-ingress-addon-legacy-873808: {Iface:virbr1 ExpiryTime:2024-01-16 03:48:03 +0000 UTC Type:0 Mac:52:54:00:f3:e8:86 Iaid: IPaddr:192.168.39.242 Prefix:24 Hostname:ingress-addon-legacy-873808 Clientid:01:52:54:00:f3:e8:86}
	I0116 02:48:11.964974  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) DBG | domain ingress-addon-legacy-873808 has defined IP address 192.168.39.242 and MAC address 52:54:00:f3:e8:86 in network mk-ingress-addon-legacy-873808
	I0116 02:48:11.965179  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) Calling .GetSSHPort
	I0116 02:48:11.965414  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) Calling .GetSSHKeyPath
	I0116 02:48:11.965601  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) Calling .GetSSHKeyPath
	I0116 02:48:11.965750  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) Calling .GetSSHUsername
	I0116 02:48:11.965978  484156 main.go:141] libmachine: Using SSH client type: native
	I0116 02:48:11.966304  484156 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.242 22 <nil> <nil>}
	I0116 02:48:11.966321  484156 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-873808 && echo "ingress-addon-legacy-873808" | sudo tee /etc/hostname
	I0116 02:48:12.107231  484156 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-873808
	
	I0116 02:48:12.107266  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) Calling .GetSSHHostname
	I0116 02:48:12.110278  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) DBG | domain ingress-addon-legacy-873808 has defined MAC address 52:54:00:f3:e8:86 in network mk-ingress-addon-legacy-873808
	I0116 02:48:12.110682  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:e8:86", ip: ""} in network mk-ingress-addon-legacy-873808: {Iface:virbr1 ExpiryTime:2024-01-16 03:48:03 +0000 UTC Type:0 Mac:52:54:00:f3:e8:86 Iaid: IPaddr:192.168.39.242 Prefix:24 Hostname:ingress-addon-legacy-873808 Clientid:01:52:54:00:f3:e8:86}
	I0116 02:48:12.110725  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) DBG | domain ingress-addon-legacy-873808 has defined IP address 192.168.39.242 and MAC address 52:54:00:f3:e8:86 in network mk-ingress-addon-legacy-873808
	I0116 02:48:12.110936  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) Calling .GetSSHPort
	I0116 02:48:12.111162  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) Calling .GetSSHKeyPath
	I0116 02:48:12.111384  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) Calling .GetSSHKeyPath
	I0116 02:48:12.111559  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) Calling .GetSSHUsername
	I0116 02:48:12.111765  484156 main.go:141] libmachine: Using SSH client type: native
	I0116 02:48:12.112125  484156 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.242 22 <nil> <nil>}
	I0116 02:48:12.112145  484156 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-873808' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-873808/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-873808' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0116 02:48:12.245257  484156 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0116 02:48:12.245294  484156 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17965-468241/.minikube CaCertPath:/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17965-468241/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17965-468241/.minikube}
	I0116 02:48:12.245361  484156 buildroot.go:174] setting up certificates
	I0116 02:48:12.245373  484156 provision.go:83] configureAuth start
	I0116 02:48:12.245418  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) Calling .GetMachineName
	I0116 02:48:12.245765  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) Calling .GetIP
	I0116 02:48:12.248902  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) DBG | domain ingress-addon-legacy-873808 has defined MAC address 52:54:00:f3:e8:86 in network mk-ingress-addon-legacy-873808
	I0116 02:48:12.249244  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:e8:86", ip: ""} in network mk-ingress-addon-legacy-873808: {Iface:virbr1 ExpiryTime:2024-01-16 03:48:03 +0000 UTC Type:0 Mac:52:54:00:f3:e8:86 Iaid: IPaddr:192.168.39.242 Prefix:24 Hostname:ingress-addon-legacy-873808 Clientid:01:52:54:00:f3:e8:86}
	I0116 02:48:12.249267  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) DBG | domain ingress-addon-legacy-873808 has defined IP address 192.168.39.242 and MAC address 52:54:00:f3:e8:86 in network mk-ingress-addon-legacy-873808
	I0116 02:48:12.249444  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) Calling .GetSSHHostname
	I0116 02:48:12.251867  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) DBG | domain ingress-addon-legacy-873808 has defined MAC address 52:54:00:f3:e8:86 in network mk-ingress-addon-legacy-873808
	I0116 02:48:12.252173  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:e8:86", ip: ""} in network mk-ingress-addon-legacy-873808: {Iface:virbr1 ExpiryTime:2024-01-16 03:48:03 +0000 UTC Type:0 Mac:52:54:00:f3:e8:86 Iaid: IPaddr:192.168.39.242 Prefix:24 Hostname:ingress-addon-legacy-873808 Clientid:01:52:54:00:f3:e8:86}
	I0116 02:48:12.252202  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) DBG | domain ingress-addon-legacy-873808 has defined IP address 192.168.39.242 and MAC address 52:54:00:f3:e8:86 in network mk-ingress-addon-legacy-873808
	I0116 02:48:12.252370  484156 provision.go:138] copyHostCerts
	I0116 02:48:12.252408  484156 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17965-468241/.minikube/cert.pem
	I0116 02:48:12.252453  484156 exec_runner.go:144] found /home/jenkins/minikube-integration/17965-468241/.minikube/cert.pem, removing ...
	I0116 02:48:12.252465  484156 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17965-468241/.minikube/cert.pem
	I0116 02:48:12.252552  484156 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17965-468241/.minikube/cert.pem (1123 bytes)
	I0116 02:48:12.252655  484156 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17965-468241/.minikube/key.pem
	I0116 02:48:12.252684  484156 exec_runner.go:144] found /home/jenkins/minikube-integration/17965-468241/.minikube/key.pem, removing ...
	I0116 02:48:12.252695  484156 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17965-468241/.minikube/key.pem
	I0116 02:48:12.252739  484156 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17965-468241/.minikube/key.pem (1679 bytes)
	I0116 02:48:12.252799  484156 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17965-468241/.minikube/ca.pem
	I0116 02:48:12.252821  484156 exec_runner.go:144] found /home/jenkins/minikube-integration/17965-468241/.minikube/ca.pem, removing ...
	I0116 02:48:12.252830  484156 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17965-468241/.minikube/ca.pem
	I0116 02:48:12.252860  484156 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17965-468241/.minikube/ca.pem (1078 bytes)
	I0116 02:48:12.252977  484156 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17965-468241/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-873808 san=[192.168.39.242 192.168.39.242 localhost 127.0.0.1 minikube ingress-addon-legacy-873808]
	I0116 02:48:12.526359  484156 provision.go:172] copyRemoteCerts
	I0116 02:48:12.526428  484156 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0116 02:48:12.526457  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) Calling .GetSSHHostname
	I0116 02:48:12.529274  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) DBG | domain ingress-addon-legacy-873808 has defined MAC address 52:54:00:f3:e8:86 in network mk-ingress-addon-legacy-873808
	I0116 02:48:12.529545  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:e8:86", ip: ""} in network mk-ingress-addon-legacy-873808: {Iface:virbr1 ExpiryTime:2024-01-16 03:48:03 +0000 UTC Type:0 Mac:52:54:00:f3:e8:86 Iaid: IPaddr:192.168.39.242 Prefix:24 Hostname:ingress-addon-legacy-873808 Clientid:01:52:54:00:f3:e8:86}
	I0116 02:48:12.529571  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) DBG | domain ingress-addon-legacy-873808 has defined IP address 192.168.39.242 and MAC address 52:54:00:f3:e8:86 in network mk-ingress-addon-legacy-873808
	I0116 02:48:12.529702  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) Calling .GetSSHPort
	I0116 02:48:12.529885  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) Calling .GetSSHKeyPath
	I0116 02:48:12.530075  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) Calling .GetSSHUsername
	I0116 02:48:12.530181  484156 sshutil.go:53] new ssh client: &{IP:192.168.39.242 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/ingress-addon-legacy-873808/id_rsa Username:docker}
	I0116 02:48:12.622276  484156 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0116 02:48:12.622363  484156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0116 02:48:12.646700  484156 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-468241/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0116 02:48:12.646783  484156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
	I0116 02:48:12.669670  484156 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-468241/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0116 02:48:12.669764  484156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0116 02:48:12.692754  484156 provision.go:86] duration metric: configureAuth took 447.363779ms
	I0116 02:48:12.692787  484156 buildroot.go:189] setting minikube options for container-runtime
	I0116 02:48:12.693012  484156 config.go:182] Loaded profile config "ingress-addon-legacy-873808": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I0116 02:48:12.693144  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) Calling .GetSSHHostname
	I0116 02:48:12.695740  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) DBG | domain ingress-addon-legacy-873808 has defined MAC address 52:54:00:f3:e8:86 in network mk-ingress-addon-legacy-873808
	I0116 02:48:12.696119  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:e8:86", ip: ""} in network mk-ingress-addon-legacy-873808: {Iface:virbr1 ExpiryTime:2024-01-16 03:48:03 +0000 UTC Type:0 Mac:52:54:00:f3:e8:86 Iaid: IPaddr:192.168.39.242 Prefix:24 Hostname:ingress-addon-legacy-873808 Clientid:01:52:54:00:f3:e8:86}
	I0116 02:48:12.696162  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) DBG | domain ingress-addon-legacy-873808 has defined IP address 192.168.39.242 and MAC address 52:54:00:f3:e8:86 in network mk-ingress-addon-legacy-873808
	I0116 02:48:12.696307  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) Calling .GetSSHPort
	I0116 02:48:12.696538  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) Calling .GetSSHKeyPath
	I0116 02:48:12.696745  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) Calling .GetSSHKeyPath
	I0116 02:48:12.696882  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) Calling .GetSSHUsername
	I0116 02:48:12.697066  484156 main.go:141] libmachine: Using SSH client type: native
	I0116 02:48:12.697390  484156 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.242 22 <nil> <nil>}
	I0116 02:48:12.697406  484156 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0116 02:48:13.031512  484156 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0116 02:48:13.031540  484156 main.go:141] libmachine: Checking connection to Docker...
	I0116 02:48:13.031549  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) Calling .GetURL
	I0116 02:48:13.032946  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) DBG | Using libvirt version 6000000
	I0116 02:48:13.035181  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) DBG | domain ingress-addon-legacy-873808 has defined MAC address 52:54:00:f3:e8:86 in network mk-ingress-addon-legacy-873808
	I0116 02:48:13.035586  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:e8:86", ip: ""} in network mk-ingress-addon-legacy-873808: {Iface:virbr1 ExpiryTime:2024-01-16 03:48:03 +0000 UTC Type:0 Mac:52:54:00:f3:e8:86 Iaid: IPaddr:192.168.39.242 Prefix:24 Hostname:ingress-addon-legacy-873808 Clientid:01:52:54:00:f3:e8:86}
	I0116 02:48:13.035613  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) DBG | domain ingress-addon-legacy-873808 has defined IP address 192.168.39.242 and MAC address 52:54:00:f3:e8:86 in network mk-ingress-addon-legacy-873808
	I0116 02:48:13.035771  484156 main.go:141] libmachine: Docker is up and running!
	I0116 02:48:13.035798  484156 main.go:141] libmachine: Reticulating splines...
	I0116 02:48:13.035808  484156 client.go:171] LocalClient.Create took 25.249559378s
	I0116 02:48:13.035843  484156 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-873808" took 25.249646707s
	I0116 02:48:13.035856  484156 start.go:300] post-start starting for "ingress-addon-legacy-873808" (driver="kvm2")
	I0116 02:48:13.035874  484156 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0116 02:48:13.035902  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) Calling .DriverName
	I0116 02:48:13.036191  484156 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0116 02:48:13.036219  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) Calling .GetSSHHostname
	I0116 02:48:13.038340  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) DBG | domain ingress-addon-legacy-873808 has defined MAC address 52:54:00:f3:e8:86 in network mk-ingress-addon-legacy-873808
	I0116 02:48:13.038684  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:e8:86", ip: ""} in network mk-ingress-addon-legacy-873808: {Iface:virbr1 ExpiryTime:2024-01-16 03:48:03 +0000 UTC Type:0 Mac:52:54:00:f3:e8:86 Iaid: IPaddr:192.168.39.242 Prefix:24 Hostname:ingress-addon-legacy-873808 Clientid:01:52:54:00:f3:e8:86}
	I0116 02:48:13.038719  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) DBG | domain ingress-addon-legacy-873808 has defined IP address 192.168.39.242 and MAC address 52:54:00:f3:e8:86 in network mk-ingress-addon-legacy-873808
	I0116 02:48:13.038854  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) Calling .GetSSHPort
	I0116 02:48:13.039076  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) Calling .GetSSHKeyPath
	I0116 02:48:13.039238  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) Calling .GetSSHUsername
	I0116 02:48:13.039385  484156 sshutil.go:53] new ssh client: &{IP:192.168.39.242 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/ingress-addon-legacy-873808/id_rsa Username:docker}
	I0116 02:48:13.129628  484156 ssh_runner.go:195] Run: cat /etc/os-release
	I0116 02:48:13.134305  484156 info.go:137] Remote host: Buildroot 2021.02.12
	I0116 02:48:13.134333  484156 filesync.go:126] Scanning /home/jenkins/minikube-integration/17965-468241/.minikube/addons for local assets ...
	I0116 02:48:13.134448  484156 filesync.go:126] Scanning /home/jenkins/minikube-integration/17965-468241/.minikube/files for local assets ...
	I0116 02:48:13.134546  484156 filesync.go:149] local asset: /home/jenkins/minikube-integration/17965-468241/.minikube/files/etc/ssl/certs/4754782.pem -> 4754782.pem in /etc/ssl/certs
	I0116 02:48:13.134558  484156 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-468241/.minikube/files/etc/ssl/certs/4754782.pem -> /etc/ssl/certs/4754782.pem
	I0116 02:48:13.134657  484156 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0116 02:48:13.143541  484156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/files/etc/ssl/certs/4754782.pem --> /etc/ssl/certs/4754782.pem (1708 bytes)
	I0116 02:48:13.167536  484156 start.go:303] post-start completed in 131.660395ms
	I0116 02:48:13.167602  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) Calling .GetConfigRaw
	I0116 02:48:13.168288  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) Calling .GetIP
	I0116 02:48:13.170747  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) DBG | domain ingress-addon-legacy-873808 has defined MAC address 52:54:00:f3:e8:86 in network mk-ingress-addon-legacy-873808
	I0116 02:48:13.171132  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:e8:86", ip: ""} in network mk-ingress-addon-legacy-873808: {Iface:virbr1 ExpiryTime:2024-01-16 03:48:03 +0000 UTC Type:0 Mac:52:54:00:f3:e8:86 Iaid: IPaddr:192.168.39.242 Prefix:24 Hostname:ingress-addon-legacy-873808 Clientid:01:52:54:00:f3:e8:86}
	I0116 02:48:13.171175  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) DBG | domain ingress-addon-legacy-873808 has defined IP address 192.168.39.242 and MAC address 52:54:00:f3:e8:86 in network mk-ingress-addon-legacy-873808
	I0116 02:48:13.171358  484156 profile.go:148] Saving config to /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/ingress-addon-legacy-873808/config.json ...
	I0116 02:48:13.171538  484156 start.go:128] duration metric: createHost completed in 25.404737175s
	I0116 02:48:13.171562  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) Calling .GetSSHHostname
	I0116 02:48:13.173837  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) DBG | domain ingress-addon-legacy-873808 has defined MAC address 52:54:00:f3:e8:86 in network mk-ingress-addon-legacy-873808
	I0116 02:48:13.174195  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:e8:86", ip: ""} in network mk-ingress-addon-legacy-873808: {Iface:virbr1 ExpiryTime:2024-01-16 03:48:03 +0000 UTC Type:0 Mac:52:54:00:f3:e8:86 Iaid: IPaddr:192.168.39.242 Prefix:24 Hostname:ingress-addon-legacy-873808 Clientid:01:52:54:00:f3:e8:86}
	I0116 02:48:13.174227  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) DBG | domain ingress-addon-legacy-873808 has defined IP address 192.168.39.242 and MAC address 52:54:00:f3:e8:86 in network mk-ingress-addon-legacy-873808
	I0116 02:48:13.174375  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) Calling .GetSSHPort
	I0116 02:48:13.174592  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) Calling .GetSSHKeyPath
	I0116 02:48:13.174740  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) Calling .GetSSHKeyPath
	I0116 02:48:13.174889  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) Calling .GetSSHUsername
	I0116 02:48:13.175074  484156 main.go:141] libmachine: Using SSH client type: native
	I0116 02:48:13.175396  484156 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.242 22 <nil> <nil>}
	I0116 02:48:13.175407  484156 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0116 02:48:13.300952  484156 main.go:141] libmachine: SSH cmd err, output: <nil>: 1705373293.283889752
	
	I0116 02:48:13.300981  484156 fix.go:206] guest clock: 1705373293.283889752
	I0116 02:48:13.300992  484156 fix.go:219] Guest: 2024-01-16 02:48:13.283889752 +0000 UTC Remote: 2024-01-16 02:48:13.171550345 +0000 UTC m=+30.043997433 (delta=112.339407ms)
	I0116 02:48:13.301012  484156 fix.go:190] guest clock delta is within tolerance: 112.339407ms
	I0116 02:48:13.301017  484156 start.go:83] releasing machines lock for "ingress-addon-legacy-873808", held for 25.534325694s
	I0116 02:48:13.301038  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) Calling .DriverName
	I0116 02:48:13.301366  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) Calling .GetIP
	I0116 02:48:13.304128  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) DBG | domain ingress-addon-legacy-873808 has defined MAC address 52:54:00:f3:e8:86 in network mk-ingress-addon-legacy-873808
	I0116 02:48:13.304473  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:e8:86", ip: ""} in network mk-ingress-addon-legacy-873808: {Iface:virbr1 ExpiryTime:2024-01-16 03:48:03 +0000 UTC Type:0 Mac:52:54:00:f3:e8:86 Iaid: IPaddr:192.168.39.242 Prefix:24 Hostname:ingress-addon-legacy-873808 Clientid:01:52:54:00:f3:e8:86}
	I0116 02:48:13.304499  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) DBG | domain ingress-addon-legacy-873808 has defined IP address 192.168.39.242 and MAC address 52:54:00:f3:e8:86 in network mk-ingress-addon-legacy-873808
	I0116 02:48:13.304710  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) Calling .DriverName
	I0116 02:48:13.305249  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) Calling .DriverName
	I0116 02:48:13.305414  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) Calling .DriverName
	I0116 02:48:13.305496  484156 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0116 02:48:13.305546  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) Calling .GetSSHHostname
	I0116 02:48:13.305611  484156 ssh_runner.go:195] Run: cat /version.json
	I0116 02:48:13.305645  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) Calling .GetSSHHostname
	I0116 02:48:13.308171  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) DBG | domain ingress-addon-legacy-873808 has defined MAC address 52:54:00:f3:e8:86 in network mk-ingress-addon-legacy-873808
	I0116 02:48:13.308406  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) DBG | domain ingress-addon-legacy-873808 has defined MAC address 52:54:00:f3:e8:86 in network mk-ingress-addon-legacy-873808
	I0116 02:48:13.308555  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:e8:86", ip: ""} in network mk-ingress-addon-legacy-873808: {Iface:virbr1 ExpiryTime:2024-01-16 03:48:03 +0000 UTC Type:0 Mac:52:54:00:f3:e8:86 Iaid: IPaddr:192.168.39.242 Prefix:24 Hostname:ingress-addon-legacy-873808 Clientid:01:52:54:00:f3:e8:86}
	I0116 02:48:13.308585  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) DBG | domain ingress-addon-legacy-873808 has defined IP address 192.168.39.242 and MAC address 52:54:00:f3:e8:86 in network mk-ingress-addon-legacy-873808
	I0116 02:48:13.308712  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) Calling .GetSSHPort
	I0116 02:48:13.308803  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:e8:86", ip: ""} in network mk-ingress-addon-legacy-873808: {Iface:virbr1 ExpiryTime:2024-01-16 03:48:03 +0000 UTC Type:0 Mac:52:54:00:f3:e8:86 Iaid: IPaddr:192.168.39.242 Prefix:24 Hostname:ingress-addon-legacy-873808 Clientid:01:52:54:00:f3:e8:86}
	I0116 02:48:13.308836  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) DBG | domain ingress-addon-legacy-873808 has defined IP address 192.168.39.242 and MAC address 52:54:00:f3:e8:86 in network mk-ingress-addon-legacy-873808
	I0116 02:48:13.308910  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) Calling .GetSSHKeyPath
	I0116 02:48:13.309005  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) Calling .GetSSHPort
	I0116 02:48:13.309079  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) Calling .GetSSHUsername
	I0116 02:48:13.309150  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) Calling .GetSSHKeyPath
	I0116 02:48:13.309224  484156 sshutil.go:53] new ssh client: &{IP:192.168.39.242 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/ingress-addon-legacy-873808/id_rsa Username:docker}
	I0116 02:48:13.309272  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) Calling .GetSSHUsername
	I0116 02:48:13.309392  484156 sshutil.go:53] new ssh client: &{IP:192.168.39.242 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/ingress-addon-legacy-873808/id_rsa Username:docker}
	I0116 02:48:13.396987  484156 ssh_runner.go:195] Run: systemctl --version
	I0116 02:48:13.420917  484156 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0116 02:48:13.578847  484156 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0116 02:48:13.586061  484156 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0116 02:48:13.586157  484156 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0116 02:48:13.602104  484156 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0116 02:48:13.602142  484156 start.go:475] detecting cgroup driver to use...
	I0116 02:48:13.602226  484156 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0116 02:48:13.615372  484156 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0116 02:48:13.627668  484156 docker.go:217] disabling cri-docker service (if available) ...
	I0116 02:48:13.627742  484156 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0116 02:48:13.640115  484156 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0116 02:48:13.652837  484156 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0116 02:48:13.767298  484156 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0116 02:48:13.891720  484156 docker.go:233] disabling docker service ...
	I0116 02:48:13.891816  484156 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0116 02:48:13.906406  484156 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0116 02:48:13.919326  484156 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0116 02:48:14.029060  484156 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0116 02:48:14.139071  484156 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0116 02:48:14.152490  484156 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0116 02:48:14.169748  484156 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0116 02:48:14.169822  484156 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 02:48:14.180408  484156 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0116 02:48:14.180493  484156 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 02:48:14.191142  484156 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 02:48:14.201785  484156 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 02:48:14.212568  484156 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0116 02:48:14.223749  484156 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0116 02:48:14.233771  484156 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0116 02:48:14.233852  484156 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0116 02:48:14.248906  484156 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0116 02:48:14.258601  484156 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0116 02:48:14.358322  484156 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0116 02:48:14.528771  484156 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0116 02:48:14.528848  484156 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0116 02:48:14.534456  484156 start.go:543] Will wait 60s for crictl version
	I0116 02:48:14.534518  484156 ssh_runner.go:195] Run: which crictl
	I0116 02:48:14.539053  484156 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0116 02:48:14.589100  484156 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0116 02:48:14.589203  484156 ssh_runner.go:195] Run: crio --version
	I0116 02:48:14.646651  484156 ssh_runner.go:195] Run: crio --version
	I0116 02:48:14.704261  484156 out.go:177] * Preparing Kubernetes v1.18.20 on CRI-O 1.24.1 ...
	I0116 02:48:14.705795  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) Calling .GetIP
	I0116 02:48:14.708471  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) DBG | domain ingress-addon-legacy-873808 has defined MAC address 52:54:00:f3:e8:86 in network mk-ingress-addon-legacy-873808
	I0116 02:48:14.708862  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:e8:86", ip: ""} in network mk-ingress-addon-legacy-873808: {Iface:virbr1 ExpiryTime:2024-01-16 03:48:03 +0000 UTC Type:0 Mac:52:54:00:f3:e8:86 Iaid: IPaddr:192.168.39.242 Prefix:24 Hostname:ingress-addon-legacy-873808 Clientid:01:52:54:00:f3:e8:86}
	I0116 02:48:14.708902  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) DBG | domain ingress-addon-legacy-873808 has defined IP address 192.168.39.242 and MAC address 52:54:00:f3:e8:86 in network mk-ingress-addon-legacy-873808
	I0116 02:48:14.709070  484156 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0116 02:48:14.713252  484156 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 02:48:14.728612  484156 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0116 02:48:14.728667  484156 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 02:48:14.763018  484156 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I0116 02:48:14.763090  484156 ssh_runner.go:195] Run: which lz4
	I0116 02:48:14.767851  484156 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-468241/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0116 02:48:14.767979  484156 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0116 02:48:14.772254  484156 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0116 02:48:14.772283  484156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (495439307 bytes)
	I0116 02:48:16.688521  484156 crio.go:444] Took 1.920585 seconds to copy over tarball
	I0116 02:48:16.688640  484156 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0116 02:48:19.780053  484156 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.091360373s)
	I0116 02:48:19.780083  484156 crio.go:451] Took 3.091533 seconds to extract the tarball
	I0116 02:48:19.780094  484156 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0116 02:48:19.824427  484156 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 02:48:19.880101  484156 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I0116 02:48:19.880129  484156 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0116 02:48:19.880185  484156 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 02:48:19.880229  484156 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I0116 02:48:19.880263  484156 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I0116 02:48:19.880302  484156 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0116 02:48:19.880340  484156 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I0116 02:48:19.880371  484156 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0116 02:48:19.880401  484156 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I0116 02:48:19.880344  484156 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I0116 02:48:19.881581  484156 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0116 02:48:19.881594  484156 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0116 02:48:19.881591  484156 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I0116 02:48:19.881591  484156 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 02:48:19.881591  484156 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I0116 02:48:19.881671  484156 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I0116 02:48:19.881711  484156 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I0116 02:48:19.881849  484156 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I0116 02:48:20.042409  484156 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	I0116 02:48:20.046998  484156 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	I0116 02:48:20.050830  484156 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	I0116 02:48:20.068303  484156 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	I0116 02:48:20.078057  484156 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	I0116 02:48:20.083499  484156 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	I0116 02:48:20.102413  484156 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346" in container runtime
	I0116 02:48:20.102476  484156 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I0116 02:48:20.102540  484156 ssh_runner.go:195] Run: which crictl
	I0116 02:48:20.160080  484156 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5" in container runtime
	I0116 02:48:20.160143  484156 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.7
	I0116 02:48:20.160189  484156 ssh_runner.go:195] Run: which crictl
	I0116 02:48:20.162295  484156 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0116 02:48:20.169757  484156 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 02:48:20.211702  484156 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290" in container runtime
	I0116 02:48:20.211742  484156 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0116 02:48:20.211794  484156 ssh_runner.go:195] Run: which crictl
	I0116 02:48:20.229010  484156 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f" in container runtime
	I0116 02:48:20.229071  484156 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.3-0
	I0116 02:48:20.229133  484156 ssh_runner.go:195] Run: which crictl
	I0116 02:48:20.240792  484156 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba" in container runtime
	I0116 02:48:20.240827  484156 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1" in container runtime
	I0116 02:48:20.240845  484156 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I0116 02:48:20.240853  484156 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.18.20
	I0116 02:48:20.240861  484156 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I0116 02:48:20.240888  484156 ssh_runner.go:195] Run: which crictl
	I0116 02:48:20.240924  484156 ssh_runner.go:195] Run: which crictl
	I0116 02:48:20.240969  484156 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.7
	I0116 02:48:20.304947  484156 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0116 02:48:20.305012  484156 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0116 02:48:20.305074  484156 ssh_runner.go:195] Run: which crictl
	I0116 02:48:20.417128  484156 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I0116 02:48:20.417220  484156 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.3-0
	I0116 02:48:20.417253  484156 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.18.20
	I0116 02:48:20.417315  484156 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.18.20
	I0116 02:48:20.417337  484156 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17965-468241/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.20
	I0116 02:48:20.417385  484156 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17965-468241/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7
	I0116 02:48:20.417404  484156 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0116 02:48:20.499261  484156 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17965-468241/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.20
	I0116 02:48:20.511049  484156 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17965-468241/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0116 02:48:20.538257  484156 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17965-468241/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.20
	I0116 02:48:20.538341  484156 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17965-468241/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
	I0116 02:48:20.538382  484156 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17965-468241/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.20
	I0116 02:48:20.538427  484156 cache_images.go:92] LoadImages completed in 658.283885ms
	W0116 02:48:20.538515  484156 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17965-468241/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.20: no such file or directory
	I0116 02:48:20.538599  484156 ssh_runner.go:195] Run: crio config
	I0116 02:48:20.602580  484156 cni.go:84] Creating CNI manager for ""
	I0116 02:48:20.602613  484156 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 02:48:20.602636  484156 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0116 02:48:20.602661  484156 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.242 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-873808 NodeName:ingress-addon-legacy-873808 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.242"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.242 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cert
s/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0116 02:48:20.602838  484156 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.242
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "ingress-addon-legacy-873808"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.242
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.242"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0116 02:48:20.602920  484156 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=ingress-addon-legacy-873808 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.242
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-873808 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0116 02:48:20.602973  484156 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I0116 02:48:20.612325  484156 binaries.go:44] Found k8s binaries, skipping transfer
	I0116 02:48:20.612423  484156 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0116 02:48:20.621575  484156 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (436 bytes)
	I0116 02:48:20.638842  484156 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I0116 02:48:20.655507  484156 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2129 bytes)
	I0116 02:48:20.672398  484156 ssh_runner.go:195] Run: grep 192.168.39.242	control-plane.minikube.internal$ /etc/hosts
	I0116 02:48:20.676442  484156 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.242	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 02:48:20.689935  484156 certs.go:56] Setting up /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/ingress-addon-legacy-873808 for IP: 192.168.39.242
	I0116 02:48:20.689984  484156 certs.go:190] acquiring lock for shared ca certs: {Name:mkab5aeea3e0ed69f3592dc8f418e07c6075b130 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:48:20.690146  484156 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17965-468241/.minikube/ca.key
	I0116 02:48:20.690202  484156 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17965-468241/.minikube/proxy-client-ca.key
	I0116 02:48:20.690258  484156 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/ingress-addon-legacy-873808/client.key
	I0116 02:48:20.690276  484156 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/ingress-addon-legacy-873808/client.crt with IP's: []
	I0116 02:48:20.781115  484156 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/ingress-addon-legacy-873808/client.crt ...
	I0116 02:48:20.781150  484156 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/ingress-addon-legacy-873808/client.crt: {Name:mk934b51b565f7d89061f43a59dbddcc9ccfca20 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:48:20.781318  484156 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/ingress-addon-legacy-873808/client.key ...
	I0116 02:48:20.781333  484156 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/ingress-addon-legacy-873808/client.key: {Name:mk348dc3093611729301620cc19cd7a30ca0234c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:48:20.781402  484156 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/ingress-addon-legacy-873808/apiserver.key.d038fca4
	I0116 02:48:20.781422  484156 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/ingress-addon-legacy-873808/apiserver.crt.d038fca4 with IP's: [192.168.39.242 10.96.0.1 127.0.0.1 10.0.0.1]
	I0116 02:48:20.857575  484156 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/ingress-addon-legacy-873808/apiserver.crt.d038fca4 ...
	I0116 02:48:20.857612  484156 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/ingress-addon-legacy-873808/apiserver.crt.d038fca4: {Name:mk7a10a1c0f38e04c3e4e5008006667d11b95964 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:48:20.857788  484156 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/ingress-addon-legacy-873808/apiserver.key.d038fca4 ...
	I0116 02:48:20.857804  484156 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/ingress-addon-legacy-873808/apiserver.key.d038fca4: {Name:mkc3822e14e1bd6947b63581563a09197a798e55 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:48:20.857875  484156 certs.go:337] copying /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/ingress-addon-legacy-873808/apiserver.crt.d038fca4 -> /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/ingress-addon-legacy-873808/apiserver.crt
	I0116 02:48:20.857966  484156 certs.go:341] copying /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/ingress-addon-legacy-873808/apiserver.key.d038fca4 -> /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/ingress-addon-legacy-873808/apiserver.key
	I0116 02:48:20.858025  484156 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/ingress-addon-legacy-873808/proxy-client.key
	I0116 02:48:20.858044  484156 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/ingress-addon-legacy-873808/proxy-client.crt with IP's: []
	I0116 02:48:21.027728  484156 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/ingress-addon-legacy-873808/proxy-client.crt ...
	I0116 02:48:21.027765  484156 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/ingress-addon-legacy-873808/proxy-client.crt: {Name:mk253af912dfc129730298ea684d977e5daab88c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:48:21.027932  484156 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/ingress-addon-legacy-873808/proxy-client.key ...
	I0116 02:48:21.027947  484156 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/ingress-addon-legacy-873808/proxy-client.key: {Name:mk8d4581cc5f4272e3d7b4d81eed560ac651a003 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:48:21.028011  484156 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/ingress-addon-legacy-873808/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0116 02:48:21.028052  484156 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/ingress-addon-legacy-873808/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0116 02:48:21.028075  484156 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/ingress-addon-legacy-873808/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0116 02:48:21.028087  484156 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/ingress-addon-legacy-873808/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0116 02:48:21.028100  484156 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-468241/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0116 02:48:21.028113  484156 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-468241/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0116 02:48:21.028130  484156 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-468241/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0116 02:48:21.028147  484156 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-468241/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0116 02:48:21.028211  484156 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/475478.pem (1338 bytes)
	W0116 02:48:21.028257  484156 certs.go:433] ignoring /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/475478_empty.pem, impossibly tiny 0 bytes
	I0116 02:48:21.028291  484156 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca-key.pem (1679 bytes)
	I0116 02:48:21.028325  484156 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem (1078 bytes)
	I0116 02:48:21.028360  484156 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/cert.pem (1123 bytes)
	I0116 02:48:21.028394  484156 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/key.pem (1679 bytes)
	I0116 02:48:21.028448  484156 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17965-468241/.minikube/files/etc/ssl/certs/4754782.pem (1708 bytes)
	I0116 02:48:21.028493  484156 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-468241/.minikube/files/etc/ssl/certs/4754782.pem -> /usr/share/ca-certificates/4754782.pem
	I0116 02:48:21.028511  484156 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-468241/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0116 02:48:21.028528  484156 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/475478.pem -> /usr/share/ca-certificates/475478.pem
	I0116 02:48:21.029585  484156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/ingress-addon-legacy-873808/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0116 02:48:21.055070  484156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/ingress-addon-legacy-873808/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0116 02:48:21.079685  484156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/ingress-addon-legacy-873808/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0116 02:48:21.105059  484156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/ingress-addon-legacy-873808/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0116 02:48:21.130808  484156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0116 02:48:21.155312  484156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0116 02:48:21.179428  484156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0116 02:48:21.203600  484156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0116 02:48:21.229029  484156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/files/etc/ssl/certs/4754782.pem --> /usr/share/ca-certificates/4754782.pem (1708 bytes)
	I0116 02:48:21.254274  484156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0116 02:48:21.278582  484156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/certs/475478.pem --> /usr/share/ca-certificates/475478.pem (1338 bytes)
	I0116 02:48:21.303421  484156 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0116 02:48:21.320185  484156 ssh_runner.go:195] Run: openssl version
	I0116 02:48:21.326192  484156 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4754782.pem && ln -fs /usr/share/ca-certificates/4754782.pem /etc/ssl/certs/4754782.pem"
	I0116 02:48:21.336732  484156 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4754782.pem
	I0116 02:48:21.342126  484156 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 16 02:44 /usr/share/ca-certificates/4754782.pem
	I0116 02:48:21.342210  484156 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4754782.pem
	I0116 02:48:21.348070  484156 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4754782.pem /etc/ssl/certs/3ec20f2e.0"
	I0116 02:48:21.358684  484156 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0116 02:48:21.369103  484156 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0116 02:48:21.374327  484156 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 16 02:35 /usr/share/ca-certificates/minikubeCA.pem
	I0116 02:48:21.374396  484156 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0116 02:48:21.380521  484156 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0116 02:48:21.390678  484156 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/475478.pem && ln -fs /usr/share/ca-certificates/475478.pem /etc/ssl/certs/475478.pem"
	I0116 02:48:21.400760  484156 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/475478.pem
	I0116 02:48:21.405855  484156 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 16 02:44 /usr/share/ca-certificates/475478.pem
	I0116 02:48:21.405933  484156 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/475478.pem
	I0116 02:48:21.411747  484156 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/475478.pem /etc/ssl/certs/51391683.0"
	I0116 02:48:21.421877  484156 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0116 02:48:21.426264  484156 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0116 02:48:21.426332  484156 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-873808 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.18.20 ClusterName:ingress-addon-legacy-873808 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.242 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Mo
untMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 02:48:21.426426  484156 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0116 02:48:21.426485  484156 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0116 02:48:21.472555  484156 cri.go:89] found id: ""
	I0116 02:48:21.472663  484156 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0116 02:48:21.481802  484156 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0116 02:48:21.490985  484156 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0116 02:48:21.500142  484156 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0116 02:48:21.500212  484156 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0116 02:48:21.556072  484156 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0116 02:48:21.557107  484156 kubeadm.go:322] [preflight] Running pre-flight checks
	I0116 02:48:21.690290  484156 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0116 02:48:21.690451  484156 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0116 02:48:21.690590  484156 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0116 02:48:21.922692  484156 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0116 02:48:21.923501  484156 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0116 02:48:21.923577  484156 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0116 02:48:22.065833  484156 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0116 02:48:22.069424  484156 out.go:204]   - Generating certificates and keys ...
	I0116 02:48:22.069954  484156 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0116 02:48:22.070038  484156 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0116 02:48:22.242957  484156 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0116 02:48:22.367910  484156 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0116 02:48:22.514931  484156 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0116 02:48:22.708528  484156 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0116 02:48:22.831793  484156 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0116 02:48:22.832448  484156 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-873808 localhost] and IPs [192.168.39.242 127.0.0.1 ::1]
	I0116 02:48:23.050669  484156 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0116 02:48:23.051054  484156 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-873808 localhost] and IPs [192.168.39.242 127.0.0.1 ::1]
	I0116 02:48:23.451959  484156 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0116 02:48:23.760081  484156 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0116 02:48:23.919324  484156 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0116 02:48:23.919544  484156 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0116 02:48:24.074258  484156 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0116 02:48:24.296293  484156 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0116 02:48:24.380481  484156 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0116 02:48:24.531180  484156 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0116 02:48:24.531944  484156 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0116 02:48:24.534246  484156 out.go:204]   - Booting up control plane ...
	I0116 02:48:24.534400  484156 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0116 02:48:24.539607  484156 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0116 02:48:24.544029  484156 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0116 02:48:24.544321  484156 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0116 02:48:24.547248  484156 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0116 02:48:33.550193  484156 kubeadm.go:322] [apiclient] All control plane components are healthy after 9.003333 seconds
	I0116 02:48:33.550348  484156 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0116 02:48:33.566798  484156 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I0116 02:48:34.093459  484156 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0116 02:48:34.093676  484156 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-873808 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0116 02:48:34.602829  484156 kubeadm.go:322] [bootstrap-token] Using token: ifbn4n.2r2v2o1q16gwawf0
	I0116 02:48:34.604615  484156 out.go:204]   - Configuring RBAC rules ...
	I0116 02:48:34.604763  484156 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0116 02:48:34.612595  484156 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0116 02:48:34.621127  484156 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0116 02:48:34.633438  484156 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0116 02:48:34.636669  484156 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0116 02:48:34.641337  484156 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0116 02:48:34.670149  484156 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0116 02:48:34.989472  484156 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0116 02:48:35.082767  484156 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0116 02:48:35.083977  484156 kubeadm.go:322] 
	I0116 02:48:35.084073  484156 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0116 02:48:35.084092  484156 kubeadm.go:322] 
	I0116 02:48:35.084158  484156 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0116 02:48:35.084167  484156 kubeadm.go:322] 
	I0116 02:48:35.084195  484156 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0116 02:48:35.084246  484156 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0116 02:48:35.084321  484156 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0116 02:48:35.084329  484156 kubeadm.go:322] 
	I0116 02:48:35.084386  484156 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0116 02:48:35.084482  484156 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0116 02:48:35.084537  484156 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0116 02:48:35.084543  484156 kubeadm.go:322] 
	I0116 02:48:35.084607  484156 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0116 02:48:35.084683  484156 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0116 02:48:35.084691  484156 kubeadm.go:322] 
	I0116 02:48:35.084769  484156 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token ifbn4n.2r2v2o1q16gwawf0 \
	I0116 02:48:35.084894  484156 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:ab75761e1119a1709a216b682cd7b1dcce0641115df5f236b54b16e4f66aa044 \
	I0116 02:48:35.084949  484156 kubeadm.go:322]     --control-plane 
	I0116 02:48:35.084974  484156 kubeadm.go:322] 
	I0116 02:48:35.085100  484156 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0116 02:48:35.085115  484156 kubeadm.go:322] 
	I0116 02:48:35.085183  484156 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token ifbn4n.2r2v2o1q16gwawf0 \
	I0116 02:48:35.085270  484156 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:ab75761e1119a1709a216b682cd7b1dcce0641115df5f236b54b16e4f66aa044 
	I0116 02:48:35.085930  484156 kubeadm.go:322] W0116 02:48:21.548565     957 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0116 02:48:35.086052  484156 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0116 02:48:35.086188  484156 kubeadm.go:322] W0116 02:48:24.534005     957 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0116 02:48:35.086315  484156 kubeadm.go:322] W0116 02:48:24.535716     957 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0116 02:48:35.086352  484156 cni.go:84] Creating CNI manager for ""
	I0116 02:48:35.086362  484156 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 02:48:35.089272  484156 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0116 02:48:35.090895  484156 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0116 02:48:35.120029  484156 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0116 02:48:35.145251  484156 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0116 02:48:35.145341  484156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:48:35.145340  484156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=6e8fa5f64d0e7272be43ff25ed3826261f0a2578 minikube.k8s.io/name=ingress-addon-legacy-873808 minikube.k8s.io/updated_at=2024_01_16T02_48_35_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:48:35.517765  484156 ops.go:34] apiserver oom_adj: -16
	I0116 02:48:35.517941  484156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:48:36.018339  484156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:48:36.518702  484156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:48:37.018626  484156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:48:37.518911  484156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:48:38.018680  484156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:48:38.518990  484156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:48:39.018275  484156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:48:39.519040  484156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:48:40.018020  484156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:48:40.519044  484156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:48:41.018900  484156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:48:41.518196  484156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:48:42.018364  484156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:48:42.518525  484156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:48:43.018744  484156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:48:43.518580  484156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:48:44.018312  484156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:48:44.518242  484156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:48:45.018232  484156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:48:45.518072  484156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:48:46.018239  484156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:48:46.518918  484156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:48:47.018865  484156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:48:47.518887  484156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:48:48.018808  484156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:48:48.518009  484156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:48:49.018098  484156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:48:49.518338  484156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:48:50.018058  484156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:48:50.518174  484156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:48:50.622017  484156 kubeadm.go:1088] duration metric: took 15.476751711s to wait for elevateKubeSystemPrivileges.
	I0116 02:48:50.622049  484156 kubeadm.go:406] StartCluster complete in 29.195722284s
	I0116 02:48:50.622068  484156 settings.go:142] acquiring lock: {Name:mk3ded99c06d285b174e76622bb0741c8dd2d8fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:48:50.622161  484156 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17965-468241/kubeconfig
	I0116 02:48:50.623046  484156 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17965-468241/kubeconfig: {Name:mkac3c63bbcba9b4fa1bea3480797757d703fb9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:48:50.623338  484156 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0116 02:48:50.623423  484156 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0116 02:48:50.623514  484156 addons.go:69] Setting storage-provisioner=true in profile "ingress-addon-legacy-873808"
	I0116 02:48:50.623541  484156 addons.go:234] Setting addon storage-provisioner=true in "ingress-addon-legacy-873808"
	I0116 02:48:50.623553  484156 addons.go:69] Setting default-storageclass=true in profile "ingress-addon-legacy-873808"
	I0116 02:48:50.623582  484156 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-873808"
	I0116 02:48:50.623595  484156 host.go:66] Checking if "ingress-addon-legacy-873808" exists ...
	I0116 02:48:50.623680  484156 config.go:182] Loaded profile config "ingress-addon-legacy-873808": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I0116 02:48:50.624123  484156 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 02:48:50.624134  484156 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 02:48:50.624162  484156 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:48:50.624176  484156 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:48:50.624132  484156 kapi.go:59] client config for ingress-addon-legacy-873808: &rest.Config{Host:"https://192.168.39.242:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17965-468241/.minikube/profiles/ingress-addon-legacy-873808/client.crt", KeyFile:"/home/jenkins/minikube-integration/17965-468241/.minikube/profiles/ingress-addon-legacy-873808/client.key", CAFile:"/home/jenkins/minikube-integration/17965-468241/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]
uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c19be0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0116 02:48:50.624997  484156 cert_rotation.go:137] Starting client certificate rotation controller
	I0116 02:48:50.641535  484156 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33371
	I0116 02:48:50.641659  484156 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46655
	I0116 02:48:50.642081  484156 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:48:50.642168  484156 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:48:50.642722  484156 main.go:141] libmachine: Using API Version  1
	I0116 02:48:50.642736  484156 main.go:141] libmachine: Using API Version  1
	I0116 02:48:50.642753  484156 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:48:50.642754  484156 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:48:50.643214  484156 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:48:50.643285  484156 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:48:50.643559  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) Calling .GetState
	I0116 02:48:50.643872  484156 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 02:48:50.643910  484156 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:48:50.646430  484156 kapi.go:59] client config for ingress-addon-legacy-873808: &rest.Config{Host:"https://192.168.39.242:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17965-468241/.minikube/profiles/ingress-addon-legacy-873808/client.crt", KeyFile:"/home/jenkins/minikube-integration/17965-468241/.minikube/profiles/ingress-addon-legacy-873808/client.key", CAFile:"/home/jenkins/minikube-integration/17965-468241/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]
uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c19be0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0116 02:48:50.646809  484156 addons.go:234] Setting addon default-storageclass=true in "ingress-addon-legacy-873808"
	I0116 02:48:50.646856  484156 host.go:66] Checking if "ingress-addon-legacy-873808" exists ...
	I0116 02:48:50.647374  484156 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 02:48:50.647428  484156 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:48:50.660783  484156 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39715
	I0116 02:48:50.661386  484156 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:48:50.662067  484156 main.go:141] libmachine: Using API Version  1
	I0116 02:48:50.662097  484156 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:48:50.662528  484156 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:48:50.662767  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) Calling .GetState
	I0116 02:48:50.663905  484156 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42249
	I0116 02:48:50.664410  484156 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:48:50.664986  484156 main.go:141] libmachine: Using API Version  1
	I0116 02:48:50.665013  484156 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:48:50.665057  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) Calling .DriverName
	I0116 02:48:50.667664  484156 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 02:48:50.665366  484156 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:48:50.670545  484156 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0116 02:48:50.670570  484156 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0116 02:48:50.670602  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) Calling .GetSSHHostname
	I0116 02:48:50.671223  484156 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 02:48:50.671285  484156 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:48:50.674441  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) DBG | domain ingress-addon-legacy-873808 has defined MAC address 52:54:00:f3:e8:86 in network mk-ingress-addon-legacy-873808
	I0116 02:48:50.674955  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:e8:86", ip: ""} in network mk-ingress-addon-legacy-873808: {Iface:virbr1 ExpiryTime:2024-01-16 03:48:03 +0000 UTC Type:0 Mac:52:54:00:f3:e8:86 Iaid: IPaddr:192.168.39.242 Prefix:24 Hostname:ingress-addon-legacy-873808 Clientid:01:52:54:00:f3:e8:86}
	I0116 02:48:50.675003  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) DBG | domain ingress-addon-legacy-873808 has defined IP address 192.168.39.242 and MAC address 52:54:00:f3:e8:86 in network mk-ingress-addon-legacy-873808
	I0116 02:48:50.675196  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) Calling .GetSSHPort
	I0116 02:48:50.675473  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) Calling .GetSSHKeyPath
	I0116 02:48:50.675698  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) Calling .GetSSHUsername
	I0116 02:48:50.675943  484156 sshutil.go:53] new ssh client: &{IP:192.168.39.242 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/ingress-addon-legacy-873808/id_rsa Username:docker}
	I0116 02:48:50.688129  484156 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34169
	I0116 02:48:50.688624  484156 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:48:50.689133  484156 main.go:141] libmachine: Using API Version  1
	I0116 02:48:50.689163  484156 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:48:50.689560  484156 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:48:50.689780  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) Calling .GetState
	I0116 02:48:50.691653  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) Calling .DriverName
	I0116 02:48:50.691938  484156 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0116 02:48:50.691956  484156 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0116 02:48:50.691981  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) Calling .GetSSHHostname
	I0116 02:48:50.695600  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) DBG | domain ingress-addon-legacy-873808 has defined MAC address 52:54:00:f3:e8:86 in network mk-ingress-addon-legacy-873808
	I0116 02:48:50.696098  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:e8:86", ip: ""} in network mk-ingress-addon-legacy-873808: {Iface:virbr1 ExpiryTime:2024-01-16 03:48:03 +0000 UTC Type:0 Mac:52:54:00:f3:e8:86 Iaid: IPaddr:192.168.39.242 Prefix:24 Hostname:ingress-addon-legacy-873808 Clientid:01:52:54:00:f3:e8:86}
	I0116 02:48:50.696160  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) DBG | domain ingress-addon-legacy-873808 has defined IP address 192.168.39.242 and MAC address 52:54:00:f3:e8:86 in network mk-ingress-addon-legacy-873808
	I0116 02:48:50.696271  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) Calling .GetSSHPort
	I0116 02:48:50.696497  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) Calling .GetSSHKeyPath
	I0116 02:48:50.696698  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) Calling .GetSSHUsername
	I0116 02:48:50.696888  484156 sshutil.go:53] new ssh client: &{IP:192.168.39.242 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/ingress-addon-legacy-873808/id_rsa Username:docker}
	I0116 02:48:50.844132  484156 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0116 02:48:50.906552  484156 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0116 02:48:50.921094  484156 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0116 02:48:51.226339  484156 kapi.go:248] "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-873808" context rescaled to 1 replicas
	I0116 02:48:51.226403  484156 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.242 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0116 02:48:51.229114  484156 out.go:177] * Verifying Kubernetes components...
	I0116 02:48:51.231206  484156 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 02:48:51.400559  484156 start.go:929] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0116 02:48:51.434848  484156 main.go:141] libmachine: Making call to close driver server
	I0116 02:48:51.434885  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) Calling .Close
	I0116 02:48:51.435198  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) DBG | Closing plugin on server side
	I0116 02:48:51.435253  484156 main.go:141] libmachine: Successfully made call to close driver server
	I0116 02:48:51.435267  484156 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 02:48:51.435280  484156 main.go:141] libmachine: Making call to close driver server
	I0116 02:48:51.435293  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) Calling .Close
	I0116 02:48:51.435561  484156 main.go:141] libmachine: Successfully made call to close driver server
	I0116 02:48:51.435582  484156 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 02:48:51.435608  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) DBG | Closing plugin on server side
	I0116 02:48:51.445422  484156 main.go:141] libmachine: Making call to close driver server
	I0116 02:48:51.445450  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) Calling .Close
	I0116 02:48:51.445765  484156 main.go:141] libmachine: Successfully made call to close driver server
	I0116 02:48:51.445791  484156 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 02:48:51.634059  484156 main.go:141] libmachine: Making call to close driver server
	I0116 02:48:51.634090  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) Calling .Close
	I0116 02:48:51.634421  484156 main.go:141] libmachine: Successfully made call to close driver server
	I0116 02:48:51.634445  484156 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 02:48:51.634455  484156 main.go:141] libmachine: Making call to close driver server
	I0116 02:48:51.634464  484156 main.go:141] libmachine: (ingress-addon-legacy-873808) Calling .Close
	I0116 02:48:51.634685  484156 main.go:141] libmachine: Successfully made call to close driver server
	I0116 02:48:51.634703  484156 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 02:48:51.637085  484156 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0116 02:48:51.634942  484156 kapi.go:59] client config for ingress-addon-legacy-873808: &rest.Config{Host:"https://192.168.39.242:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17965-468241/.minikube/profiles/ingress-addon-legacy-873808/client.crt", KeyFile:"/home/jenkins/minikube-integration/17965-468241/.minikube/profiles/ingress-addon-legacy-873808/client.key", CAFile:"/home/jenkins/minikube-integration/17965-468241/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]
uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c19be0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0116 02:48:51.639088  484156 addons.go:505] enable addons completed in 1.015665967s: enabled=[default-storageclass storage-provisioner]
	I0116 02:48:51.639396  484156 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-873808" to be "Ready" ...
	I0116 02:48:51.652920  484156 node_ready.go:49] node "ingress-addon-legacy-873808" has status "Ready":"True"
	I0116 02:48:51.652950  484156 node_ready.go:38] duration metric: took 13.529095ms waiting for node "ingress-addon-legacy-873808" to be "Ready" ...
	I0116 02:48:51.652962  484156 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 02:48:51.670428  484156 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-hwqvx" in "kube-system" namespace to be "Ready" ...
	I0116 02:48:53.678215  484156 pod_ready.go:102] pod "coredns-66bff467f8-hwqvx" in "kube-system" namespace has status "Ready":"False"
	I0116 02:48:56.178042  484156 pod_ready.go:102] pod "coredns-66bff467f8-hwqvx" in "kube-system" namespace has status "Ready":"False"
	I0116 02:48:58.178524  484156 pod_ready.go:102] pod "coredns-66bff467f8-hwqvx" in "kube-system" namespace has status "Ready":"False"
	I0116 02:49:00.178762  484156 pod_ready.go:102] pod "coredns-66bff467f8-hwqvx" in "kube-system" namespace has status "Ready":"False"
	I0116 02:49:02.179829  484156 pod_ready.go:102] pod "coredns-66bff467f8-hwqvx" in "kube-system" namespace has status "Ready":"False"
	I0116 02:49:02.678842  484156 pod_ready.go:92] pod "coredns-66bff467f8-hwqvx" in "kube-system" namespace has status "Ready":"True"
	I0116 02:49:02.678874  484156 pod_ready.go:81] duration metric: took 11.008412397s waiting for pod "coredns-66bff467f8-hwqvx" in "kube-system" namespace to be "Ready" ...
	I0116 02:49:02.678883  484156 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-873808" in "kube-system" namespace to be "Ready" ...
	I0116 02:49:02.685951  484156 pod_ready.go:92] pod "etcd-ingress-addon-legacy-873808" in "kube-system" namespace has status "Ready":"True"
	I0116 02:49:02.685980  484156 pod_ready.go:81] duration metric: took 7.089373ms waiting for pod "etcd-ingress-addon-legacy-873808" in "kube-system" namespace to be "Ready" ...
	I0116 02:49:02.685989  484156 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-873808" in "kube-system" namespace to be "Ready" ...
	I0116 02:49:02.691421  484156 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-873808" in "kube-system" namespace has status "Ready":"True"
	I0116 02:49:02.691451  484156 pod_ready.go:81] duration metric: took 5.455968ms waiting for pod "kube-apiserver-ingress-addon-legacy-873808" in "kube-system" namespace to be "Ready" ...
	I0116 02:49:02.691461  484156 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-873808" in "kube-system" namespace to be "Ready" ...
	I0116 02:49:02.697217  484156 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-873808" in "kube-system" namespace has status "Ready":"True"
	I0116 02:49:02.697242  484156 pod_ready.go:81] duration metric: took 5.774546ms waiting for pod "kube-controller-manager-ingress-addon-legacy-873808" in "kube-system" namespace to be "Ready" ...
	I0116 02:49:02.697251  484156 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-mvnvh" in "kube-system" namespace to be "Ready" ...
	I0116 02:49:02.702624  484156 pod_ready.go:92] pod "kube-proxy-mvnvh" in "kube-system" namespace has status "Ready":"True"
	I0116 02:49:02.702655  484156 pod_ready.go:81] duration metric: took 5.397088ms waiting for pod "kube-proxy-mvnvh" in "kube-system" namespace to be "Ready" ...
	I0116 02:49:02.702667  484156 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-873808" in "kube-system" namespace to be "Ready" ...
	I0116 02:49:02.872116  484156 request.go:629] Waited for 169.369548ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.242:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ingress-addon-legacy-873808
	I0116 02:49:03.071257  484156 request.go:629] Waited for 195.310627ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.242:8443/api/v1/nodes/ingress-addon-legacy-873808
	I0116 02:49:03.075105  484156 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-873808" in "kube-system" namespace has status "Ready":"True"
	I0116 02:49:03.075133  484156 pod_ready.go:81] duration metric: took 372.456512ms waiting for pod "kube-scheduler-ingress-addon-legacy-873808" in "kube-system" namespace to be "Ready" ...
	I0116 02:49:03.075147  484156 pod_ready.go:38] duration metric: took 11.422176031s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 02:49:03.075164  484156 api_server.go:52] waiting for apiserver process to appear ...
	I0116 02:49:03.075235  484156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 02:49:03.089262  484156 api_server.go:72] duration metric: took 11.862814994s to wait for apiserver process to appear ...
	I0116 02:49:03.089298  484156 api_server.go:88] waiting for apiserver healthz status ...
	I0116 02:49:03.089324  484156 api_server.go:253] Checking apiserver healthz at https://192.168.39.242:8443/healthz ...
	I0116 02:49:03.095807  484156 api_server.go:279] https://192.168.39.242:8443/healthz returned 200:
	ok
	I0116 02:49:03.096883  484156 api_server.go:141] control plane version: v1.18.20
	I0116 02:49:03.096910  484156 api_server.go:131] duration metric: took 7.604311ms to wait for apiserver health ...
	I0116 02:49:03.096918  484156 system_pods.go:43] waiting for kube-system pods to appear ...
	I0116 02:49:03.271568  484156 request.go:629] Waited for 174.559842ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.242:8443/api/v1/namespaces/kube-system/pods
	I0116 02:49:03.277492  484156 system_pods.go:59] 7 kube-system pods found
	I0116 02:49:03.277523  484156 system_pods.go:61] "coredns-66bff467f8-hwqvx" [ab64aaef-3aef-4b36-b7d4-3ad702e17718] Running
	I0116 02:49:03.277528  484156 system_pods.go:61] "etcd-ingress-addon-legacy-873808" [3963bc37-1006-4dbc-9d12-489318de22c9] Running
	I0116 02:49:03.277533  484156 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-873808" [f183fc79-1793-420d-92a1-5143529b2d35] Running
	I0116 02:49:03.277537  484156 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-873808" [d8b887ef-128c-41ad-92e3-bccf5e508d10] Running
	I0116 02:49:03.277541  484156 system_pods.go:61] "kube-proxy-mvnvh" [4184569d-f5d0-40b1-802b-cd1f558304d3] Running
	I0116 02:49:03.277545  484156 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-873808" [55b7bf41-8bae-4701-abf5-64d51cca55a4] Running
	I0116 02:49:03.277548  484156 system_pods.go:61] "storage-provisioner" [7cd7c29b-22f5-4ea4-aea0-a432b5118d7d] Running
	I0116 02:49:03.277557  484156 system_pods.go:74] duration metric: took 180.630711ms to wait for pod list to return data ...
	I0116 02:49:03.277565  484156 default_sa.go:34] waiting for default service account to be created ...
	I0116 02:49:03.472105  484156 request.go:629] Waited for 194.404151ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.242:8443/api/v1/namespaces/default/serviceaccounts
	I0116 02:49:03.475289  484156 default_sa.go:45] found service account: "default"
	I0116 02:49:03.475320  484156 default_sa.go:55] duration metric: took 197.746795ms for default service account to be created ...
	I0116 02:49:03.475330  484156 system_pods.go:116] waiting for k8s-apps to be running ...
	I0116 02:49:03.671839  484156 request.go:629] Waited for 196.434793ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.242:8443/api/v1/namespaces/kube-system/pods
	I0116 02:49:03.677737  484156 system_pods.go:86] 7 kube-system pods found
	I0116 02:49:03.677771  484156 system_pods.go:89] "coredns-66bff467f8-hwqvx" [ab64aaef-3aef-4b36-b7d4-3ad702e17718] Running
	I0116 02:49:03.677785  484156 system_pods.go:89] "etcd-ingress-addon-legacy-873808" [3963bc37-1006-4dbc-9d12-489318de22c9] Running
	I0116 02:49:03.677791  484156 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-873808" [f183fc79-1793-420d-92a1-5143529b2d35] Running
	I0116 02:49:03.677802  484156 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-873808" [d8b887ef-128c-41ad-92e3-bccf5e508d10] Running
	I0116 02:49:03.677808  484156 system_pods.go:89] "kube-proxy-mvnvh" [4184569d-f5d0-40b1-802b-cd1f558304d3] Running
	I0116 02:49:03.677814  484156 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-873808" [55b7bf41-8bae-4701-abf5-64d51cca55a4] Running
	I0116 02:49:03.677819  484156 system_pods.go:89] "storage-provisioner" [7cd7c29b-22f5-4ea4-aea0-a432b5118d7d] Running
	I0116 02:49:03.677828  484156 system_pods.go:126] duration metric: took 202.492217ms to wait for k8s-apps to be running ...
	I0116 02:49:03.677838  484156 system_svc.go:44] waiting for kubelet service to be running ....
	I0116 02:49:03.677895  484156 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 02:49:03.692441  484156 system_svc.go:56] duration metric: took 14.588614ms WaitForService to wait for kubelet.
	I0116 02:49:03.692481  484156 kubeadm.go:581] duration metric: took 12.466042205s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0116 02:49:03.692511  484156 node_conditions.go:102] verifying NodePressure condition ...
	I0116 02:49:03.872065  484156 request.go:629] Waited for 179.41204ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.242:8443/api/v1/nodes
	I0116 02:49:03.876771  484156 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 02:49:03.876817  484156 node_conditions.go:123] node cpu capacity is 2
	I0116 02:49:03.876831  484156 node_conditions.go:105] duration metric: took 184.314236ms to run NodePressure ...
	I0116 02:49:03.876843  484156 start.go:228] waiting for startup goroutines ...
	I0116 02:49:03.876857  484156 start.go:233] waiting for cluster config update ...
	I0116 02:49:03.876870  484156 start.go:242] writing updated cluster config ...
	I0116 02:49:03.877206  484156 ssh_runner.go:195] Run: rm -f paused
	I0116 02:49:03.929047  484156 start.go:600] kubectl: 1.29.0, cluster: 1.18.20 (minor skew: 11)
	I0116 02:49:03.931792  484156 out.go:177] 
	W0116 02:49:03.933737  484156 out.go:239] ! /usr/local/bin/kubectl is version 1.29.0, which may have incompatibilities with Kubernetes 1.18.20.
	I0116 02:49:03.935331  484156 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I0116 02:49:03.936894  484156 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-873808" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	-- Journal begins at Tue 2024-01-16 02:47:59 UTC, ends at Tue 2024-01-16 02:52:04 UTC. --
	Jan 16 02:52:03 ingress-addon-legacy-873808 crio[719]: time="2024-01-16 02:52:03.927969056Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705373523927953902,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:202825,},InodesUsed:&UInt64Value{Value:85,},},},}" file="go-grpc-middleware/chain.go:25" id=9a58c3f3-58bf-4638-a5d7-d44267cb1545 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 02:52:03 ingress-addon-legacy-873808 crio[719]: time="2024-01-16 02:52:03.928598724Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=759b364e-db83-4718-831e-19fdf78e18cb name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 02:52:03 ingress-addon-legacy-873808 crio[719]: time="2024-01-16 02:52:03.928645462Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=759b364e-db83-4718-831e-19fdf78e18cb name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 02:52:03 ingress-addon-legacy-873808 crio[719]: time="2024-01-16 02:52:03.928902366Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4785c0b68835ccf0f32dfdee9b6ba9546835387705a3914cec82748a8a9dfed8,PodSandboxId:caf1e7744a8e4ca6d6378856d2943aa4120c082f6cdbc6bc0cc10702a3bb9366,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1705373513144948823,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-dnv7g,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e8e50dcc-5619-4e0f-8c16-e525902168ef,},Annotations:map[string]string{io.kubernetes.container.hash: 9f5c718,io.kub
ernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f69e4441a726e60743f3b455745e4e6c6f864fed3b53b698ff1c60e6aee1149,PodSandboxId:d153845d50615e2faf0233e9f874ca617772b2d01633ff7cd893dcf3cf5d5151,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686,State:CONTAINER_RUNNING,CreatedAt:1705373371257386126,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6170b55d-0a92-4ef9-8ac8-5b5318785c81,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 9b5d2d4,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e648e5936de4fb8862b826a6abd299ffb3f2033209fa8d9253220af2dc60a976,PodSandboxId:e75ebfda94432c06cf770243fcd36f373da1720f0050183548c20fe84fc0b967,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,State:CONTAINER_EXITED,CreatedAt:1705373356411548434,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-hvvjf,io.k
ubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 5388ae77-8a82-4a73-b7d7-62af816e0395,},Annotations:map[string]string{io.kubernetes.container.hash: 6148bbd,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:d3131909becfd478e2b8ef744f7fd4a01252b2de57f26ba8651e2eb022f7e99b,PodSandboxId:ce665cd6b45ba83fe2fdd2a49b7fc5863ce1920a8aa5a38573aa861ca7f31c99,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58
fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1705373347252699563,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-249cr,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: be7fe168-c1dd-4dbf-9a5f-f1e45b96d2d1,},Annotations:map[string]string{io.kubernetes.container.hash: 27666c5d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b40bfa7e518c1c5b1173ab9fe9bae4f97a875bc924bbd3ae907d46b40677a33f,PodSandboxId:3528caf38c5a9f98010cb1093a4890fe663306c550249fe72252cc1da50ce57e,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-ce
rtgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1705373347086965973,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-x9hc5,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ce1407d2-4586-4a61-88bd-65a4deabf7a9,},Annotations:map[string]string{io.kubernetes.container.hash: 1141e1bd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a3048f8f47474d556463086cb9a7516f456656003e40898fa488cbb34e6cabb,PodSandboxId:8d8f5d854e2b9a219406e51052d71412da0d84c27c43906eef27f41a92940943,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&
ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705373332721089260,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cd7c29b-22f5-4ea4-aea0-a432b5118d7d,},Annotations:map[string]string{io.kubernetes.container.hash: 8697f819,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bfd91393e3a691d4826d9e12499723bc15e506075a22a6de44f4aee4794d934,PodSandboxId:e2f1fb5671ec3e66ffecbe8471cca6d45f0a6b93322989a7397eeb2b86978904,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Ima
ge:67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1705373332161458634,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-hwqvx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab64aaef-3aef-4b36-b7d4-3ad702e17718,},Annotations:map[string]string{io.kubernetes.container.hash: 7574c764,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9daaa37842805ade4aa44595ed72a9
5928a617d3984d8d059aa77ea8d43cabb,PodSandboxId:ab75c1392e06ae02f948625a3eb75c6fa10cd62bdb14d4953112d30aee7b5695,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1705373331805979717,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mvnvh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4184569d-f5d0-40b1-802b-cd1f558304d3,},Annotations:map[string]string{io.kubernetes.container.hash: 4fe6bb30,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3bda79e72af777eb3a141d27f7cee637b74b98cbd4a2ead1225f4f21f78e3c25,PodSan
dboxId:13d6eb184941632275e69572caff09f7d5c366482459c80a1d15d7def4bf55c2,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1705373307849487060,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-873808,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f72c4b0405a72fa57cb4b7e8ce6fab22,},Annotations:map[string]string{io.kubernetes.container.hash: ea6512b3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a588da4f60d930351e119b0fdf936cd7f03ef1fb330a9a9bf5af2b478fed08ed,PodSandboxId:4e122a0c30b55dd8efc42e8deec973b33efe002
cbb3e4a44848039092ccfe7bd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1705373306632324492,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-873808,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59cfc9c0af1108127d65b437ff5c367bf61527e4379ee10d9dc747599551f0fb,PodSandboxId:41ecb04315f7c2318fffbcfcace0dcd4f0b46cfac1f55
6419ba8cfc167407a79,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1705373306478293034,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-873808,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2778eb9f038ed007254b179ee8a954a,},Annotations:map[string]string{io.kubernetes.container.hash: 5d5e5099,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9484c47348031b007cc06d638e27434fe72df9a290be6952bf1dc77ed7eede63,PodSandboxId:d92cd78814b137d235c8f944b9b11aee09964ae77d658900b3a
f8cbc9f5be70d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1705373306310643441,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-873808,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=759b364e-db83-4718-831e-19fdf78e18cb name=/runtime.v1.RuntimeServic
e/ListContainers
	Jan 16 02:52:03 ingress-addon-legacy-873808 crio[719]: time="2024-01-16 02:52:03.970656686Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=9787f208-0a13-45f1-be01-6fdacadbc986 name=/runtime.v1.RuntimeService/Version
	Jan 16 02:52:03 ingress-addon-legacy-873808 crio[719]: time="2024-01-16 02:52:03.970721798Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=9787f208-0a13-45f1-be01-6fdacadbc986 name=/runtime.v1.RuntimeService/Version
	Jan 16 02:52:03 ingress-addon-legacy-873808 crio[719]: time="2024-01-16 02:52:03.971961883Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=9898ca68-8323-40a1-b6a2-e435bfa80e23 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 02:52:03 ingress-addon-legacy-873808 crio[719]: time="2024-01-16 02:52:03.972596619Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705373523972578742,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:202825,},InodesUsed:&UInt64Value{Value:85,},},},}" file="go-grpc-middleware/chain.go:25" id=9898ca68-8323-40a1-b6a2-e435bfa80e23 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 02:52:03 ingress-addon-legacy-873808 crio[719]: time="2024-01-16 02:52:03.973287207Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=c016a733-b3fc-43d0-80f1-4a64b680130e name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 02:52:03 ingress-addon-legacy-873808 crio[719]: time="2024-01-16 02:52:03.973338186Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=c016a733-b3fc-43d0-80f1-4a64b680130e name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 02:52:03 ingress-addon-legacy-873808 crio[719]: time="2024-01-16 02:52:03.973614586Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4785c0b68835ccf0f32dfdee9b6ba9546835387705a3914cec82748a8a9dfed8,PodSandboxId:caf1e7744a8e4ca6d6378856d2943aa4120c082f6cdbc6bc0cc10702a3bb9366,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1705373513144948823,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-dnv7g,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e8e50dcc-5619-4e0f-8c16-e525902168ef,},Annotations:map[string]string{io.kubernetes.container.hash: 9f5c718,io.kub
ernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f69e4441a726e60743f3b455745e4e6c6f864fed3b53b698ff1c60e6aee1149,PodSandboxId:d153845d50615e2faf0233e9f874ca617772b2d01633ff7cd893dcf3cf5d5151,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686,State:CONTAINER_RUNNING,CreatedAt:1705373371257386126,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6170b55d-0a92-4ef9-8ac8-5b5318785c81,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 9b5d2d4,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e648e5936de4fb8862b826a6abd299ffb3f2033209fa8d9253220af2dc60a976,PodSandboxId:e75ebfda94432c06cf770243fcd36f373da1720f0050183548c20fe84fc0b967,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,State:CONTAINER_EXITED,CreatedAt:1705373356411548434,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-hvvjf,io.k
ubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 5388ae77-8a82-4a73-b7d7-62af816e0395,},Annotations:map[string]string{io.kubernetes.container.hash: 6148bbd,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:d3131909becfd478e2b8ef744f7fd4a01252b2de57f26ba8651e2eb022f7e99b,PodSandboxId:ce665cd6b45ba83fe2fdd2a49b7fc5863ce1920a8aa5a38573aa861ca7f31c99,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58
fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1705373347252699563,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-249cr,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: be7fe168-c1dd-4dbf-9a5f-f1e45b96d2d1,},Annotations:map[string]string{io.kubernetes.container.hash: 27666c5d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b40bfa7e518c1c5b1173ab9fe9bae4f97a875bc924bbd3ae907d46b40677a33f,PodSandboxId:3528caf38c5a9f98010cb1093a4890fe663306c550249fe72252cc1da50ce57e,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-ce
rtgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1705373347086965973,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-x9hc5,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ce1407d2-4586-4a61-88bd-65a4deabf7a9,},Annotations:map[string]string{io.kubernetes.container.hash: 1141e1bd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a3048f8f47474d556463086cb9a7516f456656003e40898fa488cbb34e6cabb,PodSandboxId:8d8f5d854e2b9a219406e51052d71412da0d84c27c43906eef27f41a92940943,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&
ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705373332721089260,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cd7c29b-22f5-4ea4-aea0-a432b5118d7d,},Annotations:map[string]string{io.kubernetes.container.hash: 8697f819,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bfd91393e3a691d4826d9e12499723bc15e506075a22a6de44f4aee4794d934,PodSandboxId:e2f1fb5671ec3e66ffecbe8471cca6d45f0a6b93322989a7397eeb2b86978904,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Ima
ge:67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1705373332161458634,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-hwqvx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab64aaef-3aef-4b36-b7d4-3ad702e17718,},Annotations:map[string]string{io.kubernetes.container.hash: 7574c764,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9daaa37842805ade4aa44595ed72a9
5928a617d3984d8d059aa77ea8d43cabb,PodSandboxId:ab75c1392e06ae02f948625a3eb75c6fa10cd62bdb14d4953112d30aee7b5695,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1705373331805979717,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mvnvh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4184569d-f5d0-40b1-802b-cd1f558304d3,},Annotations:map[string]string{io.kubernetes.container.hash: 4fe6bb30,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3bda79e72af777eb3a141d27f7cee637b74b98cbd4a2ead1225f4f21f78e3c25,PodSan
dboxId:13d6eb184941632275e69572caff09f7d5c366482459c80a1d15d7def4bf55c2,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1705373307849487060,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-873808,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f72c4b0405a72fa57cb4b7e8ce6fab22,},Annotations:map[string]string{io.kubernetes.container.hash: ea6512b3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a588da4f60d930351e119b0fdf936cd7f03ef1fb330a9a9bf5af2b478fed08ed,PodSandboxId:4e122a0c30b55dd8efc42e8deec973b33efe002
cbb3e4a44848039092ccfe7bd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1705373306632324492,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-873808,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59cfc9c0af1108127d65b437ff5c367bf61527e4379ee10d9dc747599551f0fb,PodSandboxId:41ecb04315f7c2318fffbcfcace0dcd4f0b46cfac1f55
6419ba8cfc167407a79,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1705373306478293034,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-873808,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2778eb9f038ed007254b179ee8a954a,},Annotations:map[string]string{io.kubernetes.container.hash: 5d5e5099,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9484c47348031b007cc06d638e27434fe72df9a290be6952bf1dc77ed7eede63,PodSandboxId:d92cd78814b137d235c8f944b9b11aee09964ae77d658900b3a
f8cbc9f5be70d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1705373306310643441,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-873808,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=c016a733-b3fc-43d0-80f1-4a64b680130e name=/runtime.v1.RuntimeServic
e/ListContainers
	Jan 16 02:52:04 ingress-addon-legacy-873808 crio[719]: time="2024-01-16 02:52:04.019812110Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=7eda677f-bb61-4414-a1d4-a8f59e1753ff name=/runtime.v1.RuntimeService/Version
	Jan 16 02:52:04 ingress-addon-legacy-873808 crio[719]: time="2024-01-16 02:52:04.019903048Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=7eda677f-bb61-4414-a1d4-a8f59e1753ff name=/runtime.v1.RuntimeService/Version
	Jan 16 02:52:04 ingress-addon-legacy-873808 crio[719]: time="2024-01-16 02:52:04.021471991Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=72051390-7739-434a-9c48-77e6f78a30ea name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 02:52:04 ingress-addon-legacy-873808 crio[719]: time="2024-01-16 02:52:04.021956017Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705373524021939644,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:202825,},InodesUsed:&UInt64Value{Value:85,},},},}" file="go-grpc-middleware/chain.go:25" id=72051390-7739-434a-9c48-77e6f78a30ea name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 02:52:04 ingress-addon-legacy-873808 crio[719]: time="2024-01-16 02:52:04.022753305Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=be65e284-9d2b-4310-9229-f767c300acfe name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 02:52:04 ingress-addon-legacy-873808 crio[719]: time="2024-01-16 02:52:04.022807146Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=be65e284-9d2b-4310-9229-f767c300acfe name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 02:52:04 ingress-addon-legacy-873808 crio[719]: time="2024-01-16 02:52:04.023092381Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4785c0b68835ccf0f32dfdee9b6ba9546835387705a3914cec82748a8a9dfed8,PodSandboxId:caf1e7744a8e4ca6d6378856d2943aa4120c082f6cdbc6bc0cc10702a3bb9366,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1705373513144948823,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-dnv7g,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e8e50dcc-5619-4e0f-8c16-e525902168ef,},Annotations:map[string]string{io.kubernetes.container.hash: 9f5c718,io.kub
ernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f69e4441a726e60743f3b455745e4e6c6f864fed3b53b698ff1c60e6aee1149,PodSandboxId:d153845d50615e2faf0233e9f874ca617772b2d01633ff7cd893dcf3cf5d5151,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686,State:CONTAINER_RUNNING,CreatedAt:1705373371257386126,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6170b55d-0a92-4ef9-8ac8-5b5318785c81,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 9b5d2d4,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e648e5936de4fb8862b826a6abd299ffb3f2033209fa8d9253220af2dc60a976,PodSandboxId:e75ebfda94432c06cf770243fcd36f373da1720f0050183548c20fe84fc0b967,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,State:CONTAINER_EXITED,CreatedAt:1705373356411548434,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-hvvjf,io.k
ubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 5388ae77-8a82-4a73-b7d7-62af816e0395,},Annotations:map[string]string{io.kubernetes.container.hash: 6148bbd,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:d3131909becfd478e2b8ef744f7fd4a01252b2de57f26ba8651e2eb022f7e99b,PodSandboxId:ce665cd6b45ba83fe2fdd2a49b7fc5863ce1920a8aa5a38573aa861ca7f31c99,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58
fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1705373347252699563,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-249cr,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: be7fe168-c1dd-4dbf-9a5f-f1e45b96d2d1,},Annotations:map[string]string{io.kubernetes.container.hash: 27666c5d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b40bfa7e518c1c5b1173ab9fe9bae4f97a875bc924bbd3ae907d46b40677a33f,PodSandboxId:3528caf38c5a9f98010cb1093a4890fe663306c550249fe72252cc1da50ce57e,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-ce
rtgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1705373347086965973,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-x9hc5,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ce1407d2-4586-4a61-88bd-65a4deabf7a9,},Annotations:map[string]string{io.kubernetes.container.hash: 1141e1bd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a3048f8f47474d556463086cb9a7516f456656003e40898fa488cbb34e6cabb,PodSandboxId:8d8f5d854e2b9a219406e51052d71412da0d84c27c43906eef27f41a92940943,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&
ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705373332721089260,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cd7c29b-22f5-4ea4-aea0-a432b5118d7d,},Annotations:map[string]string{io.kubernetes.container.hash: 8697f819,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bfd91393e3a691d4826d9e12499723bc15e506075a22a6de44f4aee4794d934,PodSandboxId:e2f1fb5671ec3e66ffecbe8471cca6d45f0a6b93322989a7397eeb2b86978904,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Ima
ge:67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1705373332161458634,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-hwqvx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab64aaef-3aef-4b36-b7d4-3ad702e17718,},Annotations:map[string]string{io.kubernetes.container.hash: 7574c764,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9daaa37842805ade4aa44595ed72a9
5928a617d3984d8d059aa77ea8d43cabb,PodSandboxId:ab75c1392e06ae02f948625a3eb75c6fa10cd62bdb14d4953112d30aee7b5695,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1705373331805979717,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mvnvh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4184569d-f5d0-40b1-802b-cd1f558304d3,},Annotations:map[string]string{io.kubernetes.container.hash: 4fe6bb30,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3bda79e72af777eb3a141d27f7cee637b74b98cbd4a2ead1225f4f21f78e3c25,PodSan
dboxId:13d6eb184941632275e69572caff09f7d5c366482459c80a1d15d7def4bf55c2,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1705373307849487060,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-873808,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f72c4b0405a72fa57cb4b7e8ce6fab22,},Annotations:map[string]string{io.kubernetes.container.hash: ea6512b3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a588da4f60d930351e119b0fdf936cd7f03ef1fb330a9a9bf5af2b478fed08ed,PodSandboxId:4e122a0c30b55dd8efc42e8deec973b33efe002
cbb3e4a44848039092ccfe7bd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1705373306632324492,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-873808,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59cfc9c0af1108127d65b437ff5c367bf61527e4379ee10d9dc747599551f0fb,PodSandboxId:41ecb04315f7c2318fffbcfcace0dcd4f0b46cfac1f55
6419ba8cfc167407a79,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1705373306478293034,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-873808,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2778eb9f038ed007254b179ee8a954a,},Annotations:map[string]string{io.kubernetes.container.hash: 5d5e5099,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9484c47348031b007cc06d638e27434fe72df9a290be6952bf1dc77ed7eede63,PodSandboxId:d92cd78814b137d235c8f944b9b11aee09964ae77d658900b3a
f8cbc9f5be70d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1705373306310643441,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-873808,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=be65e284-9d2b-4310-9229-f767c300acfe name=/runtime.v1.RuntimeServic
e/ListContainers
	Jan 16 02:52:04 ingress-addon-legacy-873808 crio[719]: time="2024-01-16 02:52:04.060784103Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=ca7dfa07-380c-48c6-b3af-ed6b79a577f2 name=/runtime.v1.RuntimeService/Version
	Jan 16 02:52:04 ingress-addon-legacy-873808 crio[719]: time="2024-01-16 02:52:04.060850276Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=ca7dfa07-380c-48c6-b3af-ed6b79a577f2 name=/runtime.v1.RuntimeService/Version
	Jan 16 02:52:04 ingress-addon-legacy-873808 crio[719]: time="2024-01-16 02:52:04.062328702Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=b37b58cf-b6e3-47e4-a151-9a0c89942ed3 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 02:52:04 ingress-addon-legacy-873808 crio[719]: time="2024-01-16 02:52:04.062801128Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705373524062789274,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:202825,},InodesUsed:&UInt64Value{Value:85,},},},}" file="go-grpc-middleware/chain.go:25" id=b37b58cf-b6e3-47e4-a151-9a0c89942ed3 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 02:52:04 ingress-addon-legacy-873808 crio[719]: time="2024-01-16 02:52:04.063378879Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=e4901815-7cde-4317-bf4b-2828416b6637 name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 02:52:04 ingress-addon-legacy-873808 crio[719]: time="2024-01-16 02:52:04.063427347Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=e4901815-7cde-4317-bf4b-2828416b6637 name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 02:52:04 ingress-addon-legacy-873808 crio[719]: time="2024-01-16 02:52:04.067446675Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4785c0b68835ccf0f32dfdee9b6ba9546835387705a3914cec82748a8a9dfed8,PodSandboxId:caf1e7744a8e4ca6d6378856d2943aa4120c082f6cdbc6bc0cc10702a3bb9366,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1705373513144948823,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-dnv7g,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e8e50dcc-5619-4e0f-8c16-e525902168ef,},Annotations:map[string]string{io.kubernetes.container.hash: 9f5c718,io.kub
ernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f69e4441a726e60743f3b455745e4e6c6f864fed3b53b698ff1c60e6aee1149,PodSandboxId:d153845d50615e2faf0233e9f874ca617772b2d01633ff7cd893dcf3cf5d5151,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686,State:CONTAINER_RUNNING,CreatedAt:1705373371257386126,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6170b55d-0a92-4ef9-8ac8-5b5318785c81,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 9b5d2d4,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e648e5936de4fb8862b826a6abd299ffb3f2033209fa8d9253220af2dc60a976,PodSandboxId:e75ebfda94432c06cf770243fcd36f373da1720f0050183548c20fe84fc0b967,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,State:CONTAINER_EXITED,CreatedAt:1705373356411548434,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-hvvjf,io.k
ubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 5388ae77-8a82-4a73-b7d7-62af816e0395,},Annotations:map[string]string{io.kubernetes.container.hash: 6148bbd,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:d3131909becfd478e2b8ef744f7fd4a01252b2de57f26ba8651e2eb022f7e99b,PodSandboxId:ce665cd6b45ba83fe2fdd2a49b7fc5863ce1920a8aa5a38573aa861ca7f31c99,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58
fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1705373347252699563,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-249cr,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: be7fe168-c1dd-4dbf-9a5f-f1e45b96d2d1,},Annotations:map[string]string{io.kubernetes.container.hash: 27666c5d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b40bfa7e518c1c5b1173ab9fe9bae4f97a875bc924bbd3ae907d46b40677a33f,PodSandboxId:3528caf38c5a9f98010cb1093a4890fe663306c550249fe72252cc1da50ce57e,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-ce
rtgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1705373347086965973,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-x9hc5,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ce1407d2-4586-4a61-88bd-65a4deabf7a9,},Annotations:map[string]string{io.kubernetes.container.hash: 1141e1bd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a3048f8f47474d556463086cb9a7516f456656003e40898fa488cbb34e6cabb,PodSandboxId:8d8f5d854e2b9a219406e51052d71412da0d84c27c43906eef27f41a92940943,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&
ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705373332721089260,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cd7c29b-22f5-4ea4-aea0-a432b5118d7d,},Annotations:map[string]string{io.kubernetes.container.hash: 8697f819,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bfd91393e3a691d4826d9e12499723bc15e506075a22a6de44f4aee4794d934,PodSandboxId:e2f1fb5671ec3e66ffecbe8471cca6d45f0a6b93322989a7397eeb2b86978904,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Ima
ge:67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1705373332161458634,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-hwqvx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab64aaef-3aef-4b36-b7d4-3ad702e17718,},Annotations:map[string]string{io.kubernetes.container.hash: 7574c764,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9daaa37842805ade4aa44595ed72a9
5928a617d3984d8d059aa77ea8d43cabb,PodSandboxId:ab75c1392e06ae02f948625a3eb75c6fa10cd62bdb14d4953112d30aee7b5695,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1705373331805979717,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mvnvh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4184569d-f5d0-40b1-802b-cd1f558304d3,},Annotations:map[string]string{io.kubernetes.container.hash: 4fe6bb30,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3bda79e72af777eb3a141d27f7cee637b74b98cbd4a2ead1225f4f21f78e3c25,PodSan
dboxId:13d6eb184941632275e69572caff09f7d5c366482459c80a1d15d7def4bf55c2,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1705373307849487060,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-873808,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f72c4b0405a72fa57cb4b7e8ce6fab22,},Annotations:map[string]string{io.kubernetes.container.hash: ea6512b3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a588da4f60d930351e119b0fdf936cd7f03ef1fb330a9a9bf5af2b478fed08ed,PodSandboxId:4e122a0c30b55dd8efc42e8deec973b33efe002
cbb3e4a44848039092ccfe7bd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1705373306632324492,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-873808,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59cfc9c0af1108127d65b437ff5c367bf61527e4379ee10d9dc747599551f0fb,PodSandboxId:41ecb04315f7c2318fffbcfcace0dcd4f0b46cfac1f55
6419ba8cfc167407a79,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1705373306478293034,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-873808,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2778eb9f038ed007254b179ee8a954a,},Annotations:map[string]string{io.kubernetes.container.hash: 5d5e5099,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9484c47348031b007cc06d638e27434fe72df9a290be6952bf1dc77ed7eede63,PodSandboxId:d92cd78814b137d235c8f944b9b11aee09964ae77d658900b3a
f8cbc9f5be70d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1705373306310643441,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-873808,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=e4901815-7cde-4317-bf4b-2828416b6637 name=/runtime.v1.RuntimeServic
e/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	4785c0b68835c       gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7            11 seconds ago      Running             hello-world-app           0                   caf1e7744a8e4       hello-world-app-5f5d8b66bb-dnv7g
	4f69e4441a726       docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686                    2 minutes ago       Running             nginx                     0                   d153845d50615       nginx
	e648e5936de4f       registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324   2 minutes ago       Exited              controller                0                   e75ebfda94432       ingress-nginx-controller-7fcf777cb7-hvvjf
	d3131909becfd       docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6     2 minutes ago       Exited              patch                     0                   ce665cd6b45ba       ingress-nginx-admission-patch-249cr
	b40bfa7e518c1       docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6     2 minutes ago       Exited              create                    0                   3528caf38c5a9       ingress-nginx-admission-create-x9hc5
	1a3048f8f4747       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                   3 minutes ago       Running             storage-provisioner       0                   8d8f5d854e2b9       storage-provisioner
	0bfd91393e3a6       67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5                                                   3 minutes ago       Running             coredns                   0                   e2f1fb5671ec3       coredns-66bff467f8-hwqvx
	d9daaa3784280       27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba                                                   3 minutes ago       Running             kube-proxy                0                   ab75c1392e06a       kube-proxy-mvnvh
	3bda79e72af77       303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f                                                   3 minutes ago       Running             etcd                      0                   13d6eb1849416       etcd-ingress-addon-legacy-873808
	a588da4f60d93       a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346                                                   3 minutes ago       Running             kube-scheduler            0                   4e122a0c30b55       kube-scheduler-ingress-addon-legacy-873808
	59cfc9c0af110       7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1                                                   3 minutes ago       Running             kube-apiserver            0                   41ecb04315f7c       kube-apiserver-ingress-addon-legacy-873808
	9484c47348031       e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290                                                   3 minutes ago       Running             kube-controller-manager   0                   d92cd78814b13       kube-controller-manager-ingress-addon-legacy-873808
	
	
	==> coredns [0bfd91393e3a691d4826d9e12499723bc15e506075a22a6de44f4aee4794d934] <==
	[INFO] 10.244.0.5:54746 - 12591 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000112389s
	[INFO] 10.244.0.5:60331 - 54581 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000112317s
	[INFO] 10.244.0.5:54746 - 13150 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000261018s
	[INFO] 10.244.0.5:54746 - 38159 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00008706s
	[INFO] 10.244.0.5:60331 - 39438 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000032321s
	[INFO] 10.244.0.5:60331 - 4329 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000079509s
	[INFO] 10.244.0.5:54746 - 5108 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000029957s
	[INFO] 10.244.0.5:60331 - 12437 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00007238s
	[INFO] 10.244.0.5:54746 - 43495 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000027464s
	[INFO] 10.244.0.5:60331 - 38517 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000100869s
	[INFO] 10.244.0.5:54746 - 47127 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000056084s
	[INFO] 10.244.0.5:52139 - 58398 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000142313s
	[INFO] 10.244.0.5:37169 - 2384 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000043304s
	[INFO] 10.244.0.5:52139 - 8554 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.00008001s
	[INFO] 10.244.0.5:52139 - 34732 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000049703s
	[INFO] 10.244.0.5:37169 - 1765 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000022112s
	[INFO] 10.244.0.5:52139 - 30109 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000037342s
	[INFO] 10.244.0.5:37169 - 17416 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000018676s
	[INFO] 10.244.0.5:52139 - 17709 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000099076s
	[INFO] 10.244.0.5:37169 - 31208 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000034046s
	[INFO] 10.244.0.5:37169 - 43052 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000033002s
	[INFO] 10.244.0.5:52139 - 16942 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000018002s
	[INFO] 10.244.0.5:52139 - 65395 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000034868s
	[INFO] 10.244.0.5:37169 - 63827 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000018847s
	[INFO] 10.244.0.5:37169 - 303 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000072324s
	
	
	==> describe nodes <==
	Name:               ingress-addon-legacy-873808
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ingress-addon-legacy-873808
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6e8fa5f64d0e7272be43ff25ed3826261f0a2578
	                    minikube.k8s.io/name=ingress-addon-legacy-873808
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_16T02_48_35_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Jan 2024 02:48:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-873808
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Jan 2024 02:51:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Jan 2024 02:49:35 +0000   Tue, 16 Jan 2024 02:48:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Jan 2024 02:49:35 +0000   Tue, 16 Jan 2024 02:48:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Jan 2024 02:49:35 +0000   Tue, 16 Jan 2024 02:48:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Jan 2024 02:49:35 +0000   Tue, 16 Jan 2024 02:48:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.242
	  Hostname:    ingress-addon-legacy-873808
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             4012800Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             4012800Ki
	  pods:               110
	System Info:
	  Machine ID:                 b9ee87dcc81241a28d0b9aedc930c7af
	  System UUID:                b9ee87dc-c812-41a2-8d0b-9aedc930c7af
	  Boot ID:                    196ff06a-6a73-438f-aead-21cd45aa76ec
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5f5d8b66bb-dnv7g                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         15s
	  default                     nginx                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m36s
	  kube-system                 coredns-66bff467f8-hwqvx                               100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     3m15s
	  kube-system                 etcd-ingress-addon-legacy-873808                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m29s
	  kube-system                 kube-apiserver-ingress-addon-legacy-873808             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m29s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-873808    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m29s
	  kube-system                 kube-proxy-mvnvh                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m14s
	  kube-system                 kube-scheduler-ingress-addon-legacy-873808             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m29s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m13s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (32%)  0 (0%)
	  memory             70Mi (1%)   170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  NodeHasSufficientMemory  3m39s (x5 over 3m40s)  kubelet     Node ingress-addon-legacy-873808 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m39s (x5 over 3m40s)  kubelet     Node ingress-addon-legacy-873808 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m39s (x5 over 3m40s)  kubelet     Node ingress-addon-legacy-873808 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m29s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m29s                  kubelet     Node ingress-addon-legacy-873808 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m29s                  kubelet     Node ingress-addon-legacy-873808 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m29s                  kubelet     Node ingress-addon-legacy-873808 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m29s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                3m19s                  kubelet     Node ingress-addon-legacy-873808 status is now: NodeReady
	  Normal  Starting                 3m12s                  kube-proxy  Starting kube-proxy.
	
	
	==> dmesg <==
	[Jan16 02:47] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.096011] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.479742] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.462146] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.150654] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000001] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[Jan16 02:48] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.403911] systemd-fstab-generator[645]: Ignoring "noauto" for root device
	[  +0.126537] systemd-fstab-generator[656]: Ignoring "noauto" for root device
	[  +0.141880] systemd-fstab-generator[669]: Ignoring "noauto" for root device
	[  +0.116157] systemd-fstab-generator[680]: Ignoring "noauto" for root device
	[  +0.219081] systemd-fstab-generator[704]: Ignoring "noauto" for root device
	[  +7.687011] systemd-fstab-generator[1026]: Ignoring "noauto" for root device
	[  +2.945724] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[  +9.805307] systemd-fstab-generator[1419]: Ignoring "noauto" for root device
	[ +17.238503] kauditd_printk_skb: 6 callbacks suppressed
	[Jan16 02:49] kauditd_printk_skb: 11 callbacks suppressed
	[  +5.456103] kauditd_printk_skb: 6 callbacks suppressed
	[ +20.145531] kauditd_printk_skb: 7 callbacks suppressed
	[Jan16 02:51] kauditd_printk_skb: 5 callbacks suppressed
	[  +6.005640] kauditd_printk_skb: 1 callbacks suppressed
	
	
	==> etcd [3bda79e72af777eb3a141d27f7cee637b74b98cbd4a2ead1225f4f21f78e3c25] <==
	2024-01-16 02:48:28.039974 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	2024-01-16 02:48:28.043733 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2024-01-16 02:48:28.044891 I | embed: listening for metrics on http://127.0.0.1:2381
	2024-01-16 02:48:28.045070 I | embed: listening for peers on 192.168.39.242:2380
	2024-01-16 02:48:28.045114 I | etcdserver: 5245f38ecce3eccc as single-node; fast-forwarding 9 ticks (election ticks 10)
	raft2024/01/16 02:48:28 INFO: 5245f38ecce3eccc switched to configuration voters=(5928412279151520972)
	2024-01-16 02:48:28.045386 I | etcdserver/membership: added member 5245f38ecce3eccc [https://192.168.39.242:2380] to cluster 9dd55050173e419e
	raft2024/01/16 02:48:28 INFO: 5245f38ecce3eccc is starting a new election at term 1
	raft2024/01/16 02:48:28 INFO: 5245f38ecce3eccc became candidate at term 2
	raft2024/01/16 02:48:28 INFO: 5245f38ecce3eccc received MsgVoteResp from 5245f38ecce3eccc at term 2
	raft2024/01/16 02:48:28 INFO: 5245f38ecce3eccc became leader at term 2
	raft2024/01/16 02:48:28 INFO: raft.node: 5245f38ecce3eccc elected leader 5245f38ecce3eccc at term 2
	2024-01-16 02:48:28.625921 I | etcdserver: setting up the initial cluster version to 3.4
	2024-01-16 02:48:28.627577 N | etcdserver/membership: set the initial cluster version to 3.4
	2024-01-16 02:48:28.627670 I | etcdserver/api: enabled capabilities for version 3.4
	2024-01-16 02:48:28.627708 I | etcdserver: published {Name:ingress-addon-legacy-873808 ClientURLs:[https://192.168.39.242:2379]} to cluster 9dd55050173e419e
	2024-01-16 02:48:28.627846 I | embed: ready to serve client requests
	2024-01-16 02:48:28.628141 I | embed: ready to serve client requests
	2024-01-16 02:48:28.629471 I | embed: serving client requests on 127.0.0.1:2379
	2024-01-16 02:48:28.629702 I | embed: serving client requests on 192.168.39.242:2379
	2024-01-16 02:48:50.443821 W | etcdserver: request "header:<ID:17063168188790111317 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/endpointslices/kube-system/kube-dns-wphch\" mod_revision:329 > success:<request_put:<key:\"/registry/endpointslices/kube-system/kube-dns-wphch\" value_size:775 >> failure:<request_range:<key:\"/registry/endpointslices/kube-system/kube-dns-wphch\" > >>" with result "size:16" took too long (406.882351ms) to execute
	2024-01-16 02:48:50.444882 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/default/default\" " with result "range_response_count:0 size:5" took too long (292.32621ms) to execute
	2024-01-16 02:48:50.445402 W | etcdserver: read-only range request "key:\"/registry/minions/ingress-addon-legacy-873808\" " with result "range_response_count:1 size:6239" took too long (140.868756ms) to execute
	2024-01-16 02:49:13.525401 W | etcdserver: read-only range request "key:\"/registry/namespaces/default\" " with result "range_response_count:1 size:257" took too long (137.241181ms) to execute
	2024-01-16 02:49:37.451869 W | etcdserver: read-only range request "key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" " with result "range_response_count:1 size:2214" took too long (133.929608ms) to execute
	
	
	==> kernel <==
	 02:52:04 up 4 min,  0 users,  load average: 0.85, 0.54, 0.24
	Linux ingress-addon-legacy-873808 5.10.57 #1 SMP Thu Dec 28 22:04:21 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [59cfc9c0af1108127d65b437ff5c367bf61527e4379ee10d9dc747599551f0fb] <==
	I0116 02:48:49.493073       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I0116 02:48:49.926098       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I0116 02:48:50.446602       1 trace.go:116] Trace[126709432]: "GuaranteedUpdate etcd3" type:*discovery.EndpointSlice (started: 2024-01-16 02:48:49.913472061 +0000 UTC m=+23.218126650) (total time: 533.099534ms):
	Trace[126709432]: [533.083819ms] [526.422389ms] Transaction committed
	I0116 02:48:50.446730       1 trace.go:116] Trace[724028793]: "Update" url:/apis/discovery.k8s.io/v1beta1/namespaces/kube-system/endpointslices/kube-dns-wphch,user-agent:kube-controller-manager/v1.18.20 (linux/amd64) kubernetes/1f3e19b/system:serviceaccount:kube-system:endpointslice-controller,client:192.168.39.242 (started: 2024-01-16 02:48:49.912778089 +0000 UTC m=+23.217432667) (total time: 533.936435ms):
	Trace[724028793]: [533.893871ms] [533.437395ms] Object stored in database
	I0116 02:48:50.446612       1 trace.go:116] Trace[586590902]: "Create" url:/apis/apps/v1/namespaces/kube-system/controllerrevisions,user-agent:kube-controller-manager/v1.18.20 (linux/amd64) kubernetes/1f3e19b/system:serviceaccount:kube-system:daemon-set-controller,client:192.168.39.242 (started: 2024-01-16 02:48:49.92084292 +0000 UTC m=+23.225497498) (total time: 525.742081ms):
	Trace[586590902]: [525.698295ms] [521.421317ms] Object stored in database
	I0116 02:48:50.449097       1 trace.go:116] Trace[420171929]: "GuaranteedUpdate etcd3" type:*core.Node (started: 2024-01-16 02:48:49.923589158 +0000 UTC m=+23.228243760) (total time: 525.487226ms):
	Trace[420171929]: [525.302533ms] [522.825018ms] Transaction committed
	I0116 02:48:50.456975       1 trace.go:116] Trace[1626904539]: "Create" url:/api/v1/namespaces/default/events,user-agent:kube-controller-manager/v1.18.20 (linux/amd64) kubernetes/1f3e19b/system:serviceaccount:kube-system:node-controller,client:192.168.39.242 (started: 2024-01-16 02:48:49.923132031 +0000 UTC m=+23.227786615) (total time: 533.740769ms):
	Trace[1626904539]: [533.627394ms] [532.181421ms] Object stored in database
	I0116 02:48:50.460068       1 trace.go:116] Trace[866791855]: "Patch" url:/api/v1/nodes/ingress-addon-legacy-873808,user-agent:kube-controller-manager/v1.18.20 (linux/amd64) kubernetes/1f3e19b/system:serviceaccount:kube-system:ttl-controller,client:192.168.39.242 (started: 2024-01-16 02:48:49.917235292 +0000 UTC m=+23.221889884) (total time: 542.800279ms):
	Trace[866791855]: [531.96645ms] [523.55154ms] Object stored in database
	I0116 02:48:50.463319       1 trace.go:116] Trace[1553921047]: "GuaranteedUpdate etcd3" type:*core.Node (started: 2024-01-16 02:48:49.928461514 +0000 UTC m=+23.233116113) (total time: 534.836417ms):
	Trace[1553921047]: [520.905625ms] [518.67188ms] Transaction committed
	I0116 02:48:50.466095       1 trace.go:116] Trace[1643467497]: "Patch" url:/api/v1/nodes/ingress-addon-legacy-873808,user-agent:kube-controller-manager/v1.18.20 (linux/amd64) kubernetes/1f3e19b/system:serviceaccount:kube-system:node-controller,client:192.168.39.242 (started: 2024-01-16 02:48:49.928366725 +0000 UTC m=+23.233021307) (total time: 537.699043ms):
	Trace[1643467497]: [521.055568ms] [519.362743ms] About to apply patch
	I0116 02:48:50.475013       1 trace.go:116] Trace[1569058581]: "GuaranteedUpdate etcd3" type:*core.Node (started: 2024-01-16 02:48:49.965136926 +0000 UTC m=+23.269791530) (total time: 509.858306ms):
	Trace[1569058581]: [485.733135ms] [479.760629ms] Transaction committed
	I0116 02:48:50.477087       1 trace.go:116] Trace[1259438379]: "Patch" url:/api/v1/nodes/ingress-addon-legacy-873808,user-agent:kube-controller-manager/v1.18.20 (linux/amd64) kubernetes/1f3e19b/system:serviceaccount:kube-system:node-controller,client:192.168.39.242 (started: 2024-01-16 02:48:49.9645935 +0000 UTC m=+23.269248103) (total time: 512.465786ms):
	Trace[1259438379]: [486.327526ms] [480.126227ms] About to apply patch
	I0116 02:49:04.850302       1 controller.go:609] quota admission added evaluator for: jobs.batch
	I0116 02:49:28.078571       1 controller.go:609] quota admission added evaluator for: ingresses.networking.k8s.io
	E0116 02:51:58.358101       1 authentication.go:53] Unable to authenticate the request due to an error: [invalid bearer token, Token has been invalidated]
	
	
	==> kube-controller-manager [9484c47348031b007cc06d638e27434fe72df9a290be6952bf1dc77ed7eede63] <==
	I0116 02:48:49.935061       1 shared_informer.go:230] Caches are synced for HPA 
	I0116 02:48:49.937584       1 shared_informer.go:230] Caches are synced for GC 
	I0116 02:48:49.940495       1 shared_informer.go:230] Caches are synced for resource quota 
	I0116 02:48:49.962311       1 shared_informer.go:230] Caches are synced for node 
	I0116 02:48:49.962350       1 range_allocator.go:172] Starting range CIDR allocator
	I0116 02:48:49.962355       1 shared_informer.go:223] Waiting for caches to sync for cidrallocator
	I0116 02:48:49.962359       1 shared_informer.go:230] Caches are synced for cidrallocator 
	I0116 02:48:49.971332       1 shared_informer.go:230] Caches are synced for resource quota 
	I0116 02:48:49.980067       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0116 02:48:50.024338       1 shared_informer.go:230] Caches are synced for namespace 
	I0116 02:48:50.050259       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0116 02:48:50.050319       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0116 02:48:50.065453       1 shared_informer.go:230] Caches are synced for service account 
	I0116 02:48:50.463928       1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"de7efba5-be21-40d0-9b8f-aaa10ac171e1", APIVersion:"apps/v1", ResourceVersion:"212", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-mvnvh
	I0116 02:48:50.486510       1 range_allocator.go:373] Set node ingress-addon-legacy-873808 PodCIDR to [10.244.0.0/24]
	I0116 02:48:50.653233       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"de231f00-395e-42fe-9cd6-94699e82dab7", APIVersion:"apps/v1", ResourceVersion:"355", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set coredns-66bff467f8 to 1
	I0116 02:48:50.701779       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"4ecd6bd8-b502-4177-9936-90dc4680fbe7", APIVersion:"apps/v1", ResourceVersion:"356", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: coredns-66bff467f8-488jf
	I0116 02:49:04.834802       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"45d03425-f57e-421c-a280-7901a39e3b6f", APIVersion:"apps/v1", ResourceVersion:"447", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I0116 02:49:04.855009       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"ccdade59-81f5-419a-9838-98aba1ea74f5", APIVersion:"apps/v1", ResourceVersion:"448", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-hvvjf
	I0116 02:49:04.934235       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"07cac91f-edbd-4e28-a83a-a9b267221cb9", APIVersion:"batch/v1", ResourceVersion:"452", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-x9hc5
	I0116 02:49:05.008591       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"35e5c80e-1687-497c-836a-8692ec72f729", APIVersion:"batch/v1", ResourceVersion:"467", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-249cr
	I0116 02:49:07.712826       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"35e5c80e-1687-497c-836a-8692ec72f729", APIVersion:"batch/v1", ResourceVersion:"475", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0116 02:49:07.739978       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"07cac91f-edbd-4e28-a83a-a9b267221cb9", APIVersion:"batch/v1", ResourceVersion:"463", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0116 02:51:49.827108       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-world-app", UID:"f7649581-667d-4e3c-9874-2f8d1b63d2e1", APIVersion:"apps/v1", ResourceVersion:"665", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hello-world-app-5f5d8b66bb to 1
	I0116 02:51:49.842874       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-world-app-5f5d8b66bb", UID:"aa6ba4c2-7d61-4db0-b582-2ad97478238e", APIVersion:"apps/v1", ResourceVersion:"666", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-world-app-5f5d8b66bb-dnv7g
	
	
	==> kube-proxy [d9daaa37842805ade4aa44595ed72a95928a617d3984d8d059aa77ea8d43cabb] <==
	W0116 02:48:52.158606       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I0116 02:48:52.189101       1 node.go:136] Successfully retrieved node IP: 192.168.39.242
	I0116 02:48:52.189318       1 server_others.go:186] Using iptables Proxier.
	I0116 02:48:52.192000       1 server.go:583] Version: v1.18.20
	I0116 02:48:52.194242       1 config.go:315] Starting service config controller
	I0116 02:48:52.194300       1 shared_informer.go:223] Waiting for caches to sync for service config
	I0116 02:48:52.194854       1 config.go:133] Starting endpoints config controller
	I0116 02:48:52.194905       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I0116 02:48:52.296459       1 shared_informer.go:230] Caches are synced for endpoints config 
	I0116 02:48:52.302377       1 shared_informer.go:230] Caches are synced for service config 
	
	
	==> kube-scheduler [a588da4f60d930351e119b0fdf936cd7f03ef1fb330a9a9bf5af2b478fed08ed] <==
	I0116 02:48:31.767897       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I0116 02:48:31.769588       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I0116 02:48:31.771379       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	I0116 02:48:31.774317       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0116 02:48:31.780646       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0116 02:48:31.781381       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0116 02:48:31.782305       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0116 02:48:31.782378       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0116 02:48:31.782423       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0116 02:48:31.782473       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0116 02:48:31.782525       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0116 02:48:31.782574       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0116 02:48:31.782620       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0116 02:48:31.782667       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0116 02:48:31.782712       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0116 02:48:31.782760       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0116 02:48:31.782881       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0116 02:48:32.664257       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0116 02:48:32.690820       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0116 02:48:32.874756       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0116 02:48:32.883758       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0116 02:48:32.930525       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0116 02:48:34.480996       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	E0116 02:48:49.552980       1 factory.go:503] pod: kube-system/coredns-66bff467f8-488jf is already present in the active queue
	E0116 02:48:49.604717       1 factory.go:503] pod: kube-system/coredns-66bff467f8-hwqvx is already present in the active queue
	
	
	==> kubelet <==
	-- Journal begins at Tue 2024-01-16 02:47:59 UTC, ends at Tue 2024-01-16 02:52:04 UTC. --
	Jan 16 02:49:08 ingress-addon-legacy-873808 kubelet[1426]: W0116 02:49:08.699528    1426 pod_container_deletor.go:77] Container "ce665cd6b45ba83fe2fdd2a49b7fc5863ce1920a8aa5a38573aa861ca7f31c99" not found in pod's containers
	Jan 16 02:49:08 ingress-addon-legacy-873808 kubelet[1426]: W0116 02:49:08.702378    1426 pod_container_deletor.go:77] Container "3528caf38c5a9f98010cb1093a4890fe663306c550249fe72252cc1da50ce57e" not found in pod's containers
	Jan 16 02:49:09 ingress-addon-legacy-873808 kubelet[1426]: I0116 02:49:09.898828    1426 reconciler.go:196] operationExecutor.UnmountVolume started for volume "ingress-nginx-admission-token-wqsz8" (UniqueName: "kubernetes.io/secret/ce1407d2-4586-4a61-88bd-65a4deabf7a9-ingress-nginx-admission-token-wqsz8") pod "ce1407d2-4586-4a61-88bd-65a4deabf7a9" (UID: "ce1407d2-4586-4a61-88bd-65a4deabf7a9")
	Jan 16 02:49:09 ingress-addon-legacy-873808 kubelet[1426]: I0116 02:49:09.917937    1426 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce1407d2-4586-4a61-88bd-65a4deabf7a9-ingress-nginx-admission-token-wqsz8" (OuterVolumeSpecName: "ingress-nginx-admission-token-wqsz8") pod "ce1407d2-4586-4a61-88bd-65a4deabf7a9" (UID: "ce1407d2-4586-4a61-88bd-65a4deabf7a9"). InnerVolumeSpecName "ingress-nginx-admission-token-wqsz8". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jan 16 02:49:09 ingress-addon-legacy-873808 kubelet[1426]: I0116 02:49:09.999212    1426 reconciler.go:319] Volume detached for volume "ingress-nginx-admission-token-wqsz8" (UniqueName: "kubernetes.io/secret/ce1407d2-4586-4a61-88bd-65a4deabf7a9-ingress-nginx-admission-token-wqsz8") on node "ingress-addon-legacy-873808" DevicePath ""
	Jan 16 02:49:18 ingress-addon-legacy-873808 kubelet[1426]: I0116 02:49:18.169365    1426 topology_manager.go:235] [topologymanager] Topology Admit Handler
	Jan 16 02:49:18 ingress-addon-legacy-873808 kubelet[1426]: I0116 02:49:18.332457    1426 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "minikube-ingress-dns-token-wg8jr" (UniqueName: "kubernetes.io/secret/1a062bae-f0c1-4d4d-93da-1c1124f32680-minikube-ingress-dns-token-wg8jr") pod "kube-ingress-dns-minikube" (UID: "1a062bae-f0c1-4d4d-93da-1c1124f32680")
	Jan 16 02:49:28 ingress-addon-legacy-873808 kubelet[1426]: I0116 02:49:28.275640    1426 topology_manager.go:235] [topologymanager] Topology Admit Handler
	Jan 16 02:49:28 ingress-addon-legacy-873808 kubelet[1426]: I0116 02:49:28.368575    1426 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-7zh75" (UniqueName: "kubernetes.io/secret/6170b55d-0a92-4ef9-8ac8-5b5318785c81-default-token-7zh75") pod "nginx" (UID: "6170b55d-0a92-4ef9-8ac8-5b5318785c81")
	Jan 16 02:51:49 ingress-addon-legacy-873808 kubelet[1426]: I0116 02:51:49.856782    1426 topology_manager.go:235] [topologymanager] Topology Admit Handler
	Jan 16 02:51:49 ingress-addon-legacy-873808 kubelet[1426]: I0116 02:51:49.982290    1426 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-7zh75" (UniqueName: "kubernetes.io/secret/e8e50dcc-5619-4e0f-8c16-e525902168ef-default-token-7zh75") pod "hello-world-app-5f5d8b66bb-dnv7g" (UID: "e8e50dcc-5619-4e0f-8c16-e525902168ef")
	Jan 16 02:51:51 ingress-addon-legacy-873808 kubelet[1426]: I0116 02:51:51.166357    1426 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 7440fb55aa4a963c308d49bedefee7a5339608c94d77e3b8169ddc5b944863ff
	Jan 16 02:51:52 ingress-addon-legacy-873808 kubelet[1426]: I0116 02:51:52.290403    1426 reconciler.go:196] operationExecutor.UnmountVolume started for volume "minikube-ingress-dns-token-wg8jr" (UniqueName: "kubernetes.io/secret/1a062bae-f0c1-4d4d-93da-1c1124f32680-minikube-ingress-dns-token-wg8jr") pod "1a062bae-f0c1-4d4d-93da-1c1124f32680" (UID: "1a062bae-f0c1-4d4d-93da-1c1124f32680")
	Jan 16 02:51:52 ingress-addon-legacy-873808 kubelet[1426]: I0116 02:51:52.303008    1426 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a062bae-f0c1-4d4d-93da-1c1124f32680-minikube-ingress-dns-token-wg8jr" (OuterVolumeSpecName: "minikube-ingress-dns-token-wg8jr") pod "1a062bae-f0c1-4d4d-93da-1c1124f32680" (UID: "1a062bae-f0c1-4d4d-93da-1c1124f32680"). InnerVolumeSpecName "minikube-ingress-dns-token-wg8jr". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jan 16 02:51:52 ingress-addon-legacy-873808 kubelet[1426]: I0116 02:51:52.390800    1426 reconciler.go:319] Volume detached for volume "minikube-ingress-dns-token-wg8jr" (UniqueName: "kubernetes.io/secret/1a062bae-f0c1-4d4d-93da-1c1124f32680-minikube-ingress-dns-token-wg8jr") on node "ingress-addon-legacy-873808" DevicePath ""
	Jan 16 02:51:56 ingress-addon-legacy-873808 kubelet[1426]: E0116 02:51:56.470281    1426 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-hvvjf.17aab42d8249d35a", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-hvvjf", UID:"5388ae77-8a82-4a73-b7d7-62af816e0395", APIVersion:"v1", ResourceVersion:"453", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stoppi
ng container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-873808"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc16199b31bbfdb5a, ext:201569432787, loc:(*time.Location)(0x701e5a0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc16199b31bbfdb5a, ext:201569432787, loc:(*time.Location)(0x701e5a0)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-hvvjf.17aab42d8249d35a" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Jan 16 02:51:56 ingress-addon-legacy-873808 kubelet[1426]: E0116 02:51:56.484203    1426 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-hvvjf.17aab42d8249d35a", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-hvvjf", UID:"5388ae77-8a82-4a73-b7d7-62af816e0395", APIVersion:"v1", ResourceVersion:"453", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stoppi
ng container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-873808"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc16199b31bbfdb5a, ext:201569432787, loc:(*time.Location)(0x701e5a0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc16199b31c6fbc1c, ext:201580959128, loc:(*time.Location)(0x701e5a0)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-hvvjf.17aab42d8249d35a" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Jan 16 02:51:59 ingress-addon-legacy-873808 kubelet[1426]: W0116 02:51:59.199415    1426 pod_container_deletor.go:77] Container "e75ebfda94432c06cf770243fcd36f373da1720f0050183548c20fe84fc0b967" not found in pod's containers
	Jan 16 02:52:00 ingress-addon-legacy-873808 kubelet[1426]: I0116 02:52:00.521199    1426 reconciler.go:196] operationExecutor.UnmountVolume started for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/5388ae77-8a82-4a73-b7d7-62af816e0395-webhook-cert") pod "5388ae77-8a82-4a73-b7d7-62af816e0395" (UID: "5388ae77-8a82-4a73-b7d7-62af816e0395")
	Jan 16 02:52:00 ingress-addon-legacy-873808 kubelet[1426]: I0116 02:52:00.521273    1426 reconciler.go:196] operationExecutor.UnmountVolume started for volume "ingress-nginx-token-zsdl5" (UniqueName: "kubernetes.io/secret/5388ae77-8a82-4a73-b7d7-62af816e0395-ingress-nginx-token-zsdl5") pod "5388ae77-8a82-4a73-b7d7-62af816e0395" (UID: "5388ae77-8a82-4a73-b7d7-62af816e0395")
	Jan 16 02:52:00 ingress-addon-legacy-873808 kubelet[1426]: I0116 02:52:00.525370    1426 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5388ae77-8a82-4a73-b7d7-62af816e0395-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "5388ae77-8a82-4a73-b7d7-62af816e0395" (UID: "5388ae77-8a82-4a73-b7d7-62af816e0395"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jan 16 02:52:00 ingress-addon-legacy-873808 kubelet[1426]: I0116 02:52:00.526094    1426 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5388ae77-8a82-4a73-b7d7-62af816e0395-ingress-nginx-token-zsdl5" (OuterVolumeSpecName: "ingress-nginx-token-zsdl5") pod "5388ae77-8a82-4a73-b7d7-62af816e0395" (UID: "5388ae77-8a82-4a73-b7d7-62af816e0395"). InnerVolumeSpecName "ingress-nginx-token-zsdl5". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jan 16 02:52:00 ingress-addon-legacy-873808 kubelet[1426]: I0116 02:52:00.621655    1426 reconciler.go:319] Volume detached for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/5388ae77-8a82-4a73-b7d7-62af816e0395-webhook-cert") on node "ingress-addon-legacy-873808" DevicePath ""
	Jan 16 02:52:00 ingress-addon-legacy-873808 kubelet[1426]: I0116 02:52:00.621716    1426 reconciler.go:319] Volume detached for volume "ingress-nginx-token-zsdl5" (UniqueName: "kubernetes.io/secret/5388ae77-8a82-4a73-b7d7-62af816e0395-ingress-nginx-token-zsdl5") on node "ingress-addon-legacy-873808" DevicePath ""
	Jan 16 02:52:01 ingress-addon-legacy-873808 kubelet[1426]: W0116 02:52:01.537088    1426 kubelet_getters.go:297] Path "/var/lib/kubelet/pods/5388ae77-8a82-4a73-b7d7-62af816e0395/volumes" does not exist
	
	
	==> storage-provisioner [1a3048f8f47474d556463086cb9a7516f456656003e40898fa488cbb34e6cabb] <==
	I0116 02:48:52.836460       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0116 02:48:52.846692       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0116 02:48:52.848802       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0116 02:48:52.858676       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0116 02:48:52.858954       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-873808_62f762e3-7863-4028-9b3a-ce6597ec1c4a!
	I0116 02:48:52.860334       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"55970afe-9abd-4ef4-9481-6bea4c27657e", APIVersion:"v1", ResourceVersion:"398", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-873808_62f762e3-7863-4028-9b3a-ce6597ec1c4a became leader
	I0116 02:48:52.959123       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-873808_62f762e3-7863-4028-9b3a-ce6597ec1c4a!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ingress-addon-legacy-873808 -n ingress-addon-legacy-873808
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-873808 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (166.94s)

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (3.39s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:580: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-405494 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:588: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-405494 -- exec busybox-5bc68d56bd-pkhcp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-405494 -- exec busybox-5bc68d56bd-pkhcp -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:599: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-405494 -- exec busybox-5bc68d56bd-pkhcp -- sh -c "ping -c 1 192.168.39.1": exit status 1 (201.63328ms)

                                                
                                                
-- stdout --
	PING 192.168.39.1 (192.168.39.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:600: Failed to ping host (192.168.39.1) from pod (busybox-5bc68d56bd-pkhcp): exit status 1
multinode_test.go:588: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-405494 -- exec busybox-5bc68d56bd-r9bv6 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-405494 -- exec busybox-5bc68d56bd-r9bv6 -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:599: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-405494 -- exec busybox-5bc68d56bd-r9bv6 -- sh -c "ping -c 1 192.168.39.1": exit status 1 (199.221756ms)

                                                
                                                
-- stdout --
	PING 192.168.39.1 (192.168.39.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:600: Failed to ping host (192.168.39.1) from pod (busybox-5bc68d56bd-r9bv6): exit status 1
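Editor's note: the "ping: permission denied (are you root?)" errors above are characteristic of BusyBox ping, which opens a raw ICMP socket and therefore needs root, CAP_NET_RAW, or a kernel that allows unprivileged ICMP for the pod's group via net.ipv4.ping_group_range. The sketch below is a hedged diagnostic/remediation example, not part of the test run; the kubectl context name (assumed to match the profile multinode-405494), the container index 0, and the use of a JSON patch are illustrative assumptions, while the busybox deployment and pod names come from the log above.

    # Hedged sketch, not part of the test: inspect why ping is denied and one possible remedy.
    # 1) Check the pod's UID/GID and which group range may use unprivileged ICMP sockets
    #    (net.ipv4.ping_group_range is per network namespace, so read it inside the pod).
    kubectl --context multinode-405494 exec busybox-5bc68d56bd-pkhcp -- sh -c \
      'id; cat /proc/sys/net/ipv4/ping_group_range'
    # 2) One option (assumption): grant CAP_NET_RAW to the busybox container so the
    #    raw-socket ping used by BusyBox is permitted without root.
    kubectl --context multinode-405494 patch deployment busybox --type=json -p='[
      {"op":"add",
       "path":"/spec/template/spec/containers/0/securityContext",
       "value":{"capabilities":{"add":["NET_RAW"]}}}]'

If the patch is applied, the deployment rolls out new pods and the same "ping -c 1 192.168.39.1" exec would be expected to succeed; this is a design choice between widening pod capabilities and relaxing the node's ping_group_range sysctl.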
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-405494 -n multinode-405494
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-405494 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-405494 logs -n 25: (1.368838983s)
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | mount-start-2-560349 ssh -- ls                    | mount-start-2-560349 | jenkins | v1.32.0 | 16 Jan 24 02:56 UTC | 16 Jan 24 02:56 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| ssh     | mount-start-2-560349 ssh --                       | mount-start-2-560349 | jenkins | v1.32.0 | 16 Jan 24 02:56 UTC | 16 Jan 24 02:56 UTC |
	|         | mount | grep 9p                                   |                      |         |         |                     |                     |
	| stop    | -p mount-start-2-560349                           | mount-start-2-560349 | jenkins | v1.32.0 | 16 Jan 24 02:56 UTC | 16 Jan 24 02:56 UTC |
	| start   | -p mount-start-2-560349                           | mount-start-2-560349 | jenkins | v1.32.0 | 16 Jan 24 02:56 UTC | 16 Jan 24 02:56 UTC |
	| mount   | /home/jenkins:/minikube-host                      | mount-start-2-560349 | jenkins | v1.32.0 | 16 Jan 24 02:56 UTC |                     |
	|         | --profile mount-start-2-560349                    |                      |         |         |                     |                     |
	|         | --v 0 --9p-version 9p2000.L                       |                      |         |         |                     |                     |
	|         | --gid 0 --ip  --msize 6543                        |                      |         |         |                     |                     |
	|         | --port 46465 --type 9p --uid 0                    |                      |         |         |                     |                     |
	| ssh     | mount-start-2-560349 ssh -- ls                    | mount-start-2-560349 | jenkins | v1.32.0 | 16 Jan 24 02:56 UTC | 16 Jan 24 02:56 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| ssh     | mount-start-2-560349 ssh --                       | mount-start-2-560349 | jenkins | v1.32.0 | 16 Jan 24 02:56 UTC | 16 Jan 24 02:56 UTC |
	|         | mount | grep 9p                                   |                      |         |         |                     |                     |
	| delete  | -p mount-start-2-560349                           | mount-start-2-560349 | jenkins | v1.32.0 | 16 Jan 24 02:56 UTC | 16 Jan 24 02:56 UTC |
	| delete  | -p mount-start-1-538527                           | mount-start-1-538527 | jenkins | v1.32.0 | 16 Jan 24 02:56 UTC | 16 Jan 24 02:56 UTC |
	| start   | -p multinode-405494                               | multinode-405494     | jenkins | v1.32.0 | 16 Jan 24 02:56 UTC | 16 Jan 24 02:58 UTC |
	|         | --wait=true --memory=2200                         |                      |         |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr                                 |                      |         |         |                     |                     |
	|         | --driver=kvm2                                     |                      |         |         |                     |                     |
	|         | --container-runtime=crio                          |                      |         |         |                     |                     |
	| kubectl | -p multinode-405494 -- apply -f                   | multinode-405494     | jenkins | v1.32.0 | 16 Jan 24 02:58 UTC | 16 Jan 24 02:58 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |         |         |                     |                     |
	| kubectl | -p multinode-405494 -- rollout                    | multinode-405494     | jenkins | v1.32.0 | 16 Jan 24 02:58 UTC | 16 Jan 24 02:58 UTC |
	|         | status deployment/busybox                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-405494 -- get pods -o                | multinode-405494     | jenkins | v1.32.0 | 16 Jan 24 02:58 UTC | 16 Jan 24 02:58 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-405494 -- get pods -o                | multinode-405494     | jenkins | v1.32.0 | 16 Jan 24 02:58 UTC | 16 Jan 24 02:58 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-405494 -- exec                       | multinode-405494     | jenkins | v1.32.0 | 16 Jan 24 02:58 UTC | 16 Jan 24 02:58 UTC |
	|         | busybox-5bc68d56bd-pkhcp --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-405494 -- exec                       | multinode-405494     | jenkins | v1.32.0 | 16 Jan 24 02:58 UTC | 16 Jan 24 02:58 UTC |
	|         | busybox-5bc68d56bd-r9bv6 --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-405494 -- exec                       | multinode-405494     | jenkins | v1.32.0 | 16 Jan 24 02:58 UTC | 16 Jan 24 02:58 UTC |
	|         | busybox-5bc68d56bd-pkhcp --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-405494 -- exec                       | multinode-405494     | jenkins | v1.32.0 | 16 Jan 24 02:58 UTC | 16 Jan 24 02:58 UTC |
	|         | busybox-5bc68d56bd-r9bv6 --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-405494 -- exec                       | multinode-405494     | jenkins | v1.32.0 | 16 Jan 24 02:58 UTC | 16 Jan 24 02:58 UTC |
	|         | busybox-5bc68d56bd-pkhcp -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-405494 -- exec                       | multinode-405494     | jenkins | v1.32.0 | 16 Jan 24 02:58 UTC | 16 Jan 24 02:58 UTC |
	|         | busybox-5bc68d56bd-r9bv6 -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-405494 -- get pods -o                | multinode-405494     | jenkins | v1.32.0 | 16 Jan 24 02:58 UTC | 16 Jan 24 02:58 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-405494 -- exec                       | multinode-405494     | jenkins | v1.32.0 | 16 Jan 24 02:58 UTC | 16 Jan 24 02:58 UTC |
	|         | busybox-5bc68d56bd-pkhcp                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-405494 -- exec                       | multinode-405494     | jenkins | v1.32.0 | 16 Jan 24 02:58 UTC |                     |
	|         | busybox-5bc68d56bd-pkhcp -- sh                    |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.39.1                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-405494 -- exec                       | multinode-405494     | jenkins | v1.32.0 | 16 Jan 24 02:58 UTC | 16 Jan 24 02:58 UTC |
	|         | busybox-5bc68d56bd-r9bv6                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-405494 -- exec                       | multinode-405494     | jenkins | v1.32.0 | 16 Jan 24 02:58 UTC |                     |
	|         | busybox-5bc68d56bd-r9bv6 -- sh                    |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.39.1                         |                      |         |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/16 02:56:26
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0116 02:56:26.246438  487926 out.go:296] Setting OutFile to fd 1 ...
	I0116 02:56:26.246598  487926 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 02:56:26.246609  487926 out.go:309] Setting ErrFile to fd 2...
	I0116 02:56:26.246617  487926 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 02:56:26.246836  487926 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17965-468241/.minikube/bin
	I0116 02:56:26.247471  487926 out.go:303] Setting JSON to false
	I0116 02:56:26.248560  487926 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":13138,"bootTime":1705360648,"procs":192,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1048-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0116 02:56:26.248637  487926 start.go:138] virtualization: kvm guest
	I0116 02:56:26.251078  487926 out.go:177] * [multinode-405494] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0116 02:56:26.252497  487926 out.go:177]   - MINIKUBE_LOCATION=17965
	I0116 02:56:26.252525  487926 notify.go:220] Checking for updates...
	I0116 02:56:26.253725  487926 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0116 02:56:26.255235  487926 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17965-468241/kubeconfig
	I0116 02:56:26.256637  487926 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17965-468241/.minikube
	I0116 02:56:26.258064  487926 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0116 02:56:26.259375  487926 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0116 02:56:26.260853  487926 driver.go:392] Setting default libvirt URI to qemu:///system
	I0116 02:56:26.297499  487926 out.go:177] * Using the kvm2 driver based on user configuration
	I0116 02:56:26.299064  487926 start.go:298] selected driver: kvm2
	I0116 02:56:26.299086  487926 start.go:902] validating driver "kvm2" against <nil>
	I0116 02:56:26.299099  487926 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0116 02:56:26.299873  487926 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0116 02:56:26.299954  487926 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17965-468241/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0116 02:56:26.315559  487926 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0116 02:56:26.315620  487926 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0116 02:56:26.315833  487926 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0116 02:56:26.315898  487926 cni.go:84] Creating CNI manager for ""
	I0116 02:56:26.315910  487926 cni.go:136] 0 nodes found, recommending kindnet
	I0116 02:56:26.315920  487926 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I0116 02:56:26.315929  487926 start_flags.go:321] config:
	{Name:multinode-405494 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-405494 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio
CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 02:56:26.316088  487926 iso.go:125] acquiring lock: {Name:mk2f3231f0eeeb23816ac363851489181398d1c6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0116 02:56:26.318265  487926 out.go:177] * Starting control plane node multinode-405494 in cluster multinode-405494
	I0116 02:56:26.319914  487926 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0116 02:56:26.319969  487926 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17965-468241/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0116 02:56:26.319979  487926 cache.go:56] Caching tarball of preloaded images
	I0116 02:56:26.320117  487926 preload.go:174] Found /home/jenkins/minikube-integration/17965-468241/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0116 02:56:26.320131  487926 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0116 02:56:26.320490  487926 profile.go:148] Saving config to /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/multinode-405494/config.json ...
	I0116 02:56:26.320518  487926 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/multinode-405494/config.json: {Name:mk8ab6fb8e782ce2fcab90503de07447e5269de0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:56:26.320676  487926 start.go:365] acquiring machines lock for multinode-405494: {Name:mk901e5fae8c1a578d989b520053c709bdbdcf06 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0116 02:56:26.320704  487926 start.go:369] acquired machines lock for "multinode-405494" in 15.361µs
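The two steps just logged, writing the generated profile config to config.json under a write lock and then taking a cluster-wide machines lock before provisioning, follow a common persist-then-lock pattern. A minimal function-level sketch of that pattern in Go; the helper names, the lock-file mechanism and the simplified config are assumptions for illustration, not minikube's actual implementation:

    package main

    import (
        "encoding/json"
        "fmt"
        "os"
        "path/filepath"
        "time"
    )

    // saveProfileConfig writes cfg as JSON to <profileDir>/config.json, the same
    // file path shape the log shows for the multinode-405494 profile.
    func saveProfileConfig(profileDir string, cfg any) error {
        if err := os.MkdirAll(profileDir, 0o755); err != nil {
            return err
        }
        data, err := json.MarshalIndent(cfg, "", "  ")
        if err != nil {
            return err
        }
        return os.WriteFile(filepath.Join(profileDir, "config.json"), data, 0o644)
    }

    // acquireLockFile takes an exclusive lock file, retrying every 500ms (the
    // Delay value logged above) until the timeout expires.
    func acquireLockFile(path string, timeout time.Duration) (func(), error) {
        deadline := time.Now().Add(timeout)
        for {
            f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
            if err == nil {
                return func() { f.Close(); os.Remove(path) }, nil
            }
            if time.Now().After(deadline) {
                return nil, fmt.Errorf("timed out waiting for lock %s", path)
            }
            time.Sleep(500 * time.Millisecond)
        }
    }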
	I0116 02:56:26.320721  487926 start.go:93] Provisioning new machine with config: &{Name:multinode-405494 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-405494 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0116 02:56:26.320804  487926 start.go:125] createHost starting for "" (driver="kvm2")
	I0116 02:56:26.322582  487926 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0116 02:56:26.322733  487926 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 02:56:26.322783  487926 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:56:26.338660  487926 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39721
	I0116 02:56:26.339160  487926 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:56:26.339818  487926 main.go:141] libmachine: Using API Version  1
	I0116 02:56:26.339850  487926 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:56:26.340241  487926 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:56:26.340497  487926 main.go:141] libmachine: (multinode-405494) Calling .GetMachineName
	I0116 02:56:26.340666  487926 main.go:141] libmachine: (multinode-405494) Calling .DriverName
	I0116 02:56:26.340821  487926 start.go:159] libmachine.API.Create for "multinode-405494" (driver="kvm2")
	I0116 02:56:26.340858  487926 client.go:168] LocalClient.Create starting
	I0116 02:56:26.340890  487926 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem
	I0116 02:56:26.340934  487926 main.go:141] libmachine: Decoding PEM data...
	I0116 02:56:26.340952  487926 main.go:141] libmachine: Parsing certificate...
	I0116 02:56:26.341011  487926 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17965-468241/.minikube/certs/cert.pem
	I0116 02:56:26.341028  487926 main.go:141] libmachine: Decoding PEM data...
	I0116 02:56:26.341042  487926 main.go:141] libmachine: Parsing certificate...
	I0116 02:56:26.341060  487926 main.go:141] libmachine: Running pre-create checks...
	I0116 02:56:26.341078  487926 main.go:141] libmachine: (multinode-405494) Calling .PreCreateCheck
	I0116 02:56:26.341521  487926 main.go:141] libmachine: (multinode-405494) Calling .GetConfigRaw
	I0116 02:56:26.341938  487926 main.go:141] libmachine: Creating machine...
	I0116 02:56:26.341953  487926 main.go:141] libmachine: (multinode-405494) Calling .Create
	I0116 02:56:26.342077  487926 main.go:141] libmachine: (multinode-405494) Creating KVM machine...
	I0116 02:56:26.343560  487926 main.go:141] libmachine: (multinode-405494) DBG | found existing default KVM network
	I0116 02:56:26.344474  487926 main.go:141] libmachine: (multinode-405494) DBG | I0116 02:56:26.344321  487948 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000147a60}
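The subnet line above reports, for 192.168.39.0/24, a gateway of .1, a client range of .2 through .254 and a broadcast of .255. A small Go sketch (assumed helper name, valid only for a /24 like this one) that derives those values from the CIDR:

    package main

    import (
        "fmt"
        "net"
    )

    // describeSubnet prints the gateway, client range and broadcast for a /24
    // CIDR, matching the fields in the network.go line above. Sketch only.
    func describeSubnet(cidr string) error {
        ip, ipnet, err := net.ParseCIDR(cidr)
        if err != nil {
            return err
        }
        base := ip.Mask(ipnet.Mask).To4()
        gateway := net.IPv4(base[0], base[1], base[2], 1)
        clientMin := net.IPv4(base[0], base[1], base[2], 2)
        clientMax := net.IPv4(base[0], base[1], base[2], 254)
        broadcast := net.IPv4(base[0], base[1], base[2], 255)
        fmt.Println(cidr, gateway, clientMin, clientMax, broadcast)
        return nil
    }

    func main() { _ = describeSubnet("192.168.39.0/24") }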
	I0116 02:56:26.350317  487926 main.go:141] libmachine: (multinode-405494) DBG | trying to create private KVM network mk-multinode-405494 192.168.39.0/24...
	I0116 02:56:26.425082  487926 main.go:141] libmachine: (multinode-405494) DBG | private KVM network mk-multinode-405494 192.168.39.0/24 created
	I0116 02:56:26.425140  487926 main.go:141] libmachine: (multinode-405494) DBG | I0116 02:56:26.425062  487948 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17965-468241/.minikube
	I0116 02:56:26.425156  487926 main.go:141] libmachine: (multinode-405494) Setting up store path in /home/jenkins/minikube-integration/17965-468241/.minikube/machines/multinode-405494 ...
	I0116 02:56:26.425172  487926 main.go:141] libmachine: (multinode-405494) Building disk image from file:///home/jenkins/minikube-integration/17965-468241/.minikube/cache/iso/amd64/minikube-v1.32.1-1703784139-17866-amd64.iso
	I0116 02:56:26.425195  487926 main.go:141] libmachine: (multinode-405494) Downloading /home/jenkins/minikube-integration/17965-468241/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17965-468241/.minikube/cache/iso/amd64/minikube-v1.32.1-1703784139-17866-amd64.iso...
	I0116 02:56:26.656712  487926 main.go:141] libmachine: (multinode-405494) DBG | I0116 02:56:26.656584  487948 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17965-468241/.minikube/machines/multinode-405494/id_rsa...
	I0116 02:56:26.844695  487926 main.go:141] libmachine: (multinode-405494) DBG | I0116 02:56:26.844532  487948 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17965-468241/.minikube/machines/multinode-405494/multinode-405494.rawdisk...
	I0116 02:56:26.844734  487926 main.go:141] libmachine: (multinode-405494) DBG | Writing magic tar header
	I0116 02:56:26.844769  487926 main.go:141] libmachine: (multinode-405494) DBG | Writing SSH key tar header
	I0116 02:56:26.844795  487926 main.go:141] libmachine: (multinode-405494) DBG | I0116 02:56:26.844665  487948 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17965-468241/.minikube/machines/multinode-405494 ...
	I0116 02:56:26.844886  487926 main.go:141] libmachine: (multinode-405494) Setting executable bit set on /home/jenkins/minikube-integration/17965-468241/.minikube/machines/multinode-405494 (perms=drwx------)
	I0116 02:56:26.844929  487926 main.go:141] libmachine: (multinode-405494) Setting executable bit set on /home/jenkins/minikube-integration/17965-468241/.minikube/machines (perms=drwxr-xr-x)
	I0116 02:56:26.844945  487926 main.go:141] libmachine: (multinode-405494) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17965-468241/.minikube/machines/multinode-405494
	I0116 02:56:26.844969  487926 main.go:141] libmachine: (multinode-405494) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17965-468241/.minikube/machines
	I0116 02:56:26.844989  487926 main.go:141] libmachine: (multinode-405494) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17965-468241/.minikube
	I0116 02:56:26.845007  487926 main.go:141] libmachine: (multinode-405494) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17965-468241
	I0116 02:56:26.845024  487926 main.go:141] libmachine: (multinode-405494) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0116 02:56:26.845037  487926 main.go:141] libmachine: (multinode-405494) Setting executable bit set on /home/jenkins/minikube-integration/17965-468241/.minikube (perms=drwxr-xr-x)
	I0116 02:56:26.845050  487926 main.go:141] libmachine: (multinode-405494) Setting executable bit set on /home/jenkins/minikube-integration/17965-468241 (perms=drwxrwxr-x)
	I0116 02:56:26.845061  487926 main.go:141] libmachine: (multinode-405494) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0116 02:56:26.845076  487926 main.go:141] libmachine: (multinode-405494) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0116 02:56:26.845087  487926 main.go:141] libmachine: (multinode-405494) Creating domain...
	I0116 02:56:26.845104  487926 main.go:141] libmachine: (multinode-405494) DBG | Checking permissions on dir: /home/jenkins
	I0116 02:56:26.845124  487926 main.go:141] libmachine: (multinode-405494) DBG | Checking permissions on dir: /home
	I0116 02:56:26.845141  487926 main.go:141] libmachine: (multinode-405494) DBG | Skipping /home - not owner
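The "Writing magic tar header" and "Writing SSH key tar header" lines above suggest the freshly created rawdisk is seeded with a small tar stream carrying the generated id_rsa, for the guest to unpack on first boot. A rough Go sketch under that assumption; the actual on-disk layout is not shown in this log, so treat the function and its name as illustrative only:

    package main

    import (
        "archive/tar"
        "os"
    )

    // seedRawDisk creates a sparse raw disk of the requested size and writes a
    // tar entry containing the SSH key at the start of it. Layout is assumed.
    func seedRawDisk(path string, sizeBytes int64, keyName string, key []byte) error {
        f, err := os.OpenFile(path, os.O_CREATE|os.O_WRONLY, 0o600)
        if err != nil {
            return err
        }
        defer f.Close()
        if err := f.Truncate(sizeBytes); err != nil { // e.g. 20000 MB, per the config above
            return err
        }
        tw := tar.NewWriter(f)
        if err := tw.WriteHeader(&tar.Header{Name: keyName, Mode: 0o600, Size: int64(len(key))}); err != nil {
            return err
        }
        if _, err := tw.Write(key); err != nil {
            return err
        }
        return tw.Close()
    }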
	I0116 02:56:26.846173  487926 main.go:141] libmachine: (multinode-405494) define libvirt domain using xml: 
	I0116 02:56:26.846198  487926 main.go:141] libmachine: (multinode-405494) <domain type='kvm'>
	I0116 02:56:26.846207  487926 main.go:141] libmachine: (multinode-405494)   <name>multinode-405494</name>
	I0116 02:56:26.846214  487926 main.go:141] libmachine: (multinode-405494)   <memory unit='MiB'>2200</memory>
	I0116 02:56:26.846224  487926 main.go:141] libmachine: (multinode-405494)   <vcpu>2</vcpu>
	I0116 02:56:26.846246  487926 main.go:141] libmachine: (multinode-405494)   <features>
	I0116 02:56:26.846254  487926 main.go:141] libmachine: (multinode-405494)     <acpi/>
	I0116 02:56:26.846263  487926 main.go:141] libmachine: (multinode-405494)     <apic/>
	I0116 02:56:26.846269  487926 main.go:141] libmachine: (multinode-405494)     <pae/>
	I0116 02:56:26.846283  487926 main.go:141] libmachine: (multinode-405494)     
	I0116 02:56:26.846291  487926 main.go:141] libmachine: (multinode-405494)   </features>
	I0116 02:56:26.846300  487926 main.go:141] libmachine: (multinode-405494)   <cpu mode='host-passthrough'>
	I0116 02:56:26.846335  487926 main.go:141] libmachine: (multinode-405494)   
	I0116 02:56:26.846370  487926 main.go:141] libmachine: (multinode-405494)   </cpu>
	I0116 02:56:26.846393  487926 main.go:141] libmachine: (multinode-405494)   <os>
	I0116 02:56:26.846427  487926 main.go:141] libmachine: (multinode-405494)     <type>hvm</type>
	I0116 02:56:26.846439  487926 main.go:141] libmachine: (multinode-405494)     <boot dev='cdrom'/>
	I0116 02:56:26.846446  487926 main.go:141] libmachine: (multinode-405494)     <boot dev='hd'/>
	I0116 02:56:26.846456  487926 main.go:141] libmachine: (multinode-405494)     <bootmenu enable='no'/>
	I0116 02:56:26.846463  487926 main.go:141] libmachine: (multinode-405494)   </os>
	I0116 02:56:26.846474  487926 main.go:141] libmachine: (multinode-405494)   <devices>
	I0116 02:56:26.846488  487926 main.go:141] libmachine: (multinode-405494)     <disk type='file' device='cdrom'>
	I0116 02:56:26.846505  487926 main.go:141] libmachine: (multinode-405494)       <source file='/home/jenkins/minikube-integration/17965-468241/.minikube/machines/multinode-405494/boot2docker.iso'/>
	I0116 02:56:26.846520  487926 main.go:141] libmachine: (multinode-405494)       <target dev='hdc' bus='scsi'/>
	I0116 02:56:26.846528  487926 main.go:141] libmachine: (multinode-405494)       <readonly/>
	I0116 02:56:26.846534  487926 main.go:141] libmachine: (multinode-405494)     </disk>
	I0116 02:56:26.846542  487926 main.go:141] libmachine: (multinode-405494)     <disk type='file' device='disk'>
	I0116 02:56:26.846554  487926 main.go:141] libmachine: (multinode-405494)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0116 02:56:26.846565  487926 main.go:141] libmachine: (multinode-405494)       <source file='/home/jenkins/minikube-integration/17965-468241/.minikube/machines/multinode-405494/multinode-405494.rawdisk'/>
	I0116 02:56:26.846574  487926 main.go:141] libmachine: (multinode-405494)       <target dev='hda' bus='virtio'/>
	I0116 02:56:26.846584  487926 main.go:141] libmachine: (multinode-405494)     </disk>
	I0116 02:56:26.846592  487926 main.go:141] libmachine: (multinode-405494)     <interface type='network'>
	I0116 02:56:26.846598  487926 main.go:141] libmachine: (multinode-405494)       <source network='mk-multinode-405494'/>
	I0116 02:56:26.846617  487926 main.go:141] libmachine: (multinode-405494)       <model type='virtio'/>
	I0116 02:56:26.846632  487926 main.go:141] libmachine: (multinode-405494)     </interface>
	I0116 02:56:26.846644  487926 main.go:141] libmachine: (multinode-405494)     <interface type='network'>
	I0116 02:56:26.846658  487926 main.go:141] libmachine: (multinode-405494)       <source network='default'/>
	I0116 02:56:26.846666  487926 main.go:141] libmachine: (multinode-405494)       <model type='virtio'/>
	I0116 02:56:26.846679  487926 main.go:141] libmachine: (multinode-405494)     </interface>
	I0116 02:56:26.846697  487926 main.go:141] libmachine: (multinode-405494)     <serial type='pty'>
	I0116 02:56:26.846710  487926 main.go:141] libmachine: (multinode-405494)       <target port='0'/>
	I0116 02:56:26.846725  487926 main.go:141] libmachine: (multinode-405494)     </serial>
	I0116 02:56:26.846747  487926 main.go:141] libmachine: (multinode-405494)     <console type='pty'>
	I0116 02:56:26.846767  487926 main.go:141] libmachine: (multinode-405494)       <target type='serial' port='0'/>
	I0116 02:56:26.846789  487926 main.go:141] libmachine: (multinode-405494)     </console>
	I0116 02:56:26.846802  487926 main.go:141] libmachine: (multinode-405494)     <rng model='virtio'>
	I0116 02:56:26.846817  487926 main.go:141] libmachine: (multinode-405494)       <backend model='random'>/dev/random</backend>
	I0116 02:56:26.846830  487926 main.go:141] libmachine: (multinode-405494)     </rng>
	I0116 02:56:26.846842  487926 main.go:141] libmachine: (multinode-405494)     
	I0116 02:56:26.846857  487926 main.go:141] libmachine: (multinode-405494)     
	I0116 02:56:26.846872  487926 main.go:141] libmachine: (multinode-405494)   </devices>
	I0116 02:56:26.846884  487926 main.go:141] libmachine: (multinode-405494) </domain>
	I0116 02:56:26.846901  487926 main.go:141] libmachine: (multinode-405494) 
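Reassembled for readability, the domain definition logged line by line above is the following libvirt XML (content exactly as logged; empty placeholder lines dropped):

    <domain type='kvm'>
      <name>multinode-405494</name>
      <memory unit='MiB'>2200</memory>
      <vcpu>2</vcpu>
      <features>
        <acpi/>
        <apic/>
        <pae/>
      </features>
      <cpu mode='host-passthrough'>
      </cpu>
      <os>
        <type>hvm</type>
        <boot dev='cdrom'/>
        <boot dev='hd'/>
        <bootmenu enable='no'/>
      </os>
      <devices>
        <disk type='file' device='cdrom'>
          <source file='/home/jenkins/minikube-integration/17965-468241/.minikube/machines/multinode-405494/boot2docker.iso'/>
          <target dev='hdc' bus='scsi'/>
          <readonly/>
        </disk>
        <disk type='file' device='disk'>
          <driver name='qemu' type='raw' cache='default' io='threads' />
          <source file='/home/jenkins/minikube-integration/17965-468241/.minikube/machines/multinode-405494/multinode-405494.rawdisk'/>
          <target dev='hda' bus='virtio'/>
        </disk>
        <interface type='network'>
          <source network='mk-multinode-405494'/>
          <model type='virtio'/>
        </interface>
        <interface type='network'>
          <source network='default'/>
          <model type='virtio'/>
        </interface>
        <serial type='pty'>
          <target port='0'/>
        </serial>
        <console type='pty'>
          <target type='serial' port='0'/>
        </console>
        <rng model='virtio'>
          <backend model='random'>/dev/random</backend>
        </rng>
      </devices>
    </domain>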
	I0116 02:56:26.851145  487926 main.go:141] libmachine: (multinode-405494) DBG | domain multinode-405494 has defined MAC address 52:54:00:95:d6:78 in network default
	I0116 02:56:26.851766  487926 main.go:141] libmachine: (multinode-405494) Ensuring networks are active...
	I0116 02:56:26.851793  487926 main.go:141] libmachine: (multinode-405494) DBG | domain multinode-405494 has defined MAC address 52:54:00:b0:49:7b in network mk-multinode-405494
	I0116 02:56:26.852542  487926 main.go:141] libmachine: (multinode-405494) Ensuring network default is active
	I0116 02:56:26.852902  487926 main.go:141] libmachine: (multinode-405494) Ensuring network mk-multinode-405494 is active
	I0116 02:56:26.853470  487926 main.go:141] libmachine: (multinode-405494) Getting domain xml...
	I0116 02:56:26.854387  487926 main.go:141] libmachine: (multinode-405494) Creating domain...
	I0116 02:56:27.168535  487926 main.go:141] libmachine: (multinode-405494) Waiting to get IP...
	I0116 02:56:27.169407  487926 main.go:141] libmachine: (multinode-405494) DBG | domain multinode-405494 has defined MAC address 52:54:00:b0:49:7b in network mk-multinode-405494
	I0116 02:56:27.169776  487926 main.go:141] libmachine: (multinode-405494) DBG | unable to find current IP address of domain multinode-405494 in network mk-multinode-405494
	I0116 02:56:27.169824  487926 main.go:141] libmachine: (multinode-405494) DBG | I0116 02:56:27.169767  487948 retry.go:31] will retry after 275.261013ms: waiting for machine to come up
	I0116 02:56:27.446397  487926 main.go:141] libmachine: (multinode-405494) DBG | domain multinode-405494 has defined MAC address 52:54:00:b0:49:7b in network mk-multinode-405494
	I0116 02:56:27.446948  487926 main.go:141] libmachine: (multinode-405494) DBG | unable to find current IP address of domain multinode-405494 in network mk-multinode-405494
	I0116 02:56:27.446975  487926 main.go:141] libmachine: (multinode-405494) DBG | I0116 02:56:27.446900  487948 retry.go:31] will retry after 294.2457ms: waiting for machine to come up
	I0116 02:56:27.742670  487926 main.go:141] libmachine: (multinode-405494) DBG | domain multinode-405494 has defined MAC address 52:54:00:b0:49:7b in network mk-multinode-405494
	I0116 02:56:27.743238  487926 main.go:141] libmachine: (multinode-405494) DBG | unable to find current IP address of domain multinode-405494 in network mk-multinode-405494
	I0116 02:56:27.743271  487926 main.go:141] libmachine: (multinode-405494) DBG | I0116 02:56:27.743206  487948 retry.go:31] will retry after 455.069668ms: waiting for machine to come up
	I0116 02:56:28.199951  487926 main.go:141] libmachine: (multinode-405494) DBG | domain multinode-405494 has defined MAC address 52:54:00:b0:49:7b in network mk-multinode-405494
	I0116 02:56:28.200466  487926 main.go:141] libmachine: (multinode-405494) DBG | unable to find current IP address of domain multinode-405494 in network mk-multinode-405494
	I0116 02:56:28.200517  487926 main.go:141] libmachine: (multinode-405494) DBG | I0116 02:56:28.200387  487948 retry.go:31] will retry after 524.357962ms: waiting for machine to come up
	I0116 02:56:28.726134  487926 main.go:141] libmachine: (multinode-405494) DBG | domain multinode-405494 has defined MAC address 52:54:00:b0:49:7b in network mk-multinode-405494
	I0116 02:56:28.726631  487926 main.go:141] libmachine: (multinode-405494) DBG | unable to find current IP address of domain multinode-405494 in network mk-multinode-405494
	I0116 02:56:28.726662  487926 main.go:141] libmachine: (multinode-405494) DBG | I0116 02:56:28.726560  487948 retry.go:31] will retry after 487.574135ms: waiting for machine to come up
	I0116 02:56:29.215287  487926 main.go:141] libmachine: (multinode-405494) DBG | domain multinode-405494 has defined MAC address 52:54:00:b0:49:7b in network mk-multinode-405494
	I0116 02:56:29.215671  487926 main.go:141] libmachine: (multinode-405494) DBG | unable to find current IP address of domain multinode-405494 in network mk-multinode-405494
	I0116 02:56:29.215693  487926 main.go:141] libmachine: (multinode-405494) DBG | I0116 02:56:29.215641  487948 retry.go:31] will retry after 730.855365ms: waiting for machine to come up
	I0116 02:56:29.948808  487926 main.go:141] libmachine: (multinode-405494) DBG | domain multinode-405494 has defined MAC address 52:54:00:b0:49:7b in network mk-multinode-405494
	I0116 02:56:29.949243  487926 main.go:141] libmachine: (multinode-405494) DBG | unable to find current IP address of domain multinode-405494 in network mk-multinode-405494
	I0116 02:56:29.949280  487926 main.go:141] libmachine: (multinode-405494) DBG | I0116 02:56:29.949206  487948 retry.go:31] will retry after 1.026851636s: waiting for machine to come up
	I0116 02:56:30.977457  487926 main.go:141] libmachine: (multinode-405494) DBG | domain multinode-405494 has defined MAC address 52:54:00:b0:49:7b in network mk-multinode-405494
	I0116 02:56:30.977887  487926 main.go:141] libmachine: (multinode-405494) DBG | unable to find current IP address of domain multinode-405494 in network mk-multinode-405494
	I0116 02:56:30.977922  487926 main.go:141] libmachine: (multinode-405494) DBG | I0116 02:56:30.977821  487948 retry.go:31] will retry after 1.043312991s: waiting for machine to come up
	I0116 02:56:32.023064  487926 main.go:141] libmachine: (multinode-405494) DBG | domain multinode-405494 has defined MAC address 52:54:00:b0:49:7b in network mk-multinode-405494
	I0116 02:56:32.023679  487926 main.go:141] libmachine: (multinode-405494) DBG | unable to find current IP address of domain multinode-405494 in network mk-multinode-405494
	I0116 02:56:32.023798  487926 main.go:141] libmachine: (multinode-405494) DBG | I0116 02:56:32.023594  487948 retry.go:31] will retry after 1.605821623s: waiting for machine to come up
	I0116 02:56:33.631715  487926 main.go:141] libmachine: (multinode-405494) DBG | domain multinode-405494 has defined MAC address 52:54:00:b0:49:7b in network mk-multinode-405494
	I0116 02:56:33.632243  487926 main.go:141] libmachine: (multinode-405494) DBG | unable to find current IP address of domain multinode-405494 in network mk-multinode-405494
	I0116 02:56:33.632275  487926 main.go:141] libmachine: (multinode-405494) DBG | I0116 02:56:33.632182  487948 retry.go:31] will retry after 1.756116268s: waiting for machine to come up
	I0116 02:56:35.391211  487926 main.go:141] libmachine: (multinode-405494) DBG | domain multinode-405494 has defined MAC address 52:54:00:b0:49:7b in network mk-multinode-405494
	I0116 02:56:35.391679  487926 main.go:141] libmachine: (multinode-405494) DBG | unable to find current IP address of domain multinode-405494 in network mk-multinode-405494
	I0116 02:56:35.391712  487926 main.go:141] libmachine: (multinode-405494) DBG | I0116 02:56:35.391643  487948 retry.go:31] will retry after 2.365023509s: waiting for machine to come up
	I0116 02:56:37.758876  487926 main.go:141] libmachine: (multinode-405494) DBG | domain multinode-405494 has defined MAC address 52:54:00:b0:49:7b in network mk-multinode-405494
	I0116 02:56:37.759301  487926 main.go:141] libmachine: (multinode-405494) DBG | unable to find current IP address of domain multinode-405494 in network mk-multinode-405494
	I0116 02:56:37.759334  487926 main.go:141] libmachine: (multinode-405494) DBG | I0116 02:56:37.759243  487948 retry.go:31] will retry after 3.549732688s: waiting for machine to come up
	I0116 02:56:41.313146  487926 main.go:141] libmachine: (multinode-405494) DBG | domain multinode-405494 has defined MAC address 52:54:00:b0:49:7b in network mk-multinode-405494
	I0116 02:56:41.313539  487926 main.go:141] libmachine: (multinode-405494) DBG | unable to find current IP address of domain multinode-405494 in network mk-multinode-405494
	I0116 02:56:41.313568  487926 main.go:141] libmachine: (multinode-405494) DBG | I0116 02:56:41.313494  487948 retry.go:31] will retry after 2.882419127s: waiting for machine to come up
	I0116 02:56:44.197311  487926 main.go:141] libmachine: (multinode-405494) DBG | domain multinode-405494 has defined MAC address 52:54:00:b0:49:7b in network mk-multinode-405494
	I0116 02:56:44.197694  487926 main.go:141] libmachine: (multinode-405494) DBG | unable to find current IP address of domain multinode-405494 in network mk-multinode-405494
	I0116 02:56:44.197714  487926 main.go:141] libmachine: (multinode-405494) DBG | I0116 02:56:44.197676  487948 retry.go:31] will retry after 5.457195712s: waiting for machine to come up
	I0116 02:56:49.658461  487926 main.go:141] libmachine: (multinode-405494) DBG | domain multinode-405494 has defined MAC address 52:54:00:b0:49:7b in network mk-multinode-405494
	I0116 02:56:49.658921  487926 main.go:141] libmachine: (multinode-405494) Found IP for machine: 192.168.39.70
	I0116 02:56:49.658951  487926 main.go:141] libmachine: (multinode-405494) Reserving static IP address...
	I0116 02:56:49.658967  487926 main.go:141] libmachine: (multinode-405494) DBG | domain multinode-405494 has current primary IP address 192.168.39.70 and MAC address 52:54:00:b0:49:7b in network mk-multinode-405494
	I0116 02:56:49.659458  487926 main.go:141] libmachine: (multinode-405494) DBG | unable to find host DHCP lease matching {name: "multinode-405494", mac: "52:54:00:b0:49:7b", ip: "192.168.39.70"} in network mk-multinode-405494
	I0116 02:56:49.737875  487926 main.go:141] libmachine: (multinode-405494) Reserved static IP address: 192.168.39.70
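The retry.go lines above show the wait-for-IP loop backing off with roughly doubling, jittered delays (275ms, 294ms, 455ms, ... 5.46s) until the DHCP lease appears. A generic Go sketch of that pattern; the function name and backoff constants are illustrative, not minikube's own:

    package main

    import (
        "errors"
        "math/rand"
        "time"
    )

    // retryUntil calls fn until it succeeds or the deadline passes, sleeping a
    // roughly doubling, jittered delay between attempts, like the waits above.
    func retryUntil(fn func() error, initial time.Duration, deadline time.Time) error {
        delay := initial
        for {
            if err := fn(); err == nil {
                return nil
            }
            if time.Now().After(deadline) {
                return errors.New("timed out waiting for machine to come up")
            }
            jitter := time.Duration(rand.Int63n(int64(delay)/2 + 1))
            time.Sleep(delay + jitter)
            delay *= 2
        }
    }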
	I0116 02:56:49.737907  487926 main.go:141] libmachine: (multinode-405494) Waiting for SSH to be available...
	I0116 02:56:49.737914  487926 main.go:141] libmachine: (multinode-405494) DBG | Getting to WaitForSSH function...
	I0116 02:56:49.740643  487926 main.go:141] libmachine: (multinode-405494) DBG | domain multinode-405494 has defined MAC address 52:54:00:b0:49:7b in network mk-multinode-405494
	I0116 02:56:49.741036  487926 main.go:141] libmachine: (multinode-405494) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:49:7b", ip: ""} in network mk-multinode-405494: {Iface:virbr1 ExpiryTime:2024-01-16 03:56:41 +0000 UTC Type:0 Mac:52:54:00:b0:49:7b Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:minikube Clientid:01:52:54:00:b0:49:7b}
	I0116 02:56:49.741072  487926 main.go:141] libmachine: (multinode-405494) DBG | domain multinode-405494 has defined IP address 192.168.39.70 and MAC address 52:54:00:b0:49:7b in network mk-multinode-405494
	I0116 02:56:49.741173  487926 main.go:141] libmachine: (multinode-405494) DBG | Using SSH client type: external
	I0116 02:56:49.741195  487926 main.go:141] libmachine: (multinode-405494) DBG | Using SSH private key: /home/jenkins/minikube-integration/17965-468241/.minikube/machines/multinode-405494/id_rsa (-rw-------)
	I0116 02:56:49.741237  487926 main.go:141] libmachine: (multinode-405494) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.70 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17965-468241/.minikube/machines/multinode-405494/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0116 02:56:49.741255  487926 main.go:141] libmachine: (multinode-405494) DBG | About to run SSH command:
	I0116 02:56:49.741264  487926 main.go:141] libmachine: (multinode-405494) DBG | exit 0
	I0116 02:56:49.828568  487926 main.go:141] libmachine: (multinode-405494) DBG | SSH cmd err, output: <nil>: 
	I0116 02:56:49.828810  487926 main.go:141] libmachine: (multinode-405494) KVM machine creation complete!
	I0116 02:56:49.829210  487926 main.go:141] libmachine: (multinode-405494) Calling .GetConfigRaw
	I0116 02:56:49.830992  487926 main.go:141] libmachine: (multinode-405494) Calling .DriverName
	I0116 02:56:49.831219  487926 main.go:141] libmachine: (multinode-405494) Calling .DriverName
	I0116 02:56:49.831388  487926 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0116 02:56:49.831407  487926 main.go:141] libmachine: (multinode-405494) Calling .GetState
	I0116 02:56:49.832826  487926 main.go:141] libmachine: Detecting operating system of created instance...
	I0116 02:56:49.832842  487926 main.go:141] libmachine: Waiting for SSH to be available...
	I0116 02:56:49.832848  487926 main.go:141] libmachine: Getting to WaitForSSH function...
	I0116 02:56:49.832855  487926 main.go:141] libmachine: (multinode-405494) Calling .GetSSHHostname
	I0116 02:56:49.835736  487926 main.go:141] libmachine: (multinode-405494) DBG | domain multinode-405494 has defined MAC address 52:54:00:b0:49:7b in network mk-multinode-405494
	I0116 02:56:49.836093  487926 main.go:141] libmachine: (multinode-405494) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:49:7b", ip: ""} in network mk-multinode-405494: {Iface:virbr1 ExpiryTime:2024-01-16 03:56:41 +0000 UTC Type:0 Mac:52:54:00:b0:49:7b Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:multinode-405494 Clientid:01:52:54:00:b0:49:7b}
	I0116 02:56:49.836128  487926 main.go:141] libmachine: (multinode-405494) DBG | domain multinode-405494 has defined IP address 192.168.39.70 and MAC address 52:54:00:b0:49:7b in network mk-multinode-405494
	I0116 02:56:49.836370  487926 main.go:141] libmachine: (multinode-405494) Calling .GetSSHPort
	I0116 02:56:49.836621  487926 main.go:141] libmachine: (multinode-405494) Calling .GetSSHKeyPath
	I0116 02:56:49.836816  487926 main.go:141] libmachine: (multinode-405494) Calling .GetSSHKeyPath
	I0116 02:56:49.836940  487926 main.go:141] libmachine: (multinode-405494) Calling .GetSSHUsername
	I0116 02:56:49.837165  487926 main.go:141] libmachine: Using SSH client type: native
	I0116 02:56:49.837530  487926 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.70 22 <nil> <nil>}
	I0116 02:56:49.837544  487926 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0116 02:56:49.951762  487926 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0116 02:56:49.951790  487926 main.go:141] libmachine: Detecting the provisioner...
	I0116 02:56:49.951798  487926 main.go:141] libmachine: (multinode-405494) Calling .GetSSHHostname
	I0116 02:56:49.954644  487926 main.go:141] libmachine: (multinode-405494) DBG | domain multinode-405494 has defined MAC address 52:54:00:b0:49:7b in network mk-multinode-405494
	I0116 02:56:49.955061  487926 main.go:141] libmachine: (multinode-405494) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:49:7b", ip: ""} in network mk-multinode-405494: {Iface:virbr1 ExpiryTime:2024-01-16 03:56:41 +0000 UTC Type:0 Mac:52:54:00:b0:49:7b Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:multinode-405494 Clientid:01:52:54:00:b0:49:7b}
	I0116 02:56:49.955088  487926 main.go:141] libmachine: (multinode-405494) DBG | domain multinode-405494 has defined IP address 192.168.39.70 and MAC address 52:54:00:b0:49:7b in network mk-multinode-405494
	I0116 02:56:49.955278  487926 main.go:141] libmachine: (multinode-405494) Calling .GetSSHPort
	I0116 02:56:49.955539  487926 main.go:141] libmachine: (multinode-405494) Calling .GetSSHKeyPath
	I0116 02:56:49.955742  487926 main.go:141] libmachine: (multinode-405494) Calling .GetSSHKeyPath
	I0116 02:56:49.955886  487926 main.go:141] libmachine: (multinode-405494) Calling .GetSSHUsername
	I0116 02:56:49.956097  487926 main.go:141] libmachine: Using SSH client type: native
	I0116 02:56:49.956530  487926 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.70 22 <nil> <nil>}
	I0116 02:56:49.956552  487926 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0116 02:56:50.073024  487926 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g19d536a-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I0116 02:56:50.073132  487926 main.go:141] libmachine: found compatible host: buildroot
	I0116 02:56:50.073154  487926 main.go:141] libmachine: Provisioning with buildroot...
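The provisioner is chosen by running cat /etc/os-release over SSH and matching the distribution fields, which is how the "found compatible host: buildroot" decision above is reached. A tiny sketch of that match; the function name is made up for illustration:

    package main

    import "strings"

    // detectProvisioner returns the ID field from /etc/os-release output,
    // "buildroot" for the guest shown in this log.
    func detectProvisioner(osRelease string) string {
        for _, line := range strings.Split(osRelease, "\n") {
            if strings.HasPrefix(line, "ID=") {
                return strings.Trim(strings.TrimPrefix(line, "ID="), "\"")
            }
        }
        return "unknown"
    }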
	I0116 02:56:50.073167  487926 main.go:141] libmachine: (multinode-405494) Calling .GetMachineName
	I0116 02:56:50.073532  487926 buildroot.go:166] provisioning hostname "multinode-405494"
	I0116 02:56:50.073559  487926 main.go:141] libmachine: (multinode-405494) Calling .GetMachineName
	I0116 02:56:50.073757  487926 main.go:141] libmachine: (multinode-405494) Calling .GetSSHHostname
	I0116 02:56:50.076510  487926 main.go:141] libmachine: (multinode-405494) DBG | domain multinode-405494 has defined MAC address 52:54:00:b0:49:7b in network mk-multinode-405494
	I0116 02:56:50.076856  487926 main.go:141] libmachine: (multinode-405494) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:49:7b", ip: ""} in network mk-multinode-405494: {Iface:virbr1 ExpiryTime:2024-01-16 03:56:41 +0000 UTC Type:0 Mac:52:54:00:b0:49:7b Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:multinode-405494 Clientid:01:52:54:00:b0:49:7b}
	I0116 02:56:50.076881  487926 main.go:141] libmachine: (multinode-405494) DBG | domain multinode-405494 has defined IP address 192.168.39.70 and MAC address 52:54:00:b0:49:7b in network mk-multinode-405494
	I0116 02:56:50.077123  487926 main.go:141] libmachine: (multinode-405494) Calling .GetSSHPort
	I0116 02:56:50.077368  487926 main.go:141] libmachine: (multinode-405494) Calling .GetSSHKeyPath
	I0116 02:56:50.077564  487926 main.go:141] libmachine: (multinode-405494) Calling .GetSSHKeyPath
	I0116 02:56:50.077700  487926 main.go:141] libmachine: (multinode-405494) Calling .GetSSHUsername
	I0116 02:56:50.077888  487926 main.go:141] libmachine: Using SSH client type: native
	I0116 02:56:50.078213  487926 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.70 22 <nil> <nil>}
	I0116 02:56:50.078228  487926 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-405494 && echo "multinode-405494" | sudo tee /etc/hostname
	I0116 02:56:50.205756  487926 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-405494
	
	I0116 02:56:50.205786  487926 main.go:141] libmachine: (multinode-405494) Calling .GetSSHHostname
	I0116 02:56:50.208572  487926 main.go:141] libmachine: (multinode-405494) DBG | domain multinode-405494 has defined MAC address 52:54:00:b0:49:7b in network mk-multinode-405494
	I0116 02:56:50.209048  487926 main.go:141] libmachine: (multinode-405494) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:49:7b", ip: ""} in network mk-multinode-405494: {Iface:virbr1 ExpiryTime:2024-01-16 03:56:41 +0000 UTC Type:0 Mac:52:54:00:b0:49:7b Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:multinode-405494 Clientid:01:52:54:00:b0:49:7b}
	I0116 02:56:50.209086  487926 main.go:141] libmachine: (multinode-405494) DBG | domain multinode-405494 has defined IP address 192.168.39.70 and MAC address 52:54:00:b0:49:7b in network mk-multinode-405494
	I0116 02:56:50.209257  487926 main.go:141] libmachine: (multinode-405494) Calling .GetSSHPort
	I0116 02:56:50.209524  487926 main.go:141] libmachine: (multinode-405494) Calling .GetSSHKeyPath
	I0116 02:56:50.209742  487926 main.go:141] libmachine: (multinode-405494) Calling .GetSSHKeyPath
	I0116 02:56:50.209924  487926 main.go:141] libmachine: (multinode-405494) Calling .GetSSHUsername
	I0116 02:56:50.210161  487926 main.go:141] libmachine: Using SSH client type: native
	I0116 02:56:50.210481  487926 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.70 22 <nil> <nil>}
	I0116 02:56:50.210498  487926 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-405494' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-405494/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-405494' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0116 02:56:50.332945  487926 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0116 02:56:50.332985  487926 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17965-468241/.minikube CaCertPath:/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17965-468241/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17965-468241/.minikube}
	I0116 02:56:50.333048  487926 buildroot.go:174] setting up certificates
	I0116 02:56:50.333064  487926 provision.go:83] configureAuth start
	I0116 02:56:50.333080  487926 main.go:141] libmachine: (multinode-405494) Calling .GetMachineName
	I0116 02:56:50.333405  487926 main.go:141] libmachine: (multinode-405494) Calling .GetIP
	I0116 02:56:50.336176  487926 main.go:141] libmachine: (multinode-405494) DBG | domain multinode-405494 has defined MAC address 52:54:00:b0:49:7b in network mk-multinode-405494
	I0116 02:56:50.336587  487926 main.go:141] libmachine: (multinode-405494) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:49:7b", ip: ""} in network mk-multinode-405494: {Iface:virbr1 ExpiryTime:2024-01-16 03:56:41 +0000 UTC Type:0 Mac:52:54:00:b0:49:7b Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:multinode-405494 Clientid:01:52:54:00:b0:49:7b}
	I0116 02:56:50.336619  487926 main.go:141] libmachine: (multinode-405494) DBG | domain multinode-405494 has defined IP address 192.168.39.70 and MAC address 52:54:00:b0:49:7b in network mk-multinode-405494
	I0116 02:56:50.336874  487926 main.go:141] libmachine: (multinode-405494) Calling .GetSSHHostname
	I0116 02:56:50.339175  487926 main.go:141] libmachine: (multinode-405494) DBG | domain multinode-405494 has defined MAC address 52:54:00:b0:49:7b in network mk-multinode-405494
	I0116 02:56:50.339521  487926 main.go:141] libmachine: (multinode-405494) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:49:7b", ip: ""} in network mk-multinode-405494: {Iface:virbr1 ExpiryTime:2024-01-16 03:56:41 +0000 UTC Type:0 Mac:52:54:00:b0:49:7b Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:multinode-405494 Clientid:01:52:54:00:b0:49:7b}
	I0116 02:56:50.339560  487926 main.go:141] libmachine: (multinode-405494) DBG | domain multinode-405494 has defined IP address 192.168.39.70 and MAC address 52:54:00:b0:49:7b in network mk-multinode-405494
	I0116 02:56:50.339699  487926 provision.go:138] copyHostCerts
	I0116 02:56:50.339765  487926 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17965-468241/.minikube/ca.pem
	I0116 02:56:50.339814  487926 exec_runner.go:144] found /home/jenkins/minikube-integration/17965-468241/.minikube/ca.pem, removing ...
	I0116 02:56:50.339827  487926 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17965-468241/.minikube/ca.pem
	I0116 02:56:50.339895  487926 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17965-468241/.minikube/ca.pem (1078 bytes)
	I0116 02:56:50.340026  487926 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17965-468241/.minikube/cert.pem
	I0116 02:56:50.340072  487926 exec_runner.go:144] found /home/jenkins/minikube-integration/17965-468241/.minikube/cert.pem, removing ...
	I0116 02:56:50.340083  487926 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17965-468241/.minikube/cert.pem
	I0116 02:56:50.340117  487926 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17965-468241/.minikube/cert.pem (1123 bytes)
	I0116 02:56:50.340205  487926 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17965-468241/.minikube/key.pem
	I0116 02:56:50.340240  487926 exec_runner.go:144] found /home/jenkins/minikube-integration/17965-468241/.minikube/key.pem, removing ...
	I0116 02:56:50.340250  487926 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17965-468241/.minikube/key.pem
	I0116 02:56:50.340279  487926 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17965-468241/.minikube/key.pem (1679 bytes)
	I0116 02:56:50.340341  487926 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17965-468241/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca-key.pem org=jenkins.multinode-405494 san=[192.168.39.70 192.168.39.70 localhost 127.0.0.1 minikube multinode-405494]
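The line above issues a server certificate signed by the local CA, with SANs covering the VM IP, localhost, 127.0.0.1 and the hostnames listed in san=[...]. A function-level Go sketch of issuing such a certificate; this is a generic crypto/x509 example with assumed names, not minikube's own code:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "math/big"
        "net"
        "time"
    )

    // signServerCert issues a server certificate whose SANs match the mixed
    // IP/DNS list logged above, signed by an existing CA cert and key.
    func signServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey, hosts []string) ([]byte, *rsa.PrivateKey, error) {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return nil, nil, err
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(time.Now().UnixNano()),
            Subject:      pkix.Name{Organization: []string{"jenkins.multinode-405494"}},
            NotBefore:    time.Now().Add(-time.Hour),
            NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the config above
            KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        // Split hosts into IP SANs and DNS SANs, as in the san=[...] list.
        for _, h := range hosts {
            if ip := net.ParseIP(h); ip != nil {
                tmpl.IPAddresses = append(tmpl.IPAddresses, ip)
            } else {
                tmpl.DNSNames = append(tmpl.DNSNames, h)
            }
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
        return der, key, err
    }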
	I0116 02:56:50.505445  487926 provision.go:172] copyRemoteCerts
	I0116 02:56:50.505535  487926 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0116 02:56:50.505570  487926 main.go:141] libmachine: (multinode-405494) Calling .GetSSHHostname
	I0116 02:56:50.508414  487926 main.go:141] libmachine: (multinode-405494) DBG | domain multinode-405494 has defined MAC address 52:54:00:b0:49:7b in network mk-multinode-405494
	I0116 02:56:50.508748  487926 main.go:141] libmachine: (multinode-405494) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:49:7b", ip: ""} in network mk-multinode-405494: {Iface:virbr1 ExpiryTime:2024-01-16 03:56:41 +0000 UTC Type:0 Mac:52:54:00:b0:49:7b Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:multinode-405494 Clientid:01:52:54:00:b0:49:7b}
	I0116 02:56:50.508786  487926 main.go:141] libmachine: (multinode-405494) DBG | domain multinode-405494 has defined IP address 192.168.39.70 and MAC address 52:54:00:b0:49:7b in network mk-multinode-405494
	I0116 02:56:50.508937  487926 main.go:141] libmachine: (multinode-405494) Calling .GetSSHPort
	I0116 02:56:50.509131  487926 main.go:141] libmachine: (multinode-405494) Calling .GetSSHKeyPath
	I0116 02:56:50.509318  487926 main.go:141] libmachine: (multinode-405494) Calling .GetSSHUsername
	I0116 02:56:50.509484  487926 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/multinode-405494/id_rsa Username:docker}
	I0116 02:56:50.597646  487926 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0116 02:56:50.597730  487926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0116 02:56:50.621408  487926 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-468241/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0116 02:56:50.621511  487926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0116 02:56:50.645002  487926 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-468241/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0116 02:56:50.645097  487926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0116 02:56:50.668204  487926 provision.go:86] duration metric: configureAuth took 335.120837ms
	I0116 02:56:50.668246  487926 buildroot.go:189] setting minikube options for container-runtime
	I0116 02:56:50.668458  487926 config.go:182] Loaded profile config "multinode-405494": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 02:56:50.668565  487926 main.go:141] libmachine: (multinode-405494) Calling .GetSSHHostname
	I0116 02:56:50.671375  487926 main.go:141] libmachine: (multinode-405494) DBG | domain multinode-405494 has defined MAC address 52:54:00:b0:49:7b in network mk-multinode-405494
	I0116 02:56:50.671751  487926 main.go:141] libmachine: (multinode-405494) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:49:7b", ip: ""} in network mk-multinode-405494: {Iface:virbr1 ExpiryTime:2024-01-16 03:56:41 +0000 UTC Type:0 Mac:52:54:00:b0:49:7b Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:multinode-405494 Clientid:01:52:54:00:b0:49:7b}
	I0116 02:56:50.671800  487926 main.go:141] libmachine: (multinode-405494) DBG | domain multinode-405494 has defined IP address 192.168.39.70 and MAC address 52:54:00:b0:49:7b in network mk-multinode-405494
	I0116 02:56:50.671997  487926 main.go:141] libmachine: (multinode-405494) Calling .GetSSHPort
	I0116 02:56:50.672194  487926 main.go:141] libmachine: (multinode-405494) Calling .GetSSHKeyPath
	I0116 02:56:50.672375  487926 main.go:141] libmachine: (multinode-405494) Calling .GetSSHKeyPath
	I0116 02:56:50.672482  487926 main.go:141] libmachine: (multinode-405494) Calling .GetSSHUsername
	I0116 02:56:50.672692  487926 main.go:141] libmachine: Using SSH client type: native
	I0116 02:56:50.673030  487926 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.70 22 <nil> <nil>}
	I0116 02:56:50.673047  487926 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0116 02:56:50.970628  487926 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
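The %!s(MISSING) tokens in the command above are format-string artifacts in minikube's own log output, not part of the command that actually ran. Judging from the echoed result, the step writes /etc/sysconfig/crio.minikube containing the single line CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 ' and then restarts the runtime with systemctl restart crio.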
	
	I0116 02:56:50.970661  487926 main.go:141] libmachine: Checking connection to Docker...
	I0116 02:56:50.970670  487926 main.go:141] libmachine: (multinode-405494) Calling .GetURL
	I0116 02:56:50.972076  487926 main.go:141] libmachine: (multinode-405494) DBG | Using libvirt version 6000000
	I0116 02:56:50.974317  487926 main.go:141] libmachine: (multinode-405494) DBG | domain multinode-405494 has defined MAC address 52:54:00:b0:49:7b in network mk-multinode-405494
	I0116 02:56:50.974641  487926 main.go:141] libmachine: (multinode-405494) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:49:7b", ip: ""} in network mk-multinode-405494: {Iface:virbr1 ExpiryTime:2024-01-16 03:56:41 +0000 UTC Type:0 Mac:52:54:00:b0:49:7b Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:multinode-405494 Clientid:01:52:54:00:b0:49:7b}
	I0116 02:56:50.974672  487926 main.go:141] libmachine: (multinode-405494) DBG | domain multinode-405494 has defined IP address 192.168.39.70 and MAC address 52:54:00:b0:49:7b in network mk-multinode-405494
	I0116 02:56:50.974831  487926 main.go:141] libmachine: Docker is up and running!
	I0116 02:56:50.974848  487926 main.go:141] libmachine: Reticulating splines...
	I0116 02:56:50.974856  487926 client.go:171] LocalClient.Create took 24.633987993s
	I0116 02:56:50.974886  487926 start.go:167] duration metric: libmachine.API.Create for "multinode-405494" took 24.634064892s
	I0116 02:56:50.974899  487926 start.go:300] post-start starting for "multinode-405494" (driver="kvm2")
	I0116 02:56:50.974913  487926 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0116 02:56:50.974936  487926 main.go:141] libmachine: (multinode-405494) Calling .DriverName
	I0116 02:56:50.975212  487926 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0116 02:56:50.975250  487926 main.go:141] libmachine: (multinode-405494) Calling .GetSSHHostname
	I0116 02:56:50.977422  487926 main.go:141] libmachine: (multinode-405494) DBG | domain multinode-405494 has defined MAC address 52:54:00:b0:49:7b in network mk-multinode-405494
	I0116 02:56:50.977729  487926 main.go:141] libmachine: (multinode-405494) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:49:7b", ip: ""} in network mk-multinode-405494: {Iface:virbr1 ExpiryTime:2024-01-16 03:56:41 +0000 UTC Type:0 Mac:52:54:00:b0:49:7b Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:multinode-405494 Clientid:01:52:54:00:b0:49:7b}
	I0116 02:56:50.977760  487926 main.go:141] libmachine: (multinode-405494) DBG | domain multinode-405494 has defined IP address 192.168.39.70 and MAC address 52:54:00:b0:49:7b in network mk-multinode-405494
	I0116 02:56:50.977867  487926 main.go:141] libmachine: (multinode-405494) Calling .GetSSHPort
	I0116 02:56:50.978070  487926 main.go:141] libmachine: (multinode-405494) Calling .GetSSHKeyPath
	I0116 02:56:50.978236  487926 main.go:141] libmachine: (multinode-405494) Calling .GetSSHUsername
	I0116 02:56:50.978382  487926 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/multinode-405494/id_rsa Username:docker}
	I0116 02:56:51.065646  487926 ssh_runner.go:195] Run: cat /etc/os-release
	I0116 02:56:51.069562  487926 command_runner.go:130] > NAME=Buildroot
	I0116 02:56:51.069588  487926 command_runner.go:130] > VERSION=2021.02.12-1-g19d536a-dirty
	I0116 02:56:51.069595  487926 command_runner.go:130] > ID=buildroot
	I0116 02:56:51.069603  487926 command_runner.go:130] > VERSION_ID=2021.02.12
	I0116 02:56:51.069610  487926 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0116 02:56:51.069703  487926 info.go:137] Remote host: Buildroot 2021.02.12
	I0116 02:56:51.069729  487926 filesync.go:126] Scanning /home/jenkins/minikube-integration/17965-468241/.minikube/addons for local assets ...
	I0116 02:56:51.069803  487926 filesync.go:126] Scanning /home/jenkins/minikube-integration/17965-468241/.minikube/files for local assets ...
	I0116 02:56:51.069909  487926 filesync.go:149] local asset: /home/jenkins/minikube-integration/17965-468241/.minikube/files/etc/ssl/certs/4754782.pem -> 4754782.pem in /etc/ssl/certs
	I0116 02:56:51.069928  487926 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-468241/.minikube/files/etc/ssl/certs/4754782.pem -> /etc/ssl/certs/4754782.pem
	I0116 02:56:51.070060  487926 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0116 02:56:51.078754  487926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/files/etc/ssl/certs/4754782.pem --> /etc/ssl/certs/4754782.pem (1708 bytes)
	I0116 02:56:51.103551  487926 start.go:303] post-start completed in 128.6331ms
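The filesync lines above scan the .minikube/files tree and mirror anything found there onto the guest, which is how files/etc/ssl/certs/4754782.pem ends up at /etc/ssl/certs/4754782.pem. A small Go sketch of that local-asset mapping, with an assumed helper name:

    package main

    import (
        "io/fs"
        "path/filepath"
    )

    // localAssets maps every file under a local files root to its destination
    // path on the guest, as the filesync scan above does. Sketch only.
    func localAssets(root string) (map[string]string, error) {
        out := map[string]string{}
        err := filepath.WalkDir(root, func(p string, d fs.DirEntry, walkErr error) error {
            if walkErr != nil || d.IsDir() {
                return walkErr
            }
            rel, err := filepath.Rel(root, p)
            if err != nil {
                return err
            }
            out[p] = "/" + filepath.ToSlash(rel) // e.g. .../files/etc/ssl/certs/4754782.pem -> /etc/ssl/certs/4754782.pem
            return nil
        })
        return out, err
    }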
	I0116 02:56:51.103637  487926 main.go:141] libmachine: (multinode-405494) Calling .GetConfigRaw
	I0116 02:56:51.104326  487926 main.go:141] libmachine: (multinode-405494) Calling .GetIP
	I0116 02:56:51.107022  487926 main.go:141] libmachine: (multinode-405494) DBG | domain multinode-405494 has defined MAC address 52:54:00:b0:49:7b in network mk-multinode-405494
	I0116 02:56:51.107475  487926 main.go:141] libmachine: (multinode-405494) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:49:7b", ip: ""} in network mk-multinode-405494: {Iface:virbr1 ExpiryTime:2024-01-16 03:56:41 +0000 UTC Type:0 Mac:52:54:00:b0:49:7b Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:multinode-405494 Clientid:01:52:54:00:b0:49:7b}
	I0116 02:56:51.107510  487926 main.go:141] libmachine: (multinode-405494) DBG | domain multinode-405494 has defined IP address 192.168.39.70 and MAC address 52:54:00:b0:49:7b in network mk-multinode-405494
	I0116 02:56:51.107833  487926 profile.go:148] Saving config to /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/multinode-405494/config.json ...
	I0116 02:56:51.108099  487926 start.go:128] duration metric: createHost completed in 24.787281233s
	I0116 02:56:51.108132  487926 main.go:141] libmachine: (multinode-405494) Calling .GetSSHHostname
	I0116 02:56:51.110527  487926 main.go:141] libmachine: (multinode-405494) DBG | domain multinode-405494 has defined MAC address 52:54:00:b0:49:7b in network mk-multinode-405494
	I0116 02:56:51.110900  487926 main.go:141] libmachine: (multinode-405494) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:49:7b", ip: ""} in network mk-multinode-405494: {Iface:virbr1 ExpiryTime:2024-01-16 03:56:41 +0000 UTC Type:0 Mac:52:54:00:b0:49:7b Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:multinode-405494 Clientid:01:52:54:00:b0:49:7b}
	I0116 02:56:51.110924  487926 main.go:141] libmachine: (multinode-405494) DBG | domain multinode-405494 has defined IP address 192.168.39.70 and MAC address 52:54:00:b0:49:7b in network mk-multinode-405494
	I0116 02:56:51.111095  487926 main.go:141] libmachine: (multinode-405494) Calling .GetSSHPort
	I0116 02:56:51.111326  487926 main.go:141] libmachine: (multinode-405494) Calling .GetSSHKeyPath
	I0116 02:56:51.111522  487926 main.go:141] libmachine: (multinode-405494) Calling .GetSSHKeyPath
	I0116 02:56:51.111658  487926 main.go:141] libmachine: (multinode-405494) Calling .GetSSHUsername
	I0116 02:56:51.111846  487926 main.go:141] libmachine: Using SSH client type: native
	I0116 02:56:51.112201  487926 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.70 22 <nil> <nil>}
	I0116 02:56:51.112214  487926 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0116 02:56:51.229087  487926 main.go:141] libmachine: SSH cmd err, output: <nil>: 1705373811.212068830
	
	I0116 02:56:51.229118  487926 fix.go:206] guest clock: 1705373811.212068830
	I0116 02:56:51.229125  487926 fix.go:219] Guest: 2024-01-16 02:56:51.21206883 +0000 UTC Remote: 2024-01-16 02:56:51.108116987 +0000 UTC m=+24.913193746 (delta=103.951843ms)
	I0116 02:56:51.229148  487926 fix.go:190] guest clock delta is within tolerance: 103.951843ms
	I0116 02:56:51.229155  487926 start.go:83] releasing machines lock for "multinode-405494", held for 24.908441806s
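The guest clock probe a few lines up is presumably date +%s.%N (the %!s(MISSING)/%!N(MISSING) tokens are the same logging artifact noted earlier), since the reply 1705373811.212068830 is a seconds.nanoseconds timestamp. The fix.go lines then compare it against the host clock and accept the roughly 104ms delta. A minimal Go sketch of that tolerance check, with an assumed tolerance value:

    package main

    import "time"

    // withinTolerance reports whether guest and host clocks differ by at most tol.
    func withinTolerance(guest, host time.Time, tol time.Duration) bool {
        d := guest.Sub(host)
        if d < 0 {
            d = -d
        }
        return d <= tol
    }

    func main() {
        guest := time.Unix(1705373811, 212068830) // parsed from the date +%s.%N reply above
        _ = withinTolerance(guest, time.Now(), 2*time.Second) // tolerance value assumed for the sketch
    }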
	I0116 02:56:51.229180  487926 main.go:141] libmachine: (multinode-405494) Calling .DriverName
	I0116 02:56:51.229474  487926 main.go:141] libmachine: (multinode-405494) Calling .GetIP
	I0116 02:56:51.232224  487926 main.go:141] libmachine: (multinode-405494) DBG | domain multinode-405494 has defined MAC address 52:54:00:b0:49:7b in network mk-multinode-405494
	I0116 02:56:51.232535  487926 main.go:141] libmachine: (multinode-405494) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:49:7b", ip: ""} in network mk-multinode-405494: {Iface:virbr1 ExpiryTime:2024-01-16 03:56:41 +0000 UTC Type:0 Mac:52:54:00:b0:49:7b Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:multinode-405494 Clientid:01:52:54:00:b0:49:7b}
	I0116 02:56:51.232566  487926 main.go:141] libmachine: (multinode-405494) DBG | domain multinode-405494 has defined IP address 192.168.39.70 and MAC address 52:54:00:b0:49:7b in network mk-multinode-405494
	I0116 02:56:51.232835  487926 main.go:141] libmachine: (multinode-405494) Calling .DriverName
	I0116 02:56:51.233404  487926 main.go:141] libmachine: (multinode-405494) Calling .DriverName
	I0116 02:56:51.233625  487926 main.go:141] libmachine: (multinode-405494) Calling .DriverName
	I0116 02:56:51.233745  487926 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0116 02:56:51.233802  487926 main.go:141] libmachine: (multinode-405494) Calling .GetSSHHostname
	I0116 02:56:51.233890  487926 ssh_runner.go:195] Run: cat /version.json
	I0116 02:56:51.233919  487926 main.go:141] libmachine: (multinode-405494) Calling .GetSSHHostname
	I0116 02:56:51.236502  487926 main.go:141] libmachine: (multinode-405494) DBG | domain multinode-405494 has defined MAC address 52:54:00:b0:49:7b in network mk-multinode-405494
	I0116 02:56:51.236624  487926 main.go:141] libmachine: (multinode-405494) DBG | domain multinode-405494 has defined MAC address 52:54:00:b0:49:7b in network mk-multinode-405494
	I0116 02:56:51.236896  487926 main.go:141] libmachine: (multinode-405494) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:49:7b", ip: ""} in network mk-multinode-405494: {Iface:virbr1 ExpiryTime:2024-01-16 03:56:41 +0000 UTC Type:0 Mac:52:54:00:b0:49:7b Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:multinode-405494 Clientid:01:52:54:00:b0:49:7b}
	I0116 02:56:51.236926  487926 main.go:141] libmachine: (multinode-405494) DBG | domain multinode-405494 has defined IP address 192.168.39.70 and MAC address 52:54:00:b0:49:7b in network mk-multinode-405494
	I0116 02:56:51.237055  487926 main.go:141] libmachine: (multinode-405494) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:49:7b", ip: ""} in network mk-multinode-405494: {Iface:virbr1 ExpiryTime:2024-01-16 03:56:41 +0000 UTC Type:0 Mac:52:54:00:b0:49:7b Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:multinode-405494 Clientid:01:52:54:00:b0:49:7b}
	I0116 02:56:51.237080  487926 main.go:141] libmachine: (multinode-405494) DBG | domain multinode-405494 has defined IP address 192.168.39.70 and MAC address 52:54:00:b0:49:7b in network mk-multinode-405494
	I0116 02:56:51.237095  487926 main.go:141] libmachine: (multinode-405494) Calling .GetSSHPort
	I0116 02:56:51.237245  487926 main.go:141] libmachine: (multinode-405494) Calling .GetSSHPort
	I0116 02:56:51.237321  487926 main.go:141] libmachine: (multinode-405494) Calling .GetSSHKeyPath
	I0116 02:56:51.237406  487926 main.go:141] libmachine: (multinode-405494) Calling .GetSSHKeyPath
	I0116 02:56:51.237503  487926 main.go:141] libmachine: (multinode-405494) Calling .GetSSHUsername
	I0116 02:56:51.237592  487926 main.go:141] libmachine: (multinode-405494) Calling .GetSSHUsername
	I0116 02:56:51.237757  487926 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/multinode-405494/id_rsa Username:docker}
	I0116 02:56:51.237760  487926 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/multinode-405494/id_rsa Username:docker}
	I0116 02:56:51.354456  487926 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0116 02:56:51.354592  487926 command_runner.go:130] > {"iso_version": "v1.32.1-1703784139-17866", "kicbase_version": "v0.0.42-1703723663-17866", "minikube_version": "v1.32.0", "commit": "eb69424d8f623d7cabea57d4395ce87adf1b5fc3"}
	I0116 02:56:51.354767  487926 ssh_runner.go:195] Run: systemctl --version
	I0116 02:56:51.361020  487926 command_runner.go:130] > systemd 247 (247)
	I0116 02:56:51.361075  487926 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I0116 02:56:51.361150  487926 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0116 02:56:51.527150  487926 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0116 02:56:51.533272  487926 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0116 02:56:51.533326  487926 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0116 02:56:51.533391  487926 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0116 02:56:51.551637  487926 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0116 02:56:51.551686  487926 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
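	The find/mv pipeline logged above side-lines any bridge or podman CNI definitions by appending .mk_disabled, so only the CNI that minikube manages gets picked up. A rough standalone equivalent, assuming direct filesystem access on the guest instead of the ssh_runner used in the log (the helper name is illustrative, and running it requires root):

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableConflictingCNI renames bridge/podman CNI configs in dir so that the
// kubelet no longer loads them, mirroring the *.mk_disabled rename in the log.
func disableConflictingCNI(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var disabled []string
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				return disabled, err
			}
			disabled = append(disabled, src)
		}
	}
	return disabled, nil
}

func main() {
	disabled, err := disableConflictingCNI("/etc/cni/net.d")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("disabled:", disabled)
}
```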
	I0116 02:56:51.551696  487926 start.go:475] detecting cgroup driver to use...
	I0116 02:56:51.551781  487926 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0116 02:56:51.567447  487926 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0116 02:56:51.582448  487926 docker.go:217] disabling cri-docker service (if available) ...
	I0116 02:56:51.582528  487926 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0116 02:56:51.597558  487926 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0116 02:56:51.612533  487926 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0116 02:56:51.722847  487926 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/cri-docker.socket.
	I0116 02:56:51.722955  487926 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0116 02:56:51.736572  487926 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I0116 02:56:51.834952  487926 docker.go:233] disabling docker service ...
	I0116 02:56:51.835054  487926 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0116 02:56:51.849843  487926 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0116 02:56:51.862465  487926 command_runner.go:130] ! Failed to stop docker.service: Unit docker.service not loaded.
	I0116 02:56:51.862816  487926 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0116 02:56:51.978737  487926 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I0116 02:56:51.978834  487926 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0116 02:56:51.992073  487926 command_runner.go:130] ! Unit docker.service does not exist, proceeding anyway.
	I0116 02:56:51.992449  487926 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I0116 02:56:52.089210  487926 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0116 02:56:52.102772  487926 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0116 02:56:52.122982  487926 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0116 02:56:52.123423  487926 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0116 02:56:52.123495  487926 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 02:56:52.134087  487926 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0116 02:56:52.134154  487926 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 02:56:52.145168  487926 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 02:56:52.156197  487926 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
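	The sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf so CRI-O uses the pause:3.9 image, the cgroupfs cgroup manager, and a per-pod conmon cgroup. A sketch of the same substitutions applied in-process; the sample input fragment and the exact ordering are assumptions, since the real path edits the file on the guest over SSH:

```go
package main

import (
	"fmt"
	"regexp"
)

func main() {
	// A made-up 02-crio.conf fragment standing in for the real file on the guest.
	conf := `pause_image = "registry.k8s.io/pause:3.8"
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
`
	pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	conmon := regexp.MustCompile(`(?m)^.*conmon_cgroup = .*\n`)
	cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)

	out := pause.ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
	out = conmon.ReplaceAllString(out, "") // mirrors: sed '/conmon_cgroup = .*/d'
	// mirrors: sed 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	// followed by the '/a conmon_cgroup = "pod"' append on the next line.
	out = cgroup.ReplaceAllString(out, "cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"")
	fmt.Print(out)
}
```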
	I0116 02:56:52.166984  487926 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0116 02:56:52.178429  487926 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0116 02:56:52.188349  487926 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0116 02:56:52.188413  487926 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0116 02:56:52.188481  487926 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0116 02:56:52.203696  487926 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
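	The sysctl/modprobe/echo sequence above is the usual bridge-netfilter preparation: when the bridge-nf-call-iptables key is missing, load br_netfilter, then turn on IPv4 forwarding. A standalone sketch of the same checks (needs root to actually modify anything; error handling kept minimal):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
		// Key missing: the br_netfilter module is not loaded yet.
		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
			fmt.Fprintf(os.Stderr, "modprobe br_netfilter: %v: %s\n", err, out)
			return
		}
	}
	// Equivalent of: echo 1 > /proc/sys/net/ipv4/ip_forward
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644); err != nil {
		fmt.Fprintln(os.Stderr, "enable ip_forward:", err)
		return
	}
	fmt.Println("bridge-nf-call-iptables present and ip_forward enabled")
}
```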
	I0116 02:56:52.213561  487926 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0116 02:56:52.332430  487926 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0116 02:56:52.502007  487926 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0116 02:56:52.502103  487926 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0116 02:56:52.507060  487926 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0116 02:56:52.507102  487926 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0116 02:56:52.507114  487926 command_runner.go:130] > Device: 16h/22d	Inode: 806         Links: 1
	I0116 02:56:52.507125  487926 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0116 02:56:52.507133  487926 command_runner.go:130] > Access: 2024-01-16 02:56:52.470411376 +0000
	I0116 02:56:52.507142  487926 command_runner.go:130] > Modify: 2024-01-16 02:56:52.470411376 +0000
	I0116 02:56:52.507150  487926 command_runner.go:130] > Change: 2024-01-16 02:56:52.470411376 +0000
	I0116 02:56:52.507162  487926 command_runner.go:130] >  Birth: -
	I0116 02:56:52.507195  487926 start.go:543] Will wait 60s for crictl version
	I0116 02:56:52.507246  487926 ssh_runner.go:195] Run: which crictl
	I0116 02:56:52.511630  487926 command_runner.go:130] > /usr/bin/crictl
	I0116 02:56:52.511897  487926 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0116 02:56:52.550694  487926 command_runner.go:130] > Version:  0.1.0
	I0116 02:56:52.550725  487926 command_runner.go:130] > RuntimeName:  cri-o
	I0116 02:56:52.550748  487926 command_runner.go:130] > RuntimeVersion:  1.24.1
	I0116 02:56:52.550756  487926 command_runner.go:130] > RuntimeApiVersion:  v1
	I0116 02:56:52.550858  487926 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0116 02:56:52.550958  487926 ssh_runner.go:195] Run: crio --version
	I0116 02:56:52.598613  487926 command_runner.go:130] > crio version 1.24.1
	I0116 02:56:52.598637  487926 command_runner.go:130] > Version:          1.24.1
	I0116 02:56:52.598644  487926 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0116 02:56:52.598654  487926 command_runner.go:130] > GitTreeState:     dirty
	I0116 02:56:52.598664  487926 command_runner.go:130] > BuildDate:        2023-12-28T22:46:29Z
	I0116 02:56:52.598671  487926 command_runner.go:130] > GoVersion:        go1.19.9
	I0116 02:56:52.598678  487926 command_runner.go:130] > Compiler:         gc
	I0116 02:56:52.598686  487926 command_runner.go:130] > Platform:         linux/amd64
	I0116 02:56:52.598697  487926 command_runner.go:130] > Linkmode:         dynamic
	I0116 02:56:52.598706  487926 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0116 02:56:52.598714  487926 command_runner.go:130] > SeccompEnabled:   true
	I0116 02:56:52.598717  487926 command_runner.go:130] > AppArmorEnabled:  false
	I0116 02:56:52.600111  487926 ssh_runner.go:195] Run: crio --version
	I0116 02:56:52.654170  487926 command_runner.go:130] > crio version 1.24.1
	I0116 02:56:52.654203  487926 command_runner.go:130] > Version:          1.24.1
	I0116 02:56:52.654215  487926 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0116 02:56:52.654223  487926 command_runner.go:130] > GitTreeState:     dirty
	I0116 02:56:52.654232  487926 command_runner.go:130] > BuildDate:        2023-12-28T22:46:29Z
	I0116 02:56:52.654241  487926 command_runner.go:130] > GoVersion:        go1.19.9
	I0116 02:56:52.654248  487926 command_runner.go:130] > Compiler:         gc
	I0116 02:56:52.654257  487926 command_runner.go:130] > Platform:         linux/amd64
	I0116 02:56:52.654271  487926 command_runner.go:130] > Linkmode:         dynamic
	I0116 02:56:52.654279  487926 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0116 02:56:52.654287  487926 command_runner.go:130] > SeccompEnabled:   true
	I0116 02:56:52.654295  487926 command_runner.go:130] > AppArmorEnabled:  false
	I0116 02:56:52.656752  487926 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0116 02:56:52.658403  487926 main.go:141] libmachine: (multinode-405494) Calling .GetIP
	I0116 02:56:52.661261  487926 main.go:141] libmachine: (multinode-405494) DBG | domain multinode-405494 has defined MAC address 52:54:00:b0:49:7b in network mk-multinode-405494
	I0116 02:56:52.661628  487926 main.go:141] libmachine: (multinode-405494) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:49:7b", ip: ""} in network mk-multinode-405494: {Iface:virbr1 ExpiryTime:2024-01-16 03:56:41 +0000 UTC Type:0 Mac:52:54:00:b0:49:7b Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:multinode-405494 Clientid:01:52:54:00:b0:49:7b}
	I0116 02:56:52.661679  487926 main.go:141] libmachine: (multinode-405494) DBG | domain multinode-405494 has defined IP address 192.168.39.70 and MAC address 52:54:00:b0:49:7b in network mk-multinode-405494
	I0116 02:56:52.661869  487926 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0116 02:56:52.666645  487926 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 02:56:52.680821  487926 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0116 02:56:52.680893  487926 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 02:56:52.715517  487926 command_runner.go:130] > {
	I0116 02:56:52.715551  487926 command_runner.go:130] >   "images": [
	I0116 02:56:52.715555  487926 command_runner.go:130] >   ]
	I0116 02:56:52.715559  487926 command_runner.go:130] > }
	I0116 02:56:52.716926  487926 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0116 02:56:52.717008  487926 ssh_runner.go:195] Run: which lz4
	I0116 02:56:52.721267  487926 command_runner.go:130] > /usr/bin/lz4
	I0116 02:56:52.721316  487926 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-468241/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0116 02:56:52.721433  487926 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0116 02:56:52.725896  487926 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0116 02:56:52.726003  487926 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0116 02:56:52.726037  487926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
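	The stat plus scp above is the preload fast path: if /preloaded.tar.lz4 is absent on the guest, the cached tarball is copied over instead of pulling every image individually. A small local-only sketch of that decision, reusing the cache path printed in the log; the actual copy over SSH is omitted:

```go
package main

import (
	"fmt"
	"os"
)

func main() {
	// Path taken verbatim from the log above.
	const cached = "/home/jenkins/minikube-integration/17965-468241/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4"

	info, err := os.Stat(cached)
	if err != nil {
		fmt.Fprintln(os.Stderr, "no local preload cache, images would be pulled instead:", err)
		return
	}
	fmt.Printf("would copy %s (%d bytes) to /preloaded.tar.lz4 on the guest\n", cached, info.Size())
}
```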
	I0116 02:56:54.532634  487926 crio.go:444] Took 1.811247 seconds to copy over tarball
	I0116 02:56:54.532712  487926 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0116 02:56:57.460832  487926 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.928084694s)
	I0116 02:56:57.460877  487926 crio.go:451] Took 2.928216 seconds to extract the tarball
	I0116 02:56:57.460890  487926 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0116 02:56:57.507057  487926 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 02:56:57.588522  487926 command_runner.go:130] > {
	I0116 02:56:57.588565  487926 command_runner.go:130] >   "images": [
	I0116 02:56:57.588571  487926 command_runner.go:130] >     {
	I0116 02:56:57.588584  487926 command_runner.go:130] >       "id": "c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc",
	I0116 02:56:57.588592  487926 command_runner.go:130] >       "repoTags": [
	I0116 02:56:57.588602  487926 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I0116 02:56:57.588608  487926 command_runner.go:130] >       ],
	I0116 02:56:57.588612  487926 command_runner.go:130] >       "repoDigests": [
	I0116 02:56:57.588625  487926 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I0116 02:56:57.588639  487926 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"
	I0116 02:56:57.588643  487926 command_runner.go:130] >       ],
	I0116 02:56:57.588647  487926 command_runner.go:130] >       "size": "65258016",
	I0116 02:56:57.588651  487926 command_runner.go:130] >       "uid": null,
	I0116 02:56:57.588656  487926 command_runner.go:130] >       "username": "",
	I0116 02:56:57.588665  487926 command_runner.go:130] >       "spec": null,
	I0116 02:56:57.588672  487926 command_runner.go:130] >       "pinned": false
	I0116 02:56:57.588676  487926 command_runner.go:130] >     },
	I0116 02:56:57.588679  487926 command_runner.go:130] >     {
	I0116 02:56:57.588685  487926 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0116 02:56:57.588701  487926 command_runner.go:130] >       "repoTags": [
	I0116 02:56:57.588708  487926 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0116 02:56:57.588712  487926 command_runner.go:130] >       ],
	I0116 02:56:57.588717  487926 command_runner.go:130] >       "repoDigests": [
	I0116 02:56:57.588724  487926 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0116 02:56:57.588734  487926 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0116 02:56:57.588738  487926 command_runner.go:130] >       ],
	I0116 02:56:57.588744  487926 command_runner.go:130] >       "size": "31470524",
	I0116 02:56:57.588749  487926 command_runner.go:130] >       "uid": null,
	I0116 02:56:57.588753  487926 command_runner.go:130] >       "username": "",
	I0116 02:56:57.588760  487926 command_runner.go:130] >       "spec": null,
	I0116 02:56:57.588764  487926 command_runner.go:130] >       "pinned": false
	I0116 02:56:57.588768  487926 command_runner.go:130] >     },
	I0116 02:56:57.588774  487926 command_runner.go:130] >     {
	I0116 02:56:57.588780  487926 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I0116 02:56:57.588784  487926 command_runner.go:130] >       "repoTags": [
	I0116 02:56:57.588789  487926 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I0116 02:56:57.588795  487926 command_runner.go:130] >       ],
	I0116 02:56:57.588802  487926 command_runner.go:130] >       "repoDigests": [
	I0116 02:56:57.588812  487926 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I0116 02:56:57.588819  487926 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I0116 02:56:57.588823  487926 command_runner.go:130] >       ],
	I0116 02:56:57.588828  487926 command_runner.go:130] >       "size": "53621675",
	I0116 02:56:57.588833  487926 command_runner.go:130] >       "uid": null,
	I0116 02:56:57.588838  487926 command_runner.go:130] >       "username": "",
	I0116 02:56:57.588844  487926 command_runner.go:130] >       "spec": null,
	I0116 02:56:57.588849  487926 command_runner.go:130] >       "pinned": false
	I0116 02:56:57.588852  487926 command_runner.go:130] >     },
	I0116 02:56:57.588858  487926 command_runner.go:130] >     {
	I0116 02:56:57.588864  487926 command_runner.go:130] >       "id": "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9",
	I0116 02:56:57.588874  487926 command_runner.go:130] >       "repoTags": [
	I0116 02:56:57.588879  487926 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I0116 02:56:57.588885  487926 command_runner.go:130] >       ],
	I0116 02:56:57.588889  487926 command_runner.go:130] >       "repoDigests": [
	I0116 02:56:57.588897  487926 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15",
	I0116 02:56:57.588905  487926 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"
	I0116 02:56:57.588917  487926 command_runner.go:130] >       ],
	I0116 02:56:57.588928  487926 command_runner.go:130] >       "size": "295456551",
	I0116 02:56:57.588932  487926 command_runner.go:130] >       "uid": {
	I0116 02:56:57.588936  487926 command_runner.go:130] >         "value": "0"
	I0116 02:56:57.588940  487926 command_runner.go:130] >       },
	I0116 02:56:57.588943  487926 command_runner.go:130] >       "username": "",
	I0116 02:56:57.588950  487926 command_runner.go:130] >       "spec": null,
	I0116 02:56:57.588954  487926 command_runner.go:130] >       "pinned": false
	I0116 02:56:57.588960  487926 command_runner.go:130] >     },
	I0116 02:56:57.588964  487926 command_runner.go:130] >     {
	I0116 02:56:57.588972  487926 command_runner.go:130] >       "id": "7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257",
	I0116 02:56:57.588976  487926 command_runner.go:130] >       "repoTags": [
	I0116 02:56:57.588994  487926 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.4"
	I0116 02:56:57.588997  487926 command_runner.go:130] >       ],
	I0116 02:56:57.589005  487926 command_runner.go:130] >       "repoDigests": [
	I0116 02:56:57.589012  487926 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499",
	I0116 02:56:57.589022  487926 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"
	I0116 02:56:57.589028  487926 command_runner.go:130] >       ],
	I0116 02:56:57.589035  487926 command_runner.go:130] >       "size": "127226832",
	I0116 02:56:57.589041  487926 command_runner.go:130] >       "uid": {
	I0116 02:56:57.589046  487926 command_runner.go:130] >         "value": "0"
	I0116 02:56:57.589051  487926 command_runner.go:130] >       },
	I0116 02:56:57.589056  487926 command_runner.go:130] >       "username": "",
	I0116 02:56:57.589062  487926 command_runner.go:130] >       "spec": null,
	I0116 02:56:57.589066  487926 command_runner.go:130] >       "pinned": false
	I0116 02:56:57.589072  487926 command_runner.go:130] >     },
	I0116 02:56:57.589076  487926 command_runner.go:130] >     {
	I0116 02:56:57.589083  487926 command_runner.go:130] >       "id": "d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591",
	I0116 02:56:57.589089  487926 command_runner.go:130] >       "repoTags": [
	I0116 02:56:57.589094  487926 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.4"
	I0116 02:56:57.589100  487926 command_runner.go:130] >       ],
	I0116 02:56:57.589104  487926 command_runner.go:130] >       "repoDigests": [
	I0116 02:56:57.589114  487926 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c",
	I0116 02:56:57.589123  487926 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232"
	I0116 02:56:57.589129  487926 command_runner.go:130] >       ],
	I0116 02:56:57.589134  487926 command_runner.go:130] >       "size": "123261750",
	I0116 02:56:57.589142  487926 command_runner.go:130] >       "uid": {
	I0116 02:56:57.589149  487926 command_runner.go:130] >         "value": "0"
	I0116 02:56:57.589152  487926 command_runner.go:130] >       },
	I0116 02:56:57.589157  487926 command_runner.go:130] >       "username": "",
	I0116 02:56:57.589162  487926 command_runner.go:130] >       "spec": null,
	I0116 02:56:57.589166  487926 command_runner.go:130] >       "pinned": false
	I0116 02:56:57.589171  487926 command_runner.go:130] >     },
	I0116 02:56:57.589175  487926 command_runner.go:130] >     {
	I0116 02:56:57.589183  487926 command_runner.go:130] >       "id": "83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e",
	I0116 02:56:57.589187  487926 command_runner.go:130] >       "repoTags": [
	I0116 02:56:57.589197  487926 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.4"
	I0116 02:56:57.589203  487926 command_runner.go:130] >       ],
	I0116 02:56:57.589210  487926 command_runner.go:130] >       "repoDigests": [
	I0116 02:56:57.589222  487926 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e",
	I0116 02:56:57.589237  487926 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"
	I0116 02:56:57.589243  487926 command_runner.go:130] >       ],
	I0116 02:56:57.589250  487926 command_runner.go:130] >       "size": "74749335",
	I0116 02:56:57.589260  487926 command_runner.go:130] >       "uid": null,
	I0116 02:56:57.589271  487926 command_runner.go:130] >       "username": "",
	I0116 02:56:57.589280  487926 command_runner.go:130] >       "spec": null,
	I0116 02:56:57.589287  487926 command_runner.go:130] >       "pinned": false
	I0116 02:56:57.589293  487926 command_runner.go:130] >     },
	I0116 02:56:57.589299  487926 command_runner.go:130] >     {
	I0116 02:56:57.589307  487926 command_runner.go:130] >       "id": "e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1",
	I0116 02:56:57.589314  487926 command_runner.go:130] >       "repoTags": [
	I0116 02:56:57.589319  487926 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.4"
	I0116 02:56:57.589323  487926 command_runner.go:130] >       ],
	I0116 02:56:57.589327  487926 command_runner.go:130] >       "repoDigests": [
	I0116 02:56:57.589375  487926 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba",
	I0116 02:56:57.589391  487926 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32"
	I0116 02:56:57.589394  487926 command_runner.go:130] >       ],
	I0116 02:56:57.589398  487926 command_runner.go:130] >       "size": "61551410",
	I0116 02:56:57.589402  487926 command_runner.go:130] >       "uid": {
	I0116 02:56:57.589406  487926 command_runner.go:130] >         "value": "0"
	I0116 02:56:57.589412  487926 command_runner.go:130] >       },
	I0116 02:56:57.589416  487926 command_runner.go:130] >       "username": "",
	I0116 02:56:57.589426  487926 command_runner.go:130] >       "spec": null,
	I0116 02:56:57.589431  487926 command_runner.go:130] >       "pinned": false
	I0116 02:56:57.589434  487926 command_runner.go:130] >     },
	I0116 02:56:57.589438  487926 command_runner.go:130] >     {
	I0116 02:56:57.589446  487926 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0116 02:56:57.589451  487926 command_runner.go:130] >       "repoTags": [
	I0116 02:56:57.589456  487926 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0116 02:56:57.589460  487926 command_runner.go:130] >       ],
	I0116 02:56:57.589467  487926 command_runner.go:130] >       "repoDigests": [
	I0116 02:56:57.589474  487926 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0116 02:56:57.589483  487926 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0116 02:56:57.589489  487926 command_runner.go:130] >       ],
	I0116 02:56:57.589493  487926 command_runner.go:130] >       "size": "750414",
	I0116 02:56:57.589499  487926 command_runner.go:130] >       "uid": {
	I0116 02:56:57.589504  487926 command_runner.go:130] >         "value": "65535"
	I0116 02:56:57.589510  487926 command_runner.go:130] >       },
	I0116 02:56:57.589514  487926 command_runner.go:130] >       "username": "",
	I0116 02:56:57.589520  487926 command_runner.go:130] >       "spec": null,
	I0116 02:56:57.589527  487926 command_runner.go:130] >       "pinned": false
	I0116 02:56:57.589533  487926 command_runner.go:130] >     }
	I0116 02:56:57.589537  487926 command_runner.go:130] >   ]
	I0116 02:56:57.589543  487926 command_runner.go:130] > }
	I0116 02:56:57.589660  487926 crio.go:496] all images are preloaded for cri-o runtime.
	I0116 02:56:57.589672  487926 cache_images.go:84] Images are preloaded, skipping loading
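	The JSON dump above is what `sudo crictl images --output json` returns after the tarball is extracted, and the images are considered preloaded once the expected kube-apiserver tag shows up. A sketch of that check, assuming only the JSON shape shown above (the struct and helper names are illustrative, not minikube's internals):

```go
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

// crictlImages mirrors the shape of `crictl images --output json` in the log:
// a single "images" array whose entries carry repoTags.
type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// hasImage reports whether any listed image carries the given tag.
func hasImage(out []byte, tag string) (bool, error) {
	var parsed crictlImages
	if err := json.Unmarshal(out, &parsed); err != nil {
		return false, err
	}
	for _, img := range parsed.Images {
		for _, t := range img.RepoTags {
			if strings.Contains(t, tag) {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		fmt.Println("crictl not available here:", err)
		return
	}
	ok, err := hasImage(out, "registry.k8s.io/kube-apiserver:v1.28.4")
	fmt.Println("kube-apiserver preloaded:", ok, err)
}
```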
	I0116 02:56:57.589738  487926 ssh_runner.go:195] Run: crio config
	I0116 02:56:57.646147  487926 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0116 02:56:57.646185  487926 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0116 02:56:57.646196  487926 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0116 02:56:57.646202  487926 command_runner.go:130] > #
	I0116 02:56:57.646213  487926 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0116 02:56:57.646223  487926 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0116 02:56:57.646233  487926 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0116 02:56:57.646244  487926 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0116 02:56:57.646256  487926 command_runner.go:130] > # reload'.
	I0116 02:56:57.646271  487926 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0116 02:56:57.646291  487926 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0116 02:56:57.646302  487926 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0116 02:56:57.646311  487926 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0116 02:56:57.646321  487926 command_runner.go:130] > [crio]
	I0116 02:56:57.646331  487926 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0116 02:56:57.646353  487926 command_runner.go:130] > # containers images, in this directory.
	I0116 02:56:57.646361  487926 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0116 02:56:57.646380  487926 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0116 02:56:57.646390  487926 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0116 02:56:57.646400  487926 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0116 02:56:57.646414  487926 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0116 02:56:57.646423  487926 command_runner.go:130] > storage_driver = "overlay"
	I0116 02:56:57.646433  487926 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0116 02:56:57.646447  487926 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0116 02:56:57.646456  487926 command_runner.go:130] > storage_option = [
	I0116 02:56:57.646464  487926 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0116 02:56:57.646473  487926 command_runner.go:130] > ]
	I0116 02:56:57.646483  487926 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0116 02:56:57.646499  487926 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0116 02:56:57.646510  487926 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0116 02:56:57.646518  487926 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0116 02:56:57.646532  487926 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0116 02:56:57.646543  487926 command_runner.go:130] > # always happen on a node reboot
	I0116 02:56:57.646554  487926 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0116 02:56:57.646568  487926 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0116 02:56:57.646581  487926 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0116 02:56:57.646604  487926 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0116 02:56:57.646629  487926 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0116 02:56:57.646643  487926 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0116 02:56:57.646661  487926 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0116 02:56:57.646672  487926 command_runner.go:130] > # internal_wipe = true
	I0116 02:56:57.646683  487926 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0116 02:56:57.646693  487926 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0116 02:56:57.646706  487926 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0116 02:56:57.646719  487926 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0116 02:56:57.646732  487926 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0116 02:56:57.646749  487926 command_runner.go:130] > [crio.api]
	I0116 02:56:57.646763  487926 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0116 02:56:57.646774  487926 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0116 02:56:57.646787  487926 command_runner.go:130] > # IP address on which the stream server will listen.
	I0116 02:56:57.646801  487926 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0116 02:56:57.646816  487926 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0116 02:56:57.646827  487926 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0116 02:56:57.646834  487926 command_runner.go:130] > # stream_port = "0"
	I0116 02:56:57.646846  487926 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0116 02:56:57.646856  487926 command_runner.go:130] > # stream_enable_tls = false
	I0116 02:56:57.646867  487926 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0116 02:56:57.646877  487926 command_runner.go:130] > # stream_idle_timeout = ""
	I0116 02:56:57.646888  487926 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0116 02:56:57.646901  487926 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0116 02:56:57.646911  487926 command_runner.go:130] > # minutes.
	I0116 02:56:57.646919  487926 command_runner.go:130] > # stream_tls_cert = ""
	I0116 02:56:57.646931  487926 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0116 02:56:57.646951  487926 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0116 02:56:57.646962  487926 command_runner.go:130] > # stream_tls_key = ""
	I0116 02:56:57.646969  487926 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0116 02:56:57.646986  487926 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0116 02:56:57.646999  487926 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0116 02:56:57.647007  487926 command_runner.go:130] > # stream_tls_ca = ""
	I0116 02:56:57.647019  487926 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0116 02:56:57.647030  487926 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0116 02:56:57.647046  487926 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0116 02:56:57.647056  487926 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0116 02:56:57.647084  487926 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0116 02:56:57.647104  487926 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0116 02:56:57.647110  487926 command_runner.go:130] > [crio.runtime]
	I0116 02:56:57.647119  487926 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0116 02:56:57.647128  487926 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0116 02:56:57.647134  487926 command_runner.go:130] > # "nofile=1024:2048"
	I0116 02:56:57.647148  487926 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0116 02:56:57.647158  487926 command_runner.go:130] > # default_ulimits = [
	I0116 02:56:57.647164  487926 command_runner.go:130] > # ]
	I0116 02:56:57.647181  487926 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0116 02:56:57.647191  487926 command_runner.go:130] > # no_pivot = false
	I0116 02:56:57.647200  487926 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0116 02:56:57.647213  487926 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0116 02:56:57.647225  487926 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0116 02:56:57.647235  487926 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0116 02:56:57.647247  487926 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0116 02:56:57.647262  487926 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0116 02:56:57.647273  487926 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0116 02:56:57.647284  487926 command_runner.go:130] > # Cgroup setting for conmon
	I0116 02:56:57.647299  487926 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0116 02:56:57.647310  487926 command_runner.go:130] > conmon_cgroup = "pod"
	I0116 02:56:57.647321  487926 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0116 02:56:57.647334  487926 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0116 02:56:57.647349  487926 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0116 02:56:57.647359  487926 command_runner.go:130] > conmon_env = [
	I0116 02:56:57.647369  487926 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0116 02:56:57.647377  487926 command_runner.go:130] > ]
	I0116 02:56:57.647390  487926 command_runner.go:130] > # Additional environment variables to set for all the
	I0116 02:56:57.647402  487926 command_runner.go:130] > # containers. These are overridden if set in the
	I0116 02:56:57.647412  487926 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0116 02:56:57.647423  487926 command_runner.go:130] > # default_env = [
	I0116 02:56:57.647432  487926 command_runner.go:130] > # ]
	I0116 02:56:57.647441  487926 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0116 02:56:57.647451  487926 command_runner.go:130] > # selinux = false
	I0116 02:56:57.647461  487926 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0116 02:56:57.647473  487926 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0116 02:56:57.647485  487926 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0116 02:56:57.647494  487926 command_runner.go:130] > # seccomp_profile = ""
	I0116 02:56:57.647504  487926 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0116 02:56:57.647516  487926 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0116 02:56:57.647527  487926 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0116 02:56:57.647537  487926 command_runner.go:130] > # which might increase security.
	I0116 02:56:57.647547  487926 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0116 02:56:57.647558  487926 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0116 02:56:57.647571  487926 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0116 02:56:57.647590  487926 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0116 02:56:57.647602  487926 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0116 02:56:57.647611  487926 command_runner.go:130] > # This option supports live configuration reload.
	I0116 02:56:57.647626  487926 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0116 02:56:57.647637  487926 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0116 02:56:57.647646  487926 command_runner.go:130] > # the cgroup blockio controller.
	I0116 02:56:57.647657  487926 command_runner.go:130] > # blockio_config_file = ""
	I0116 02:56:57.647673  487926 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0116 02:56:57.647684  487926 command_runner.go:130] > # irqbalance daemon.
	I0116 02:56:57.647694  487926 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0116 02:56:57.647712  487926 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0116 02:56:57.647724  487926 command_runner.go:130] > # This option supports live configuration reload.
	I0116 02:56:57.647731  487926 command_runner.go:130] > # rdt_config_file = ""
	I0116 02:56:57.647742  487926 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0116 02:56:57.647753  487926 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0116 02:56:57.647763  487926 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0116 02:56:57.647773  487926 command_runner.go:130] > # separate_pull_cgroup = ""
	I0116 02:56:57.647786  487926 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0116 02:56:57.647805  487926 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0116 02:56:57.647821  487926 command_runner.go:130] > # will be added.
	I0116 02:56:57.647829  487926 command_runner.go:130] > # default_capabilities = [
	I0116 02:56:57.647839  487926 command_runner.go:130] > # 	"CHOWN",
	I0116 02:56:57.647846  487926 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0116 02:56:57.647854  487926 command_runner.go:130] > # 	"FSETID",
	I0116 02:56:57.647864  487926 command_runner.go:130] > # 	"FOWNER",
	I0116 02:56:57.647871  487926 command_runner.go:130] > # 	"SETGID",
	I0116 02:56:57.647881  487926 command_runner.go:130] > # 	"SETUID",
	I0116 02:56:57.647888  487926 command_runner.go:130] > # 	"SETPCAP",
	I0116 02:56:57.647899  487926 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0116 02:56:57.647908  487926 command_runner.go:130] > # 	"KILL",
	I0116 02:56:57.647914  487926 command_runner.go:130] > # ]
	I0116 02:56:57.647926  487926 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0116 02:56:57.647939  487926 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0116 02:56:57.647945  487926 command_runner.go:130] > # default_sysctls = [
	I0116 02:56:57.647955  487926 command_runner.go:130] > # ]
	I0116 02:56:57.647963  487926 command_runner.go:130] > # List of devices on the host that a
	I0116 02:56:57.647988  487926 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0116 02:56:57.647998  487926 command_runner.go:130] > # allowed_devices = [
	I0116 02:56:57.648004  487926 command_runner.go:130] > # 	"/dev/fuse",
	I0116 02:56:57.648014  487926 command_runner.go:130] > # ]
	I0116 02:56:57.648021  487926 command_runner.go:130] > # List of additional devices. specified as
	I0116 02:56:57.648054  487926 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0116 02:56:57.648067  487926 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0116 02:56:57.648112  487926 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0116 02:56:57.648123  487926 command_runner.go:130] > # additional_devices = [
	I0116 02:56:57.648129  487926 command_runner.go:130] > # ]
	I0116 02:56:57.648140  487926 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0116 02:56:57.648147  487926 command_runner.go:130] > # cdi_spec_dirs = [
	I0116 02:56:57.648157  487926 command_runner.go:130] > # 	"/etc/cdi",
	I0116 02:56:57.648164  487926 command_runner.go:130] > # 	"/var/run/cdi",
	I0116 02:56:57.648172  487926 command_runner.go:130] > # ]
	I0116 02:56:57.648182  487926 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0116 02:56:57.648196  487926 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0116 02:56:57.648205  487926 command_runner.go:130] > # Defaults to false.
	I0116 02:56:57.648217  487926 command_runner.go:130] > # device_ownership_from_security_context = false
	I0116 02:56:57.648230  487926 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0116 02:56:57.648239  487926 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0116 02:56:57.648249  487926 command_runner.go:130] > # hooks_dir = [
	I0116 02:56:57.648257  487926 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0116 02:56:57.648265  487926 command_runner.go:130] > # ]
	I0116 02:56:57.648274  487926 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0116 02:56:57.648287  487926 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0116 02:56:57.648298  487926 command_runner.go:130] > # its default mounts from the following two files:
	I0116 02:56:57.648306  487926 command_runner.go:130] > #
	I0116 02:56:57.648316  487926 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0116 02:56:57.648329  487926 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0116 02:56:57.648339  487926 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0116 02:56:57.648348  487926 command_runner.go:130] > #
	I0116 02:56:57.648357  487926 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0116 02:56:57.648371  487926 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0116 02:56:57.648386  487926 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0116 02:56:57.648399  487926 command_runner.go:130] > #      only add mounts it finds in this file.
	I0116 02:56:57.648414  487926 command_runner.go:130] > #
	I0116 02:56:57.648426  487926 command_runner.go:130] > # default_mounts_file = ""
	I0116 02:56:57.648437  487926 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0116 02:56:57.648452  487926 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0116 02:56:57.648462  487926 command_runner.go:130] > pids_limit = 1024
	I0116 02:56:57.648477  487926 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0116 02:56:57.648490  487926 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0116 02:56:57.648503  487926 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0116 02:56:57.648518  487926 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0116 02:56:57.648525  487926 command_runner.go:130] > # log_size_max = -1
	I0116 02:56:57.648541  487926 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kuberentes log file
	I0116 02:56:57.648552  487926 command_runner.go:130] > # log_to_journald = false
	I0116 02:56:57.648567  487926 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0116 02:56:57.648579  487926 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0116 02:56:57.648592  487926 command_runner.go:130] > # Path to directory for container attach sockets.
	I0116 02:56:57.648604  487926 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0116 02:56:57.648616  487926 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0116 02:56:57.648627  487926 command_runner.go:130] > # bind_mount_prefix = ""
	I0116 02:56:57.648648  487926 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0116 02:56:57.648657  487926 command_runner.go:130] > # read_only = false
	I0116 02:56:57.648666  487926 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0116 02:56:57.648683  487926 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0116 02:56:57.648694  487926 command_runner.go:130] > # live configuration reload.
	I0116 02:56:57.648702  487926 command_runner.go:130] > # log_level = "info"
	I0116 02:56:57.648715  487926 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0116 02:56:57.648728  487926 command_runner.go:130] > # This option supports live configuration reload.
	I0116 02:56:57.648737  487926 command_runner.go:130] > # log_filter = ""
	I0116 02:56:57.648746  487926 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0116 02:56:57.648759  487926 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0116 02:56:57.648773  487926 command_runner.go:130] > # separated by comma.
	I0116 02:56:57.648783  487926 command_runner.go:130] > # uid_mappings = ""
	I0116 02:56:57.648792  487926 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0116 02:56:57.648803  487926 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0116 02:56:57.648808  487926 command_runner.go:130] > # separated by comma.
	I0116 02:56:57.648816  487926 command_runner.go:130] > # gid_mappings = ""
	I0116 02:56:57.648825  487926 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0116 02:56:57.648842  487926 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0116 02:56:57.648854  487926 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0116 02:56:57.648863  487926 command_runner.go:130] > # minimum_mappable_uid = -1
	I0116 02:56:57.648876  487926 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0116 02:56:57.648888  487926 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0116 02:56:57.648897  487926 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0116 02:56:57.648901  487926 command_runner.go:130] > # minimum_mappable_gid = -1
	I0116 02:56:57.648907  487926 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0116 02:56:57.648914  487926 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0116 02:56:57.648920  487926 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0116 02:56:57.648926  487926 command_runner.go:130] > # ctr_stop_timeout = 30
	I0116 02:56:57.648932  487926 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0116 02:56:57.648939  487926 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0116 02:56:57.648944  487926 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0116 02:56:57.648951  487926 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0116 02:56:57.648956  487926 command_runner.go:130] > drop_infra_ctr = false
	I0116 02:56:57.648962  487926 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0116 02:56:57.648967  487926 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0116 02:56:57.648977  487926 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0116 02:56:57.648989  487926 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0116 02:56:57.648995  487926 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0116 02:56:57.649000  487926 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0116 02:56:57.649005  487926 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0116 02:56:57.649018  487926 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0116 02:56:57.649028  487926 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0116 02:56:57.649041  487926 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0116 02:56:57.649053  487926 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0116 02:56:57.649066  487926 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0116 02:56:57.649077  487926 command_runner.go:130] > # default_runtime = "runc"
	I0116 02:56:57.649085  487926 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0116 02:56:57.649103  487926 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0116 02:56:57.649121  487926 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0116 02:56:57.649129  487926 command_runner.go:130] > # creation as a file is not desired either.
	I0116 02:56:57.649137  487926 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0116 02:56:57.649144  487926 command_runner.go:130] > # the hostname is being managed dynamically.
	I0116 02:56:57.649151  487926 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0116 02:56:57.649164  487926 command_runner.go:130] > # ]
	I0116 02:56:57.649174  487926 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0116 02:56:57.649189  487926 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0116 02:56:57.649206  487926 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0116 02:56:57.649218  487926 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0116 02:56:57.649227  487926 command_runner.go:130] > #
	I0116 02:56:57.649235  487926 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0116 02:56:57.649246  487926 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0116 02:56:57.649253  487926 command_runner.go:130] > #  runtime_type = "oci"
	I0116 02:56:57.649263  487926 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0116 02:56:57.649274  487926 command_runner.go:130] > #  privileged_without_host_devices = false
	I0116 02:56:57.649281  487926 command_runner.go:130] > #  allowed_annotations = []
	I0116 02:56:57.649290  487926 command_runner.go:130] > # Where:
	I0116 02:56:57.649299  487926 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0116 02:56:57.649311  487926 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0116 02:56:57.649325  487926 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0116 02:56:57.649338  487926 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0116 02:56:57.649345  487926 command_runner.go:130] > #   in $PATH.
	I0116 02:56:57.649358  487926 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0116 02:56:57.649366  487926 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0116 02:56:57.649372  487926 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0116 02:56:57.649378  487926 command_runner.go:130] > #   state.
	I0116 02:56:57.649384  487926 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0116 02:56:57.649393  487926 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0116 02:56:57.649399  487926 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0116 02:56:57.649406  487926 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0116 02:56:57.649412  487926 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0116 02:56:57.649421  487926 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0116 02:56:57.649426  487926 command_runner.go:130] > #   The currently recognized values are:
	I0116 02:56:57.649432  487926 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0116 02:56:57.649443  487926 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0116 02:56:57.649451  487926 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0116 02:56:57.649457  487926 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0116 02:56:57.649466  487926 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0116 02:56:57.649473  487926 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0116 02:56:57.649483  487926 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0116 02:56:57.649496  487926 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0116 02:56:57.649503  487926 command_runner.go:130] > #   should be moved to the container's cgroup
	I0116 02:56:57.649507  487926 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0116 02:56:57.649511  487926 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0116 02:56:57.649518  487926 command_runner.go:130] > runtime_type = "oci"
	I0116 02:56:57.649522  487926 command_runner.go:130] > runtime_root = "/run/runc"
	I0116 02:56:57.649528  487926 command_runner.go:130] > runtime_config_path = ""
	I0116 02:56:57.649532  487926 command_runner.go:130] > monitor_path = ""
	I0116 02:56:57.649536  487926 command_runner.go:130] > monitor_cgroup = ""
	I0116 02:56:57.649541  487926 command_runner.go:130] > monitor_exec_cgroup = ""
	I0116 02:56:57.649549  487926 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0116 02:56:57.649553  487926 command_runner.go:130] > # running containers
	I0116 02:56:57.649559  487926 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0116 02:56:57.649565  487926 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0116 02:56:57.649618  487926 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0116 02:56:57.649636  487926 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I0116 02:56:57.649641  487926 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0116 02:56:57.649645  487926 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0116 02:56:57.649652  487926 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0116 02:56:57.649656  487926 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0116 02:56:57.649661  487926 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0116 02:56:57.649668  487926 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I0116 02:56:57.649674  487926 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0116 02:56:57.649681  487926 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0116 02:56:57.649687  487926 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0116 02:56:57.649696  487926 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0116 02:56:57.649706  487926 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0116 02:56:57.649712  487926 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0116 02:56:57.649721  487926 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0116 02:56:57.649734  487926 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0116 02:56:57.649742  487926 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0116 02:56:57.649749  487926 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0116 02:56:57.649755  487926 command_runner.go:130] > # Example:
	I0116 02:56:57.649760  487926 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0116 02:56:57.649765  487926 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0116 02:56:57.649772  487926 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0116 02:56:57.649780  487926 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0116 02:56:57.649785  487926 command_runner.go:130] > # cpuset = 0
	I0116 02:56:57.649789  487926 command_runner.go:130] > # cpushares = "0-1"
	I0116 02:56:57.649793  487926 command_runner.go:130] > # Where:
	I0116 02:56:57.649798  487926 command_runner.go:130] > # The workload name is workload-type.
	I0116 02:56:57.649807  487926 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0116 02:56:57.649813  487926 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0116 02:56:57.649820  487926 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0116 02:56:57.649827  487926 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0116 02:56:57.649835  487926 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0116 02:56:57.649839  487926 command_runner.go:130] > # 
	I0116 02:56:57.649845  487926 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0116 02:56:57.649851  487926 command_runner.go:130] > #
	I0116 02:56:57.649856  487926 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0116 02:56:57.649864  487926 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0116 02:56:57.649870  487926 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0116 02:56:57.649878  487926 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0116 02:56:57.649884  487926 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0116 02:56:57.649892  487926 command_runner.go:130] > [crio.image]
	I0116 02:56:57.649900  487926 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0116 02:56:57.649906  487926 command_runner.go:130] > # default_transport = "docker://"
	I0116 02:56:57.649912  487926 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0116 02:56:57.649918  487926 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0116 02:56:57.649925  487926 command_runner.go:130] > # global_auth_file = ""
	I0116 02:56:57.649930  487926 command_runner.go:130] > # The image used to instantiate infra containers.
	I0116 02:56:57.649937  487926 command_runner.go:130] > # This option supports live configuration reload.
	I0116 02:56:57.649942  487926 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0116 02:56:57.649950  487926 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0116 02:56:57.649956  487926 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0116 02:56:57.649961  487926 command_runner.go:130] > # This option supports live configuration reload.
	I0116 02:56:57.649967  487926 command_runner.go:130] > # pause_image_auth_file = ""
	I0116 02:56:57.649972  487926 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0116 02:56:57.649982  487926 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0116 02:56:57.649988  487926 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0116 02:56:57.649993  487926 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0116 02:56:57.649997  487926 command_runner.go:130] > # pause_command = "/pause"
	I0116 02:56:57.650005  487926 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0116 02:56:57.650011  487926 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0116 02:56:57.650017  487926 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0116 02:56:57.650022  487926 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0116 02:56:57.650027  487926 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0116 02:56:57.650031  487926 command_runner.go:130] > # signature_policy = ""
	I0116 02:56:57.650037  487926 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0116 02:56:57.650046  487926 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0116 02:56:57.650053  487926 command_runner.go:130] > # changing them here.
	I0116 02:56:57.650057  487926 command_runner.go:130] > # insecure_registries = [
	I0116 02:56:57.650063  487926 command_runner.go:130] > # ]
	I0116 02:56:57.650068  487926 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0116 02:56:57.650076  487926 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0116 02:56:57.650080  487926 command_runner.go:130] > # image_volumes = "mkdir"
	I0116 02:56:57.650087  487926 command_runner.go:130] > # Temporary directory to use for storing big files
	I0116 02:56:57.650091  487926 command_runner.go:130] > # big_files_temporary_dir = ""
	I0116 02:56:57.650097  487926 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0116 02:56:57.650101  487926 command_runner.go:130] > # CNI plugins.
	I0116 02:56:57.650107  487926 command_runner.go:130] > [crio.network]
	I0116 02:56:57.650116  487926 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0116 02:56:57.650121  487926 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0116 02:56:57.650128  487926 command_runner.go:130] > # cni_default_network = ""
	I0116 02:56:57.650136  487926 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0116 02:56:57.650147  487926 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0116 02:56:57.650169  487926 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0116 02:56:57.650180  487926 command_runner.go:130] > # plugin_dirs = [
	I0116 02:56:57.650186  487926 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0116 02:56:57.650194  487926 command_runner.go:130] > # ]
	I0116 02:56:57.650204  487926 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0116 02:56:57.650213  487926 command_runner.go:130] > [crio.metrics]
	I0116 02:56:57.650221  487926 command_runner.go:130] > # Globally enable or disable metrics support.
	I0116 02:56:57.650230  487926 command_runner.go:130] > enable_metrics = true
	I0116 02:56:57.650237  487926 command_runner.go:130] > # Specify enabled metrics collectors.
	I0116 02:56:57.650247  487926 command_runner.go:130] > # Per default all metrics are enabled.
	I0116 02:56:57.650256  487926 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0116 02:56:57.650265  487926 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0116 02:56:57.650274  487926 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0116 02:56:57.650280  487926 command_runner.go:130] > # metrics_collectors = [
	I0116 02:56:57.650284  487926 command_runner.go:130] > # 	"operations",
	I0116 02:56:57.650290  487926 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0116 02:56:57.650294  487926 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0116 02:56:57.650300  487926 command_runner.go:130] > # 	"operations_errors",
	I0116 02:56:57.650304  487926 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0116 02:56:57.650313  487926 command_runner.go:130] > # 	"image_pulls_by_name",
	I0116 02:56:57.650317  487926 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0116 02:56:57.650321  487926 command_runner.go:130] > # 	"image_pulls_failures",
	I0116 02:56:57.650326  487926 command_runner.go:130] > # 	"image_pulls_successes",
	I0116 02:56:57.650332  487926 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0116 02:56:57.650337  487926 command_runner.go:130] > # 	"image_layer_reuse",
	I0116 02:56:57.650343  487926 command_runner.go:130] > # 	"containers_oom_total",
	I0116 02:56:57.650347  487926 command_runner.go:130] > # 	"containers_oom",
	I0116 02:56:57.650355  487926 command_runner.go:130] > # 	"processes_defunct",
	I0116 02:56:57.650361  487926 command_runner.go:130] > # 	"operations_total",
	I0116 02:56:57.650365  487926 command_runner.go:130] > # 	"operations_latency_seconds",
	I0116 02:56:57.650372  487926 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0116 02:56:57.650377  487926 command_runner.go:130] > # 	"operations_errors_total",
	I0116 02:56:57.650382  487926 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0116 02:56:57.650388  487926 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0116 02:56:57.650393  487926 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0116 02:56:57.650401  487926 command_runner.go:130] > # 	"image_pulls_success_total",
	I0116 02:56:57.650405  487926 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0116 02:56:57.650409  487926 command_runner.go:130] > # 	"containers_oom_count_total",
	I0116 02:56:57.650413  487926 command_runner.go:130] > # ]
	I0116 02:56:57.650418  487926 command_runner.go:130] > # The port on which the metrics server will listen.
	I0116 02:56:57.650425  487926 command_runner.go:130] > # metrics_port = 9090
	I0116 02:56:57.650432  487926 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0116 02:56:57.650441  487926 command_runner.go:130] > # metrics_socket = ""
	I0116 02:56:57.650450  487926 command_runner.go:130] > # The certificate for the secure metrics server.
	I0116 02:56:57.650462  487926 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0116 02:56:57.650474  487926 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0116 02:56:57.650486  487926 command_runner.go:130] > # certificate on any modification event.
	I0116 02:56:57.650493  487926 command_runner.go:130] > # metrics_cert = ""
	I0116 02:56:57.650502  487926 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0116 02:56:57.650510  487926 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0116 02:56:57.650514  487926 command_runner.go:130] > # metrics_key = ""
	I0116 02:56:57.650520  487926 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0116 02:56:57.650524  487926 command_runner.go:130] > [crio.tracing]
	I0116 02:56:57.650530  487926 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0116 02:56:57.650536  487926 command_runner.go:130] > # enable_tracing = false
	I0116 02:56:57.650543  487926 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0116 02:56:57.650550  487926 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0116 02:56:57.650555  487926 command_runner.go:130] > # Number of samples to collect per million spans.
	I0116 02:56:57.650562  487926 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0116 02:56:57.650568  487926 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0116 02:56:57.650572  487926 command_runner.go:130] > [crio.stats]
	I0116 02:56:57.650579  487926 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0116 02:56:57.650584  487926 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0116 02:56:57.650589  487926 command_runner.go:130] > # stats_collection_period = 0
	I0116 02:56:57.650842  487926 command_runner.go:130] ! time="2024-01-16 02:56:57.635708621Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I0116 02:56:57.650867  487926 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
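The dump above is the crio.conf that minikube renders inside the guest before restarting CRI-O, followed by CRI-O's own startup messages on stderr. To compare it against what is actually on disk in a profile like this one, the file (and any drop-in overrides) can be read back over minikube's SSH wrapper; a minimal sketch, not part of this test run:

	# Read back the generated CRI-O configuration from the guest (sketch).
	minikube ssh -p multinode-405494 -- sudo cat /etc/crio/crio.conf
	minikube ssh -p multinode-405494 -- sudo ls /etc/crio/crio.conf.d/
	# Local tweaks belong in a drop-in (e.g. a hypothetical 99-local.conf);
	# CRI-O then needs a restart to pick them up.
	minikube ssh -p multinode-405494 -- sudo systemctl restart crio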
	I0116 02:56:57.651096  487926 cni.go:84] Creating CNI manager for ""
	I0116 02:56:57.651114  487926 cni.go:136] 1 nodes found, recommending kindnet
	I0116 02:56:57.651136  487926 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0116 02:56:57.651162  487926 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.70 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-405494 NodeName:multinode-405494 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.70"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.70 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0116 02:56:57.651372  487926 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.70
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-405494"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.70
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.70"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
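	The manifest above is the kubeadm configuration minikube renders from the options logged at kubeadm.go:176: an InitConfiguration, a ClusterConfiguration, a KubeletConfiguration, and a KubeProxyConfiguration, written to /var/tmp/minikube/kubeadm.yaml.new (the 2100-byte scp a few lines below). A config in this shape can be checked by hand on the guest by running only kubeadm's preflight phase against it with the cached binary; a sketch, assuming shell access to the node:

	# Run just the preflight checks against the rendered config (sketch).
	sudo /var/lib/minikube/binaries/v1.28.4/kubeadm init phase preflight \
	  --config /var/tmp/minikube/kubeadm.yaml.new \
	  --ignore-preflight-errors=Swap,NumCPU,Mem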
	
	I0116 02:56:57.651459  487926 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-405494 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.70
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-405494 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
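	The [Unit]/[Service]/[Install] fragment above is the systemd drop-in minikube writes for the kubelet (the 375-byte scp to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf just below), and the trailing "config:" struct echoes the cluster settings those flags were derived from. Whether systemd actually merged the override can be confirmed with systemctl; a brief sketch, assuming shell access to the guest:

	# Show the merged kubelet unit, including the 10-kubeadm.conf drop-in (sketch).
	sudo systemctl daemon-reload
	systemctl cat kubelet
	systemctl show kubelet --property=ExecStart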
	I0116 02:56:57.651615  487926 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0116 02:56:57.660898  487926 command_runner.go:130] > kubeadm
	I0116 02:56:57.660924  487926 command_runner.go:130] > kubectl
	I0116 02:56:57.660931  487926 command_runner.go:130] > kubelet
	I0116 02:56:57.660962  487926 binaries.go:44] Found k8s binaries, skipping transfer
	I0116 02:56:57.661033  487926 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0116 02:56:57.669624  487926 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I0116 02:56:57.685385  487926 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0116 02:56:57.701012  487926 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2100 bytes)
	I0116 02:56:57.717610  487926 ssh_runner.go:195] Run: grep 192.168.39.70	control-plane.minikube.internal$ /etc/hosts
	I0116 02:56:57.722566  487926 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.70	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
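	The bash one-liner above rewrites /etc/hosts idempotently: it filters out any stale control-plane.minikube.internal entry, appends the current one for 192.168.39.70, and copies the temp file back into place with sudo. Whether the guest sees the alias afterwards is a one-line check; a sketch:

	# Confirm the control-plane alias is present inside the guest (sketch).
	minikube ssh -p multinode-405494 -- grep control-plane.minikube.internal /etc/hosts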
	I0116 02:56:57.735818  487926 certs.go:56] Setting up /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/multinode-405494 for IP: 192.168.39.70
	I0116 02:56:57.735874  487926 certs.go:190] acquiring lock for shared ca certs: {Name:mkab5aeea3e0ed69f3592dc8f418e07c6075b130 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:56:57.736087  487926 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17965-468241/.minikube/ca.key
	I0116 02:56:57.736174  487926 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17965-468241/.minikube/proxy-client-ca.key
	I0116 02:56:57.736229  487926 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/multinode-405494/client.key
	I0116 02:56:57.736242  487926 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/multinode-405494/client.crt with IP's: []
	I0116 02:56:57.837866  487926 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/multinode-405494/client.crt ...
	I0116 02:56:57.837914  487926 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/multinode-405494/client.crt: {Name:mk2cf5c5baf8c86b6d77a566b334185d62383cfd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:56:57.838098  487926 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/multinode-405494/client.key ...
	I0116 02:56:57.838111  487926 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/multinode-405494/client.key: {Name:mk3ccf5ff0b090cec8287c7aa859c847b3f8f638 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:56:57.838180  487926 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/multinode-405494/apiserver.key.5467de6f
	I0116 02:56:57.838194  487926 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/multinode-405494/apiserver.crt.5467de6f with IP's: [192.168.39.70 10.96.0.1 127.0.0.1 10.0.0.1]
	I0116 02:56:57.902155  487926 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/multinode-405494/apiserver.crt.5467de6f ...
	I0116 02:56:57.902194  487926 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/multinode-405494/apiserver.crt.5467de6f: {Name:mk6e8608ca413822e383cf518e87f481513d7a9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:56:57.902353  487926 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/multinode-405494/apiserver.key.5467de6f ...
	I0116 02:56:57.902367  487926 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/multinode-405494/apiserver.key.5467de6f: {Name:mk8c40f5681cfdc67bef0977207ed1c53ead9042 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:56:57.902428  487926 certs.go:337] copying /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/multinode-405494/apiserver.crt.5467de6f -> /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/multinode-405494/apiserver.crt
	I0116 02:56:57.902577  487926 certs.go:341] copying /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/multinode-405494/apiserver.key.5467de6f -> /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/multinode-405494/apiserver.key
	I0116 02:56:57.902635  487926 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/multinode-405494/proxy-client.key
	I0116 02:56:57.902650  487926 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/multinode-405494/proxy-client.crt with IP's: []
	I0116 02:56:57.988513  487926 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/multinode-405494/proxy-client.crt ...
	I0116 02:56:57.988551  487926 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/multinode-405494/proxy-client.crt: {Name:mk844ca09e26b00f5aed6029ea912538ea88dfc8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:56:57.988727  487926 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/multinode-405494/proxy-client.key ...
	I0116 02:56:57.988745  487926 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/multinode-405494/proxy-client.key: {Name:mkcbb37e98e0a5972221348834d45551db280e9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:56:57.988822  487926 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/multinode-405494/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0116 02:56:57.988848  487926 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/multinode-405494/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0116 02:56:57.988863  487926 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/multinode-405494/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0116 02:56:57.988875  487926 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/multinode-405494/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0116 02:56:57.988885  487926 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-468241/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0116 02:56:57.988895  487926 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-468241/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0116 02:56:57.988907  487926 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-468241/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0116 02:56:57.988919  487926 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-468241/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0116 02:56:57.988970  487926 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/475478.pem (1338 bytes)
	W0116 02:56:57.989003  487926 certs.go:433] ignoring /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/475478_empty.pem, impossibly tiny 0 bytes
	I0116 02:56:57.989013  487926 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca-key.pem (1679 bytes)
	I0116 02:56:57.989039  487926 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem (1078 bytes)
	I0116 02:56:57.989064  487926 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/cert.pem (1123 bytes)
	I0116 02:56:57.989090  487926 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/key.pem (1679 bytes)
	I0116 02:56:57.989133  487926 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17965-468241/.minikube/files/etc/ssl/certs/4754782.pem (1708 bytes)
	I0116 02:56:57.989157  487926 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/475478.pem -> /usr/share/ca-certificates/475478.pem
	I0116 02:56:57.989173  487926 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-468241/.minikube/files/etc/ssl/certs/4754782.pem -> /usr/share/ca-certificates/4754782.pem
	I0116 02:56:57.989188  487926 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-468241/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0116 02:56:57.989828  487926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/multinode-405494/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0116 02:56:58.013961  487926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/multinode-405494/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0116 02:56:58.037746  487926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/multinode-405494/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0116 02:56:58.061087  487926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/multinode-405494/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0116 02:56:58.084378  487926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0116 02:56:58.106398  487926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0116 02:56:58.129257  487926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0116 02:56:58.152608  487926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0116 02:56:58.175539  487926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/certs/475478.pem --> /usr/share/ca-certificates/475478.pem (1338 bytes)
	I0116 02:56:58.197387  487926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/files/etc/ssl/certs/4754782.pem --> /usr/share/ca-certificates/4754782.pem (1708 bytes)
	I0116 02:56:58.219222  487926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0116 02:56:58.242212  487926 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0116 02:56:58.259918  487926 ssh_runner.go:195] Run: openssl version
	I0116 02:56:58.265651  487926 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0116 02:56:58.266058  487926 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0116 02:56:58.277710  487926 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0116 02:56:58.282568  487926 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jan 16 02:35 /usr/share/ca-certificates/minikubeCA.pem
	I0116 02:56:58.282793  487926 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 16 02:35 /usr/share/ca-certificates/minikubeCA.pem
	I0116 02:56:58.282863  487926 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0116 02:56:58.288555  487926 command_runner.go:130] > b5213941
	I0116 02:56:58.288653  487926 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0116 02:56:58.300952  487926 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/475478.pem && ln -fs /usr/share/ca-certificates/475478.pem /etc/ssl/certs/475478.pem"
	I0116 02:56:58.313584  487926 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/475478.pem
	I0116 02:56:58.318285  487926 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jan 16 02:44 /usr/share/ca-certificates/475478.pem
	I0116 02:56:58.318385  487926 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 16 02:44 /usr/share/ca-certificates/475478.pem
	I0116 02:56:58.318445  487926 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/475478.pem
	I0116 02:56:58.324223  487926 command_runner.go:130] > 51391683
	I0116 02:56:58.324312  487926 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/475478.pem /etc/ssl/certs/51391683.0"
	I0116 02:56:58.335229  487926 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4754782.pem && ln -fs /usr/share/ca-certificates/4754782.pem /etc/ssl/certs/4754782.pem"
	I0116 02:56:58.346246  487926 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4754782.pem
	I0116 02:56:58.350678  487926 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jan 16 02:44 /usr/share/ca-certificates/4754782.pem
	I0116 02:56:58.350956  487926 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 16 02:44 /usr/share/ca-certificates/4754782.pem
	I0116 02:56:58.351022  487926 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4754782.pem
	I0116 02:56:58.356102  487926 command_runner.go:130] > 3ec20f2e
	I0116 02:56:58.356338  487926 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4754782.pem /etc/ssl/certs/3ec20f2e.0"
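	The repeated openssl/ln steps above (for minikubeCA.pem, 475478.pem, and 4754782.pem) set up OpenSSL's hashed CA directory: each certificate copied into /usr/share/ca-certificates is symlinked into /etc/ssl/certs under the name <subject-hash>.0, which is how TLS clients on the node locate the issuing CA at verification time. The layout can be checked by hand with the same tools; a short sketch using the hash logged above:

	# The symlink name must equal the certificate's subject hash (b5213941 above).
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	ls -l /etc/ssl/certs/b5213941.0
	# Verify a certificate issued by minikubeCA against the hashed directory.
	openssl verify -CApath /etc/ssl/certs /var/lib/minikube/certs/apiserver.crt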
	I0116 02:56:58.366713  487926 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0116 02:56:58.370680  487926 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0116 02:56:58.370913  487926 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0116 02:56:58.370963  487926 kubeadm.go:404] StartCluster: {Name:multinode-405494 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-405494 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.70 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 02:56:58.371050  487926 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0116 02:56:58.371111  487926 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0116 02:56:58.412129  487926 cri.go:89] found id: ""
	I0116 02:56:58.412207  487926 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0116 02:56:58.422335  487926 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0116 02:56:58.422368  487926 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0116 02:56:58.422375  487926 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0116 02:56:58.422462  487926 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0116 02:56:58.432606  487926 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0116 02:56:58.442700  487926 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0116 02:56:58.442743  487926 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0116 02:56:58.442753  487926 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0116 02:56:58.442762  487926 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0116 02:56:58.442902  487926 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0116 02:56:58.442947  487926 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
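	The init invocation above pins PATH to the cached v1.28.4 binaries and pre-accepts exactly the preflight findings minikube has already handled (existing manifest and data directories, the static-pod manifest files, port 10250, swap, CPU count, and memory). If an init like this dies partway through, the usual recovery on the guest before re-running it is a kubeadm reset; a hedged sketch, not something this run needed:

	# Roll back a half-initialized control plane before retrying (sketch).
	sudo journalctl -u kubelet --no-pager | tail -n 50   # inspect kubelet errors first
	sudo /var/lib/minikube/binaries/v1.28.4/kubeadm reset -f \
	  --cri-socket unix:///var/run/crio/crio.sock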
	I0116 02:56:58.565279  487926 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0116 02:56:58.565315  487926 command_runner.go:130] > [init] Using Kubernetes version: v1.28.4
	I0116 02:56:58.565368  487926 kubeadm.go:322] [preflight] Running pre-flight checks
	I0116 02:56:58.565374  487926 command_runner.go:130] > [preflight] Running pre-flight checks
	I0116 02:56:58.829240  487926 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0116 02:56:58.829285  487926 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0116 02:56:58.829430  487926 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0116 02:56:58.829447  487926 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0116 02:56:58.829569  487926 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0116 02:56:58.829581  487926 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0116 02:56:59.062522  487926 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0116 02:56:59.062665  487926 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0116 02:56:59.255038  487926 out.go:204]   - Generating certificates and keys ...
	I0116 02:56:59.255212  487926 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0116 02:56:59.255230  487926 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0116 02:56:59.255315  487926 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0116 02:56:59.255325  487926 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0116 02:56:59.255430  487926 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0116 02:56:59.255442  487926 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0116 02:56:59.330570  487926 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0116 02:56:59.330614  487926 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0116 02:56:59.507705  487926 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0116 02:56:59.507745  487926 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0116 02:56:59.637040  487926 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0116 02:56:59.637074  487926 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0116 02:56:59.745363  487926 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0116 02:56:59.745459  487926 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0116 02:56:59.745646  487926 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-405494] and IPs [192.168.39.70 127.0.0.1 ::1]
	I0116 02:56:59.745666  487926 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-405494] and IPs [192.168.39.70 127.0.0.1 ::1]
	I0116 02:57:00.101004  487926 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0116 02:57:00.101044  487926 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0116 02:57:00.101179  487926 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-405494] and IPs [192.168.39.70 127.0.0.1 ::1]
	I0116 02:57:00.101193  487926 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-405494] and IPs [192.168.39.70 127.0.0.1 ::1]
	I0116 02:57:00.309883  487926 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0116 02:57:00.309917  487926 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0116 02:57:00.381218  487926 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0116 02:57:00.381258  487926 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0116 02:57:00.713469  487926 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0116 02:57:00.713537  487926 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0116 02:57:00.713660  487926 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0116 02:57:00.713675  487926 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0116 02:57:00.915829  487926 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0116 02:57:00.915877  487926 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0116 02:57:01.007532  487926 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0116 02:57:01.007566  487926 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0116 02:57:01.142889  487926 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0116 02:57:01.142922  487926 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0116 02:57:01.474820  487926 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0116 02:57:01.474875  487926 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0116 02:57:01.475545  487926 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0116 02:57:01.475561  487926 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0116 02:57:01.478822  487926 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0116 02:57:01.480886  487926 out.go:204]   - Booting up control plane ...
	I0116 02:57:01.478911  487926 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0116 02:57:01.481019  487926 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0116 02:57:01.481045  487926 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0116 02:57:01.481137  487926 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0116 02:57:01.481160  487926 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0116 02:57:01.481234  487926 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0116 02:57:01.481244  487926 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0116 02:57:01.498456  487926 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0116 02:57:01.498525  487926 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0116 02:57:01.498793  487926 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0116 02:57:01.498829  487926 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0116 02:57:01.498882  487926 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0116 02:57:01.498922  487926 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0116 02:57:01.629907  487926 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0116 02:57:01.629938  487926 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0116 02:57:10.133298  487926 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.505280 seconds
	I0116 02:57:10.133340  487926 command_runner.go:130] > [apiclient] All control plane components are healthy after 8.505280 seconds
	I0116 02:57:10.133456  487926 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0116 02:57:10.133467  487926 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0116 02:57:10.159011  487926 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0116 02:57:10.159018  487926 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0116 02:57:10.695958  487926 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0116 02:57:10.695991  487926 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0116 02:57:10.696188  487926 kubeadm.go:322] [mark-control-plane] Marking the node multinode-405494 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0116 02:57:10.696198  487926 command_runner.go:130] > [mark-control-plane] Marking the node multinode-405494 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0116 02:57:11.216389  487926 kubeadm.go:322] [bootstrap-token] Using token: kslim2.3zz5ejut8zg9igbw
	I0116 02:57:11.218199  487926 out.go:204]   - Configuring RBAC rules ...
	I0116 02:57:11.216461  487926 command_runner.go:130] > [bootstrap-token] Using token: kslim2.3zz5ejut8zg9igbw
	I0116 02:57:11.218330  487926 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0116 02:57:11.218349  487926 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0116 02:57:11.227569  487926 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0116 02:57:11.227604  487926 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0116 02:57:11.237358  487926 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0116 02:57:11.237388  487926 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0116 02:57:11.242260  487926 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0116 02:57:11.242289  487926 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0116 02:57:11.247361  487926 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0116 02:57:11.247377  487926 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0116 02:57:11.263155  487926 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0116 02:57:11.263169  487926 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0116 02:57:11.279803  487926 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0116 02:57:11.279829  487926 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0116 02:57:11.532524  487926 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0116 02:57:11.532562  487926 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0116 02:57:11.637337  487926 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0116 02:57:11.637376  487926 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0116 02:57:11.637381  487926 kubeadm.go:322] 
	I0116 02:57:11.637437  487926 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0116 02:57:11.637447  487926 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0116 02:57:11.637451  487926 kubeadm.go:322] 
	I0116 02:57:11.637537  487926 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0116 02:57:11.637544  487926 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0116 02:57:11.637548  487926 kubeadm.go:322] 
	I0116 02:57:11.637580  487926 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0116 02:57:11.637591  487926 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0116 02:57:11.637674  487926 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0116 02:57:11.637699  487926 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0116 02:57:11.637741  487926 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0116 02:57:11.637747  487926 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0116 02:57:11.637751  487926 kubeadm.go:322] 
	I0116 02:57:11.637794  487926 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0116 02:57:11.637800  487926 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0116 02:57:11.637804  487926 kubeadm.go:322] 
	I0116 02:57:11.637870  487926 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0116 02:57:11.637877  487926 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0116 02:57:11.637881  487926 kubeadm.go:322] 
	I0116 02:57:11.637922  487926 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0116 02:57:11.637928  487926 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0116 02:57:11.638048  487926 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0116 02:57:11.638055  487926 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0116 02:57:11.638110  487926 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0116 02:57:11.638116  487926 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0116 02:57:11.638119  487926 kubeadm.go:322] 
	I0116 02:57:11.638192  487926 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0116 02:57:11.638199  487926 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0116 02:57:11.638260  487926 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0116 02:57:11.638271  487926 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0116 02:57:11.638275  487926 kubeadm.go:322] 
	I0116 02:57:11.638375  487926 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token kslim2.3zz5ejut8zg9igbw \
	I0116 02:57:11.638390  487926 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token kslim2.3zz5ejut8zg9igbw \
	I0116 02:57:11.638495  487926 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:ab75761e1119a1709a216b682cd7b1dcce0641115df5f236b54b16e4f66aa044 \
	I0116 02:57:11.638504  487926 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:ab75761e1119a1709a216b682cd7b1dcce0641115df5f236b54b16e4f66aa044 \
	I0116 02:57:11.638524  487926 kubeadm.go:322] 	--control-plane 
	I0116 02:57:11.638534  487926 command_runner.go:130] > 	--control-plane 
	I0116 02:57:11.638542  487926 kubeadm.go:322] 
	I0116 02:57:11.638653  487926 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0116 02:57:11.638684  487926 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0116 02:57:11.638697  487926 kubeadm.go:322] 
	I0116 02:57:11.638834  487926 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token kslim2.3zz5ejut8zg9igbw \
	I0116 02:57:11.638848  487926 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token kslim2.3zz5ejut8zg9igbw \
	I0116 02:57:11.638999  487926 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:ab75761e1119a1709a216b682cd7b1dcce0641115df5f236b54b16e4f66aa044 
	I0116 02:57:11.639011  487926 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:ab75761e1119a1709a216b682cd7b1dcce0641115df5f236b54b16e4f66aa044 
	I0116 02:57:11.639722  487926 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0116 02:57:11.639740  487926 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
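	The kubeadm join commands above embed the bootstrap token kslim2.3zz5ejut8zg9igbw, which kubeadm expires after 24 hours by default; if a node were added later than that, a fresh join command could be regenerated on the control plane with the standard invocation below (illustrative only, not executed by this test):
	
	  kubeadm token create --print-join-command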
	I0116 02:57:11.639754  487926 cni.go:84] Creating CNI manager for ""
	I0116 02:57:11.639763  487926 cni.go:136] 1 nodes found, recommending kindnet
	I0116 02:57:11.642039  487926 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0116 02:57:11.643734  487926 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0116 02:57:11.655718  487926 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0116 02:57:11.655756  487926 command_runner.go:130] >   Size: 2685752   	Blocks: 5248       IO Block: 4096   regular file
	I0116 02:57:11.655765  487926 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I0116 02:57:11.655774  487926 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0116 02:57:11.655783  487926 command_runner.go:130] > Access: 2024-01-16 02:56:39.075162599 +0000
	I0116 02:57:11.655791  487926 command_runner.go:130] > Modify: 2023-12-28 22:53:36.000000000 +0000
	I0116 02:57:11.655800  487926 command_runner.go:130] > Change: 2024-01-16 02:56:37.085162599 +0000
	I0116 02:57:11.655806  487926 command_runner.go:130] >  Birth: -
	I0116 02:57:11.656200  487926 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0116 02:57:11.656225  487926 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0116 02:57:11.697948  487926 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0116 02:57:12.730500  487926 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0116 02:57:12.740802  487926 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0116 02:57:12.751538  487926 command_runner.go:130] > serviceaccount/kindnet created
	I0116 02:57:12.768001  487926 command_runner.go:130] > daemonset.apps/kindnet created
	I0116 02:57:12.770532  487926 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.072541077s)
	I0116 02:57:12.770584  487926 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0116 02:57:12.770681  487926 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:57:12.770729  487926 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=6e8fa5f64d0e7272be43ff25ed3826261f0a2578 minikube.k8s.io/name=multinode-405494 minikube.k8s.io/updated_at=2024_01_16T02_57_12_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:57:12.800823  487926 command_runner.go:130] > -16
	I0116 02:57:12.800894  487926 ops.go:34] apiserver oom_adj: -16
	I0116 02:57:12.951302  487926 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0116 02:57:12.951442  487926 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:57:12.951487  487926 command_runner.go:130] > node/multinode-405494 labeled
	I0116 02:57:13.117536  487926 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0116 02:57:13.452450  487926 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:57:13.539089  487926 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0116 02:57:13.951611  487926 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:57:14.045036  487926 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0116 02:57:14.451621  487926 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:57:14.543340  487926 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0116 02:57:14.951868  487926 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:57:15.039422  487926 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0116 02:57:15.452029  487926 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:57:15.553234  487926 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0116 02:57:15.951800  487926 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:57:16.036953  487926 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0116 02:57:16.452331  487926 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:57:16.540315  487926 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0116 02:57:16.952067  487926 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:57:17.048867  487926 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0116 02:57:17.451518  487926 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:57:17.546798  487926 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0116 02:57:17.951589  487926 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:57:18.040410  487926 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0116 02:57:18.451887  487926 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:57:18.551381  487926 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0116 02:57:18.952093  487926 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:57:19.043778  487926 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0116 02:57:19.452391  487926 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:57:19.543660  487926 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0116 02:57:19.952286  487926 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:57:20.038633  487926 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0116 02:57:20.452304  487926 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:57:20.540160  487926 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0116 02:57:20.951891  487926 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:57:21.047630  487926 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0116 02:57:21.452369  487926 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:57:21.545511  487926 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0116 02:57:21.952177  487926 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:57:22.065181  487926 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0116 02:57:22.451717  487926 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:57:22.537495  487926 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0116 02:57:22.952095  487926 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:57:23.056184  487926 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0116 02:57:23.451497  487926 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:57:23.590359  487926 command_runner.go:130] > NAME      SECRETS   AGE
	I0116 02:57:23.590395  487926 command_runner.go:130] > default   0         0s
	I0116 02:57:23.590436  487926 kubeadm.go:1088] duration metric: took 10.819809458s to wait for elevateKubeSystemPrivileges.
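	The repeated 'serviceaccounts "default" not found' errors above are the expected polling while kubeadm finishes populating the default namespace; the wait is roughly equivalent to this shell sketch, using the same binary and kubeconfig paths as the run:
	
	  until sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	    sleep 0.5   # retry until the "default" ServiceAccount exists
	  done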
	I0116 02:57:23.590464  487926 kubeadm.go:406] StartCluster complete in 25.219507157s
	I0116 02:57:23.590542  487926 settings.go:142] acquiring lock: {Name:mk3ded99c06d285b174e76622bb0741c8dd2d8fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:57:23.590643  487926 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17965-468241/kubeconfig
	I0116 02:57:23.591754  487926 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17965-468241/kubeconfig: {Name:mkac3c63bbcba9b4fa1bea3480797757d703fb9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:57:23.592040  487926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0116 02:57:23.592227  487926 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0116 02:57:23.592327  487926 addons.go:69] Setting storage-provisioner=true in profile "multinode-405494"
	I0116 02:57:23.592356  487926 addons.go:234] Setting addon storage-provisioner=true in "multinode-405494"
	I0116 02:57:23.592357  487926 addons.go:69] Setting default-storageclass=true in profile "multinode-405494"
	I0116 02:57:23.592396  487926 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-405494"
	I0116 02:57:23.592418  487926 host.go:66] Checking if "multinode-405494" exists ...
	I0116 02:57:23.592417  487926 config.go:182] Loaded profile config "multinode-405494": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 02:57:23.592509  487926 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17965-468241/kubeconfig
	I0116 02:57:23.592856  487926 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 02:57:23.592902  487926 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:57:23.592957  487926 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 02:57:23.593007  487926 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:57:23.592946  487926 kapi.go:59] client config for multinode-405494: &rest.Config{Host:"https://192.168.39.70:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17965-468241/.minikube/profiles/multinode-405494/client.crt", KeyFile:"/home/jenkins/minikube-integration/17965-468241/.minikube/profiles/multinode-405494/client.key", CAFile:"/home/jenkins/minikube-integration/17965-468241/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c19be0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0116 02:57:23.593866  487926 cert_rotation.go:137] Starting client certificate rotation controller
	I0116 02:57:23.594209  487926 round_trippers.go:463] GET https://192.168.39.70:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0116 02:57:23.594229  487926 round_trippers.go:469] Request Headers:
	I0116 02:57:23.594241  487926 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:57:23.594249  487926 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:57:23.614146  487926 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33295
	I0116 02:57:23.614216  487926 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39583
	I0116 02:57:23.614290  487926 round_trippers.go:574] Response Status: 200 OK in 20 milliseconds
	I0116 02:57:23.614308  487926 round_trippers.go:577] Response Headers:
	I0116 02:57:23.614322  487926 round_trippers.go:580]     Content-Length: 291
	I0116 02:57:23.614331  487926 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:57:23 GMT
	I0116 02:57:23.614341  487926 round_trippers.go:580]     Audit-Id: 716d407e-3c4a-452d-b4ac-0932cabc8e72
	I0116 02:57:23.614351  487926 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:57:23.614361  487926 round_trippers.go:580]     Content-Type: application/json
	I0116 02:57:23.614369  487926 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 02:57:23.614388  487926 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 02:57:23.614673  487926 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:57:23.614748  487926 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:57:23.615244  487926 main.go:141] libmachine: Using API Version  1
	I0116 02:57:23.615249  487926 main.go:141] libmachine: Using API Version  1
	I0116 02:57:23.615265  487926 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:57:23.615301  487926 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:57:23.615647  487926 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:57:23.615729  487926 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:57:23.615863  487926 main.go:141] libmachine: (multinode-405494) Calling .GetState
	I0116 02:57:23.616449  487926 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 02:57:23.616501  487926 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:57:23.618629  487926 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17965-468241/kubeconfig
	I0116 02:57:23.619003  487926 kapi.go:59] client config for multinode-405494: &rest.Config{Host:"https://192.168.39.70:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17965-468241/.minikube/profiles/multinode-405494/client.crt", KeyFile:"/home/jenkins/minikube-integration/17965-468241/.minikube/profiles/multinode-405494/client.key", CAFile:"/home/jenkins/minikube-integration/17965-468241/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c19be0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0116 02:57:23.619112  487926 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"dd77c785-c90f-4789-97cb-f593b7a7a7e2","resourceVersion":"348","creationTimestamp":"2024-01-16T02:57:11Z"},"spec":{"replicas":2},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0116 02:57:23.619414  487926 addons.go:234] Setting addon default-storageclass=true in "multinode-405494"
	I0116 02:57:23.619469  487926 host.go:66] Checking if "multinode-405494" exists ...
	I0116 02:57:23.619877  487926 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"dd77c785-c90f-4789-97cb-f593b7a7a7e2","resourceVersion":"348","creationTimestamp":"2024-01-16T02:57:11Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0116 02:57:23.619954  487926 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 02:57:23.619961  487926 round_trippers.go:463] PUT https://192.168.39.70:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0116 02:57:23.619975  487926 round_trippers.go:469] Request Headers:
	I0116 02:57:23.619993  487926 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:57:23.620003  487926 round_trippers.go:473]     Content-Type: application/json
	I0116 02:57:23.620008  487926 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:57:23.620012  487926 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:57:23.633798  487926 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44717
	I0116 02:57:23.634303  487926 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:57:23.634885  487926 main.go:141] libmachine: Using API Version  1
	I0116 02:57:23.634916  487926 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:57:23.635277  487926 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:57:23.635460  487926 main.go:141] libmachine: (multinode-405494) Calling .GetState
	I0116 02:57:23.635658  487926 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39981
	I0116 02:57:23.636106  487926 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:57:23.640008  487926 main.go:141] libmachine: (multinode-405494) Calling .DriverName
	I0116 02:57:23.640048  487926 main.go:141] libmachine: Using API Version  1
	I0116 02:57:23.640097  487926 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:57:23.642741  487926 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 02:57:23.640888  487926 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:57:23.644279  487926 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0116 02:57:23.644301  487926 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0116 02:57:23.644330  487926 main.go:141] libmachine: (multinode-405494) Calling .GetSSHHostname
	I0116 02:57:23.644867  487926 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 02:57:23.644917  487926 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:57:23.647890  487926 main.go:141] libmachine: (multinode-405494) DBG | domain multinode-405494 has defined MAC address 52:54:00:b0:49:7b in network mk-multinode-405494
	I0116 02:57:23.648456  487926 main.go:141] libmachine: (multinode-405494) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:49:7b", ip: ""} in network mk-multinode-405494: {Iface:virbr1 ExpiryTime:2024-01-16 03:56:41 +0000 UTC Type:0 Mac:52:54:00:b0:49:7b Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:multinode-405494 Clientid:01:52:54:00:b0:49:7b}
	I0116 02:57:23.648486  487926 main.go:141] libmachine: (multinode-405494) DBG | domain multinode-405494 has defined IP address 192.168.39.70 and MAC address 52:54:00:b0:49:7b in network mk-multinode-405494
	I0116 02:57:23.648635  487926 main.go:141] libmachine: (multinode-405494) Calling .GetSSHPort
	I0116 02:57:23.648847  487926 main.go:141] libmachine: (multinode-405494) Calling .GetSSHKeyPath
	I0116 02:57:23.649028  487926 main.go:141] libmachine: (multinode-405494) Calling .GetSSHUsername
	I0116 02:57:23.649307  487926 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/multinode-405494/id_rsa Username:docker}
	I0116 02:57:23.661791  487926 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33879
	I0116 02:57:23.662264  487926 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:57:23.662811  487926 main.go:141] libmachine: Using API Version  1
	I0116 02:57:23.662843  487926 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:57:23.663366  487926 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:57:23.663575  487926 main.go:141] libmachine: (multinode-405494) Calling .GetState
	I0116 02:57:23.665021  487926 round_trippers.go:574] Response Status: 200 OK in 44 milliseconds
	I0116 02:57:23.665039  487926 round_trippers.go:577] Response Headers:
	I0116 02:57:23.665049  487926 round_trippers.go:580]     Audit-Id: a6c3ba9f-78b9-4b7f-8b57-041b66d5e02e
	I0116 02:57:23.665058  487926 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:57:23.665067  487926 round_trippers.go:580]     Content-Type: application/json
	I0116 02:57:23.665079  487926 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 02:57:23.665092  487926 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 02:57:23.665106  487926 round_trippers.go:580]     Content-Length: 291
	I0116 02:57:23.665115  487926 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:57:23 GMT
	I0116 02:57:23.665544  487926 main.go:141] libmachine: (multinode-405494) Calling .DriverName
	I0116 02:57:23.665848  487926 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0116 02:57:23.665864  487926 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0116 02:57:23.665887  487926 main.go:141] libmachine: (multinode-405494) Calling .GetSSHHostname
	I0116 02:57:23.667954  487926 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"dd77c785-c90f-4789-97cb-f593b7a7a7e2","resourceVersion":"367","creationTimestamp":"2024-01-16T02:57:11Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0116 02:57:23.668769  487926 main.go:141] libmachine: (multinode-405494) DBG | domain multinode-405494 has defined MAC address 52:54:00:b0:49:7b in network mk-multinode-405494
	I0116 02:57:23.669134  487926 main.go:141] libmachine: (multinode-405494) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:49:7b", ip: ""} in network mk-multinode-405494: {Iface:virbr1 ExpiryTime:2024-01-16 03:56:41 +0000 UTC Type:0 Mac:52:54:00:b0:49:7b Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:multinode-405494 Clientid:01:52:54:00:b0:49:7b}
	I0116 02:57:23.669168  487926 main.go:141] libmachine: (multinode-405494) DBG | domain multinode-405494 has defined IP address 192.168.39.70 and MAC address 52:54:00:b0:49:7b in network mk-multinode-405494
	I0116 02:57:23.669285  487926 main.go:141] libmachine: (multinode-405494) Calling .GetSSHPort
	I0116 02:57:23.669487  487926 main.go:141] libmachine: (multinode-405494) Calling .GetSSHKeyPath
	I0116 02:57:23.669657  487926 main.go:141] libmachine: (multinode-405494) Calling .GetSSHUsername
	I0116 02:57:23.669806  487926 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/multinode-405494/id_rsa Username:docker}
	I0116 02:57:23.804749  487926 command_runner.go:130] > apiVersion: v1
	I0116 02:57:23.804781  487926 command_runner.go:130] > data:
	I0116 02:57:23.804788  487926 command_runner.go:130] >   Corefile: |
	I0116 02:57:23.804793  487926 command_runner.go:130] >     .:53 {
	I0116 02:57:23.804800  487926 command_runner.go:130] >         errors
	I0116 02:57:23.804808  487926 command_runner.go:130] >         health {
	I0116 02:57:23.804814  487926 command_runner.go:130] >            lameduck 5s
	I0116 02:57:23.804820  487926 command_runner.go:130] >         }
	I0116 02:57:23.804826  487926 command_runner.go:130] >         ready
	I0116 02:57:23.804835  487926 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0116 02:57:23.804852  487926 command_runner.go:130] >            pods insecure
	I0116 02:57:23.804863  487926 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0116 02:57:23.804872  487926 command_runner.go:130] >            ttl 30
	I0116 02:57:23.804877  487926 command_runner.go:130] >         }
	I0116 02:57:23.804884  487926 command_runner.go:130] >         prometheus :9153
	I0116 02:57:23.804891  487926 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0116 02:57:23.804900  487926 command_runner.go:130] >            max_concurrent 1000
	I0116 02:57:23.804906  487926 command_runner.go:130] >         }
	I0116 02:57:23.804914  487926 command_runner.go:130] >         cache 30
	I0116 02:57:23.804920  487926 command_runner.go:130] >         loop
	I0116 02:57:23.804929  487926 command_runner.go:130] >         reload
	I0116 02:57:23.804935  487926 command_runner.go:130] >         loadbalance
	I0116 02:57:23.804941  487926 command_runner.go:130] >     }
	I0116 02:57:23.804948  487926 command_runner.go:130] > kind: ConfigMap
	I0116 02:57:23.804956  487926 command_runner.go:130] > metadata:
	I0116 02:57:23.804968  487926 command_runner.go:130] >   creationTimestamp: "2024-01-16T02:57:11Z"
	I0116 02:57:23.804977  487926 command_runner.go:130] >   name: coredns
	I0116 02:57:23.804982  487926 command_runner.go:130] >   namespace: kube-system
	I0116 02:57:23.805020  487926 command_runner.go:130] >   resourceVersion: "271"
	I0116 02:57:23.805041  487926 command_runner.go:130] >   uid: 10412523-6dfe-4aad-b001-dd354ac18003
	I0116 02:57:23.807255  487926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0116 02:57:23.819744  487926 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0116 02:57:23.855871  487926 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0116 02:57:24.094546  487926 round_trippers.go:463] GET https://192.168.39.70:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0116 02:57:24.094579  487926 round_trippers.go:469] Request Headers:
	I0116 02:57:24.094592  487926 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:57:24.094602  487926 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:57:24.117593  487926 round_trippers.go:574] Response Status: 200 OK in 22 milliseconds
	I0116 02:57:24.117714  487926 round_trippers.go:577] Response Headers:
	I0116 02:57:24.117753  487926 round_trippers.go:580]     Content-Length: 291
	I0116 02:57:24.117764  487926 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:57:24 GMT
	I0116 02:57:24.117770  487926 round_trippers.go:580]     Audit-Id: afa7206f-d295-41c5-9f43-f19b05ab3083
	I0116 02:57:24.117776  487926 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:57:24.117783  487926 round_trippers.go:580]     Content-Type: application/json
	I0116 02:57:24.117791  487926 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 02:57:24.117805  487926 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 02:57:24.117841  487926 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"dd77c785-c90f-4789-97cb-f593b7a7a7e2","resourceVersion":"382","creationTimestamp":"2024-01-16T02:57:11Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0116 02:57:24.118003  487926 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-405494" context rescaled to 1 replicas
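	The scale-subresource PUT above lowers the coredns deployment's spec.replicas from 2 to 1; as a rough kubectl equivalent (illustrative only, the test drives the API directly):
	
	  kubectl -n kube-system scale deployment coredns --replicas=1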
	I0116 02:57:24.118044  487926 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.70 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0116 02:57:24.121168  487926 out.go:177] * Verifying Kubernetes components...
	I0116 02:57:24.122655  487926 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 02:57:24.630772  487926 command_runner.go:130] > configmap/coredns replaced
	I0116 02:57:24.630831  487926 start.go:929] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
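	Given the sed expressions in the replace command above, the patched Corefile should now read roughly as follows, with the added log directive and hosts block the only changes from the ConfigMap dumped earlier:
	
	  .:53 {
	      log
	      errors
	      health {
	         lameduck 5s
	      }
	      ready
	      kubernetes cluster.local in-addr.arpa ip6.arpa {
	         pods insecure
	         fallthrough in-addr.arpa ip6.arpa
	         ttl 30
	      }
	      prometheus :9153
	      hosts {
	         192.168.39.1 host.minikube.internal
	         fallthrough
	      }
	      forward . /etc/resolv.conf {
	         max_concurrent 1000
	      }
	      cache 30
	      loop
	      reload
	      loadbalance
	  }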
	I0116 02:57:24.780411  487926 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0116 02:57:24.790219  487926 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0116 02:57:24.803999  487926 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0116 02:57:24.815561  487926 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0116 02:57:24.825571  487926 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0116 02:57:24.841882  487926 command_runner.go:130] > pod/storage-provisioner created
	I0116 02:57:24.844703  487926 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0116 02:57:24.844775  487926 main.go:141] libmachine: Making call to close driver server
	I0116 02:57:24.844794  487926 main.go:141] libmachine: (multinode-405494) Calling .Close
	I0116 02:57:24.844796  487926 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.025006949s)
	I0116 02:57:24.844845  487926 main.go:141] libmachine: Making call to close driver server
	I0116 02:57:24.844862  487926 main.go:141] libmachine: (multinode-405494) Calling .Close
	I0116 02:57:24.845167  487926 main.go:141] libmachine: Successfully made call to close driver server
	I0116 02:57:24.845195  487926 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 02:57:24.845220  487926 main.go:141] libmachine: Successfully made call to close driver server
	I0116 02:57:24.845244  487926 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 02:57:24.845255  487926 main.go:141] libmachine: Making call to close driver server
	I0116 02:57:24.845275  487926 main.go:141] libmachine: (multinode-405494) Calling .Close
	I0116 02:57:24.845336  487926 main.go:141] libmachine: Making call to close driver server
	I0116 02:57:24.845350  487926 main.go:141] libmachine: (multinode-405494) Calling .Close
	I0116 02:57:24.845496  487926 main.go:141] libmachine: Successfully made call to close driver server
	I0116 02:57:24.845511  487926 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 02:57:24.845573  487926 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17965-468241/kubeconfig
	I0116 02:57:24.845932  487926 kapi.go:59] client config for multinode-405494: &rest.Config{Host:"https://192.168.39.70:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17965-468241/.minikube/profiles/multinode-405494/client.crt", KeyFile:"/home/jenkins/minikube-integration/17965-468241/.minikube/profiles/multinode-405494/client.key", CAFile:"/home/jenkins/minikube-integration/17965-468241/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c19be0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0116 02:57:24.846315  487926 node_ready.go:35] waiting up to 6m0s for node "multinode-405494" to be "Ready" ...
	I0116 02:57:24.846438  487926 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/multinode-405494
	I0116 02:57:24.846449  487926 round_trippers.go:469] Request Headers:
	I0116 02:57:24.846460  487926 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:57:24.846473  487926 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:57:24.846767  487926 main.go:141] libmachine: Successfully made call to close driver server
	I0116 02:57:24.846789  487926 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 02:57:24.846870  487926 round_trippers.go:463] GET https://192.168.39.70:8443/apis/storage.k8s.io/v1/storageclasses
	I0116 02:57:24.846885  487926 round_trippers.go:469] Request Headers:
	I0116 02:57:24.846894  487926 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:57:24.846902  487926 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:57:24.856670  487926 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0116 02:57:24.856695  487926 round_trippers.go:577] Response Headers:
	I0116 02:57:24.856703  487926 round_trippers.go:580]     Audit-Id: bb1be42b-e4c8-4e83-9490-f56de16b618e
	I0116 02:57:24.856709  487926 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:57:24.856714  487926 round_trippers.go:580]     Content-Type: application/json
	I0116 02:57:24.856719  487926 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 02:57:24.856724  487926 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 02:57:24.856729  487926 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:57:24 GMT
	I0116 02:57:24.856854  487926 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-405494","uid":"396eb6bf-1dc3-46c5-8016-dbf8af754fa2","resourceVersion":"352","creationTimestamp":"2024-01-16T02:57:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-405494","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-405494","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_57_12_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:57:07Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I0116 02:57:24.857447  487926 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0116 02:57:24.857467  487926 round_trippers.go:577] Response Headers:
	I0116 02:57:24.857474  487926 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 02:57:24.857480  487926 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 02:57:24.857486  487926 round_trippers.go:580]     Content-Length: 1273
	I0116 02:57:24.857492  487926 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:57:24 GMT
	I0116 02:57:24.857497  487926 round_trippers.go:580]     Audit-Id: 1cb24b92-2eae-4607-ae56-9ab300b9f503
	I0116 02:57:24.857520  487926 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:57:24.857534  487926 round_trippers.go:580]     Content-Type: application/json
	I0116 02:57:24.857592  487926 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"402"},"items":[{"metadata":{"name":"standard","uid":"398e7f72-846d-4ae8-8f5c-d81f92ebcc39","resourceVersion":"393","creationTimestamp":"2024-01-16T02:57:24Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-01-16T02:57:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0116 02:57:24.858002  487926 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"398e7f72-846d-4ae8-8f5c-d81f92ebcc39","resourceVersion":"393","creationTimestamp":"2024-01-16T02:57:24Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-01-16T02:57:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0116 02:57:24.858061  487926 round_trippers.go:463] PUT https://192.168.39.70:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0116 02:57:24.858069  487926 round_trippers.go:469] Request Headers:
	I0116 02:57:24.858077  487926 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:57:24.858083  487926 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:57:24.858092  487926 round_trippers.go:473]     Content-Type: application/json
	I0116 02:57:24.862292  487926 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0116 02:57:24.862315  487926 round_trippers.go:577] Response Headers:
	I0116 02:57:24.862323  487926 round_trippers.go:580]     Content-Type: application/json
	I0116 02:57:24.862331  487926 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 02:57:24.862337  487926 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 02:57:24.862342  487926 round_trippers.go:580]     Content-Length: 1220
	I0116 02:57:24.862347  487926 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:57:24 GMT
	I0116 02:57:24.862352  487926 round_trippers.go:580]     Audit-Id: fcacd170-9a94-484d-8ec5-5f5276f66151
	I0116 02:57:24.862357  487926 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:57:24.862382  487926 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"398e7f72-846d-4ae8-8f5c-d81f92ebcc39","resourceVersion":"393","creationTimestamp":"2024-01-16T02:57:24Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-01-16T02:57:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0116 02:57:24.862526  487926 main.go:141] libmachine: Making call to close driver server
	I0116 02:57:24.862551  487926 main.go:141] libmachine: (multinode-405494) Calling .Close
	I0116 02:57:24.862851  487926 main.go:141] libmachine: Successfully made call to close driver server
	I0116 02:57:24.862870  487926 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 02:57:24.864969  487926 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0116 02:57:24.866446  487926 addons.go:505] enable addons completed in 1.274222263s: enabled=[storage-provisioner default-storageclass]
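	With both addons reported enabled, the default StorageClass created above can be checked with a standard command; the "standard" class should show "(default)" and the k8s.io/minikube-hostpath provisioner seen in the API responses (illustrative, not run by the test):
	
	  kubectl get storageclass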
	I0116 02:57:25.347441  487926 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/multinode-405494
	I0116 02:57:25.347469  487926 round_trippers.go:469] Request Headers:
	I0116 02:57:25.347477  487926 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:57:25.347483  487926 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:57:25.350893  487926 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 02:57:25.350921  487926 round_trippers.go:577] Response Headers:
	I0116 02:57:25.350929  487926 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 02:57:25.350936  487926 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 02:57:25.350941  487926 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:57:25 GMT
	I0116 02:57:25.350946  487926 round_trippers.go:580]     Audit-Id: 7170b452-d2e6-4661-875d-a7532b0860f7
	I0116 02:57:25.350951  487926 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:57:25.350957  487926 round_trippers.go:580]     Content-Type: application/json
	I0116 02:57:25.351090  487926 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-405494","uid":"396eb6bf-1dc3-46c5-8016-dbf8af754fa2","resourceVersion":"352","creationTimestamp":"2024-01-16T02:57:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-405494","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-405494","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_57_12_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:57:07Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I0116 02:57:25.846650  487926 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/multinode-405494
	I0116 02:57:25.846681  487926 round_trippers.go:469] Request Headers:
	I0116 02:57:25.846691  487926 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:57:25.846697  487926 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:57:25.849475  487926 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:57:25.849506  487926 round_trippers.go:577] Response Headers:
	I0116 02:57:25.849525  487926 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:57:25 GMT
	I0116 02:57:25.849534  487926 round_trippers.go:580]     Audit-Id: fdf94960-86a7-4676-90f3-b97e0d38d1fa
	I0116 02:57:25.849545  487926 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:57:25.849554  487926 round_trippers.go:580]     Content-Type: application/json
	I0116 02:57:25.849563  487926 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 02:57:25.849571  487926 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 02:57:25.849727  487926 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-405494","uid":"396eb6bf-1dc3-46c5-8016-dbf8af754fa2","resourceVersion":"352","creationTimestamp":"2024-01-16T02:57:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-405494","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-405494","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_57_12_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:57:07Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I0116 02:57:26.346950  487926 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/multinode-405494
	I0116 02:57:26.346979  487926 round_trippers.go:469] Request Headers:
	I0116 02:57:26.346987  487926 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:57:26.346993  487926 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:57:26.350203  487926 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 02:57:26.350226  487926 round_trippers.go:577] Response Headers:
	I0116 02:57:26.350233  487926 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 02:57:26.350238  487926 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 02:57:26.350244  487926 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:57:26 GMT
	I0116 02:57:26.350258  487926 round_trippers.go:580]     Audit-Id: da2a77b3-c019-4a72-b3b1-b52ec2fe52df
	I0116 02:57:26.350269  487926 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:57:26.350276  487926 round_trippers.go:580]     Content-Type: application/json
	I0116 02:57:26.350528  487926 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-405494","uid":"396eb6bf-1dc3-46c5-8016-dbf8af754fa2","resourceVersion":"352","creationTimestamp":"2024-01-16T02:57:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-405494","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-405494","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_57_12_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:57:07Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I0116 02:57:26.846665  487926 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/multinode-405494
	I0116 02:57:26.846711  487926 round_trippers.go:469] Request Headers:
	I0116 02:57:26.846722  487926 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:57:26.846731  487926 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:57:26.849966  487926 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 02:57:26.850001  487926 round_trippers.go:577] Response Headers:
	I0116 02:57:26.850014  487926 round_trippers.go:580]     Audit-Id: 3f2e3ee0-9bf7-41fa-b3eb-63dca2549f25
	I0116 02:57:26.850022  487926 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:57:26.850031  487926 round_trippers.go:580]     Content-Type: application/json
	I0116 02:57:26.850041  487926 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 02:57:26.850053  487926 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 02:57:26.850059  487926 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:57:26 GMT
	I0116 02:57:26.850295  487926 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-405494","uid":"396eb6bf-1dc3-46c5-8016-dbf8af754fa2","resourceVersion":"352","creationTimestamp":"2024-01-16T02:57:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-405494","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-405494","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_57_12_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:57:07Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I0116 02:57:26.850765  487926 node_ready.go:58] node "multinode-405494" has status "Ready":"False"
	I0116 02:57:27.347010  487926 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/multinode-405494
	I0116 02:57:27.347034  487926 round_trippers.go:469] Request Headers:
	I0116 02:57:27.347043  487926 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:57:27.347049  487926 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:57:27.350111  487926 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 02:57:27.350144  487926 round_trippers.go:577] Response Headers:
	I0116 02:57:27.350155  487926 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:57:27 GMT
	I0116 02:57:27.350164  487926 round_trippers.go:580]     Audit-Id: 7339e20d-d647-4728-a5fc-8568f7bd261e
	I0116 02:57:27.350172  487926 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:57:27.350181  487926 round_trippers.go:580]     Content-Type: application/json
	I0116 02:57:27.350190  487926 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 02:57:27.350207  487926 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 02:57:27.351096  487926 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-405494","uid":"396eb6bf-1dc3-46c5-8016-dbf8af754fa2","resourceVersion":"352","creationTimestamp":"2024-01-16T02:57:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-405494","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-405494","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_57_12_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:57:07Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I0116 02:57:27.846681  487926 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/multinode-405494
	I0116 02:57:27.846709  487926 round_trippers.go:469] Request Headers:
	I0116 02:57:27.846718  487926 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:57:27.846724  487926 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:57:27.849223  487926 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:57:27.849249  487926 round_trippers.go:577] Response Headers:
	I0116 02:57:27.849260  487926 round_trippers.go:580]     Audit-Id: d227f66b-d64c-4986-8056-736b1effcebe
	I0116 02:57:27.849266  487926 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:57:27.849271  487926 round_trippers.go:580]     Content-Type: application/json
	I0116 02:57:27.849280  487926 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 02:57:27.849286  487926 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 02:57:27.849292  487926 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:57:27 GMT
	I0116 02:57:27.849596  487926 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-405494","uid":"396eb6bf-1dc3-46c5-8016-dbf8af754fa2","resourceVersion":"352","creationTimestamp":"2024-01-16T02:57:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-405494","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-405494","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_57_12_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:57:07Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I0116 02:57:28.347366  487926 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/multinode-405494
	I0116 02:57:28.347393  487926 round_trippers.go:469] Request Headers:
	I0116 02:57:28.347402  487926 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:57:28.347408  487926 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:57:28.352219  487926 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0116 02:57:28.352244  487926 round_trippers.go:577] Response Headers:
	I0116 02:57:28.352252  487926 round_trippers.go:580]     Audit-Id: 39e13148-633f-4c3b-9127-4fca4bc47a7f
	I0116 02:57:28.352258  487926 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:57:28.352264  487926 round_trippers.go:580]     Content-Type: application/json
	I0116 02:57:28.352275  487926 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 02:57:28.352285  487926 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 02:57:28.352293  487926 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:57:28 GMT
	I0116 02:57:28.352677  487926 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-405494","uid":"396eb6bf-1dc3-46c5-8016-dbf8af754fa2","resourceVersion":"352","creationTimestamp":"2024-01-16T02:57:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-405494","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-405494","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_57_12_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:57:07Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I0116 02:57:28.846942  487926 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/multinode-405494
	I0116 02:57:28.846992  487926 round_trippers.go:469] Request Headers:
	I0116 02:57:28.847002  487926 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:57:28.847008  487926 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:57:28.849932  487926 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:57:28.849951  487926 round_trippers.go:577] Response Headers:
	I0116 02:57:28.849958  487926 round_trippers.go:580]     Audit-Id: d74786ac-aa19-4914-9354-2e74c70e642a
	I0116 02:57:28.849964  487926 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:57:28.849968  487926 round_trippers.go:580]     Content-Type: application/json
	I0116 02:57:28.849973  487926 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 02:57:28.849978  487926 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 02:57:28.849983  487926 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:57:28 GMT
	I0116 02:57:28.850652  487926 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-405494","uid":"396eb6bf-1dc3-46c5-8016-dbf8af754fa2","resourceVersion":"352","creationTimestamp":"2024-01-16T02:57:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-405494","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-405494","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_57_12_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:57:07Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I0116 02:57:28.851032  487926 node_ready.go:58] node "multinode-405494" has status "Ready":"False"
	I0116 02:57:29.347421  487926 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/multinode-405494
	I0116 02:57:29.347445  487926 round_trippers.go:469] Request Headers:
	I0116 02:57:29.347453  487926 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:57:29.347460  487926 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:57:29.350524  487926 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 02:57:29.350549  487926 round_trippers.go:577] Response Headers:
	I0116 02:57:29.350557  487926 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:57:29.350563  487926 round_trippers.go:580]     Content-Type: application/json
	I0116 02:57:29.350568  487926 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 02:57:29.350573  487926 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 02:57:29.350578  487926 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:57:29 GMT
	I0116 02:57:29.350585  487926 round_trippers.go:580]     Audit-Id: f9b61bae-ec6d-4910-b870-36bd253547b0
	I0116 02:57:29.350992  487926 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-405494","uid":"396eb6bf-1dc3-46c5-8016-dbf8af754fa2","resourceVersion":"417","creationTimestamp":"2024-01-16T02:57:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-405494","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-405494","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_57_12_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:57:07Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I0116 02:57:29.351425  487926 node_ready.go:49] node "multinode-405494" has status "Ready":"True"
	I0116 02:57:29.351448  487926 node_ready.go:38] duration metric: took 4.505095629s waiting for node "multinode-405494" to be "Ready" ...
	I0116 02:57:29.351460  487926 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 02:57:29.351563  487926 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods
	I0116 02:57:29.351576  487926 round_trippers.go:469] Request Headers:
	I0116 02:57:29.351584  487926 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:57:29.351596  487926 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:57:29.355946  487926 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0116 02:57:29.355969  487926 round_trippers.go:577] Response Headers:
	I0116 02:57:29.355980  487926 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:57:29 GMT
	I0116 02:57:29.355988  487926 round_trippers.go:580]     Audit-Id: 95aa7f70-7093-4c42-9bfe-3f09c41399ec
	I0116 02:57:29.355995  487926 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:57:29.356007  487926 round_trippers.go:580]     Content-Type: application/json
	I0116 02:57:29.356015  487926 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 02:57:29.356048  487926 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 02:57:29.357691  487926 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"423"},"items":[{"metadata":{"name":"coredns-5dd5756b68-vwqvk","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"096151e2-c59c-4dcf-bd29-2029901902c9","resourceVersion":"423","creationTimestamp":"2024-01-16T02:57:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c3967b3a-de7c-4ccd-af3a-dd2a9e8b71f8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:57:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c3967b3a-de7c-4ccd-af3a-dd2a9e8b71f8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 53878 chars]
	I0116 02:57:29.360794  487926 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-vwqvk" in "kube-system" namespace to be "Ready" ...
	I0116 02:57:29.360870  487926 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-vwqvk
	I0116 02:57:29.360878  487926 round_trippers.go:469] Request Headers:
	I0116 02:57:29.360885  487926 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:57:29.360891  487926 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:57:29.362853  487926 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0116 02:57:29.362874  487926 round_trippers.go:577] Response Headers:
	I0116 02:57:29.362884  487926 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:57:29.362890  487926 round_trippers.go:580]     Content-Type: application/json
	I0116 02:57:29.362898  487926 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 02:57:29.362903  487926 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 02:57:29.362912  487926 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:57:29 GMT
	I0116 02:57:29.362922  487926 round_trippers.go:580]     Audit-Id: 0b17577c-a924-4d5e-9923-3f751aff206b
	I0116 02:57:29.363092  487926 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-vwqvk","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"096151e2-c59c-4dcf-bd29-2029901902c9","resourceVersion":"423","creationTimestamp":"2024-01-16T02:57:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c3967b3a-de7c-4ccd-af3a-dd2a9e8b71f8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:57:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c3967b3a-de7c-4ccd-af3a-dd2a9e8b71f8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
	I0116 02:57:29.363586  487926 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/multinode-405494
	I0116 02:57:29.363602  487926 round_trippers.go:469] Request Headers:
	I0116 02:57:29.363609  487926 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:57:29.363615  487926 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:57:29.365749  487926 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:57:29.365763  487926 round_trippers.go:577] Response Headers:
	I0116 02:57:29.365769  487926 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 02:57:29.365779  487926 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 02:57:29.365784  487926 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:57:29 GMT
	I0116 02:57:29.365789  487926 round_trippers.go:580]     Audit-Id: 3584ff9b-60b6-4173-b3a8-55f8d9e72cfb
	I0116 02:57:29.365794  487926 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:57:29.365799  487926 round_trippers.go:580]     Content-Type: application/json
	I0116 02:57:29.365969  487926 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-405494","uid":"396eb6bf-1dc3-46c5-8016-dbf8af754fa2","resourceVersion":"417","creationTimestamp":"2024-01-16T02:57:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-405494","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-405494","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_57_12_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:57:07Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I0116 02:57:29.861797  487926 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-vwqvk
	I0116 02:57:29.861828  487926 round_trippers.go:469] Request Headers:
	I0116 02:57:29.861838  487926 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:57:29.861851  487926 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:57:29.864651  487926 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:57:29.864683  487926 round_trippers.go:577] Response Headers:
	I0116 02:57:29.864694  487926 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 02:57:29.864701  487926 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:57:29 GMT
	I0116 02:57:29.864708  487926 round_trippers.go:580]     Audit-Id: 3811e2d4-c3f8-4ac2-8176-202e11223477
	I0116 02:57:29.864716  487926 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:57:29.864723  487926 round_trippers.go:580]     Content-Type: application/json
	I0116 02:57:29.864731  487926 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 02:57:29.864858  487926 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-vwqvk","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"096151e2-c59c-4dcf-bd29-2029901902c9","resourceVersion":"423","creationTimestamp":"2024-01-16T02:57:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c3967b3a-de7c-4ccd-af3a-dd2a9e8b71f8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:57:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c3967b3a-de7c-4ccd-af3a-dd2a9e8b71f8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
	I0116 02:57:29.865312  487926 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/multinode-405494
	I0116 02:57:29.865325  487926 round_trippers.go:469] Request Headers:
	I0116 02:57:29.865332  487926 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:57:29.865338  487926 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:57:29.868791  487926 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 02:57:29.868817  487926 round_trippers.go:577] Response Headers:
	I0116 02:57:29.868825  487926 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:57:29 GMT
	I0116 02:57:29.868833  487926 round_trippers.go:580]     Audit-Id: 0f25b050-1c2e-405c-a5bd-ab93aa30d956
	I0116 02:57:29.868841  487926 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:57:29.868849  487926 round_trippers.go:580]     Content-Type: application/json
	I0116 02:57:29.868857  487926 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 02:57:29.868865  487926 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 02:57:29.869019  487926 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-405494","uid":"396eb6bf-1dc3-46c5-8016-dbf8af754fa2","resourceVersion":"417","creationTimestamp":"2024-01-16T02:57:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-405494","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-405494","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_57_12_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:57:07Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I0116 02:57:30.361366  487926 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-vwqvk
	I0116 02:57:30.361399  487926 round_trippers.go:469] Request Headers:
	I0116 02:57:30.361412  487926 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:57:30.361421  487926 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:57:30.364765  487926 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 02:57:30.364789  487926 round_trippers.go:577] Response Headers:
	I0116 02:57:30.364797  487926 round_trippers.go:580]     Content-Type: application/json
	I0116 02:57:30.364803  487926 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 02:57:30.364808  487926 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 02:57:30.364813  487926 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:57:30 GMT
	I0116 02:57:30.364820  487926 round_trippers.go:580]     Audit-Id: 93175e5c-df05-45db-9a74-343adb1f993b
	I0116 02:57:30.364828  487926 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:57:30.365057  487926 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-vwqvk","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"096151e2-c59c-4dcf-bd29-2029901902c9","resourceVersion":"423","creationTimestamp":"2024-01-16T02:57:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c3967b3a-de7c-4ccd-af3a-dd2a9e8b71f8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:57:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c3967b3a-de7c-4ccd-af3a-dd2a9e8b71f8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
	I0116 02:57:30.365557  487926 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/multinode-405494
	I0116 02:57:30.365573  487926 round_trippers.go:469] Request Headers:
	I0116 02:57:30.365581  487926 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:57:30.365587  487926 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:57:30.368309  487926 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:57:30.368327  487926 round_trippers.go:577] Response Headers:
	I0116 02:57:30.368333  487926 round_trippers.go:580]     Audit-Id: 7717a132-04f1-4574-b3d7-7c403a956c7a
	I0116 02:57:30.368340  487926 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:57:30.368345  487926 round_trippers.go:580]     Content-Type: application/json
	I0116 02:57:30.368350  487926 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 02:57:30.368355  487926 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 02:57:30.368360  487926 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:57:30 GMT
	I0116 02:57:30.368621  487926 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-405494","uid":"396eb6bf-1dc3-46c5-8016-dbf8af754fa2","resourceVersion":"417","creationTimestamp":"2024-01-16T02:57:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-405494","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-405494","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_57_12_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:57:07Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I0116 02:57:30.861250  487926 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-vwqvk
	I0116 02:57:30.861277  487926 round_trippers.go:469] Request Headers:
	I0116 02:57:30.861286  487926 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:57:30.861292  487926 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:57:30.864596  487926 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 02:57:30.864626  487926 round_trippers.go:577] Response Headers:
	I0116 02:57:30.864637  487926 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 02:57:30.864681  487926 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 02:57:30.864709  487926 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:57:30 GMT
	I0116 02:57:30.864727  487926 round_trippers.go:580]     Audit-Id: a64de787-606a-4442-93c0-99de8b534c9e
	I0116 02:57:30.864733  487926 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:57:30.864739  487926 round_trippers.go:580]     Content-Type: application/json
	I0116 02:57:30.864888  487926 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-vwqvk","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"096151e2-c59c-4dcf-bd29-2029901902c9","resourceVersion":"423","creationTimestamp":"2024-01-16T02:57:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c3967b3a-de7c-4ccd-af3a-dd2a9e8b71f8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:57:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c3967b3a-de7c-4ccd-af3a-dd2a9e8b71f8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
	I0116 02:57:30.865506  487926 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/multinode-405494
	I0116 02:57:30.865526  487926 round_trippers.go:469] Request Headers:
	I0116 02:57:30.865533  487926 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:57:30.865540  487926 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:57:30.868313  487926 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:57:30.868332  487926 round_trippers.go:577] Response Headers:
	I0116 02:57:30.868341  487926 round_trippers.go:580]     Audit-Id: 721fcb78-d7ed-4485-a03b-57ae3cd3ebad
	I0116 02:57:30.868350  487926 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:57:30.868357  487926 round_trippers.go:580]     Content-Type: application/json
	I0116 02:57:30.868365  487926 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 02:57:30.868374  487926 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 02:57:30.868383  487926 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:57:30 GMT
	I0116 02:57:30.868711  487926 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-405494","uid":"396eb6bf-1dc3-46c5-8016-dbf8af754fa2","resourceVersion":"417","creationTimestamp":"2024-01-16T02:57:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-405494","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-405494","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_57_12_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:57:07Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I0116 02:57:31.361230  487926 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-vwqvk
	I0116 02:57:31.361253  487926 round_trippers.go:469] Request Headers:
	I0116 02:57:31.361262  487926 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:57:31.361272  487926 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:57:31.364352  487926 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 02:57:31.364375  487926 round_trippers.go:577] Response Headers:
	I0116 02:57:31.364382  487926 round_trippers.go:580]     Audit-Id: 36914d49-0526-45dd-ac73-c9af10086f60
	I0116 02:57:31.364388  487926 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:57:31.364393  487926 round_trippers.go:580]     Content-Type: application/json
	I0116 02:57:31.364398  487926 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 02:57:31.364403  487926 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 02:57:31.364408  487926 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:57:31 GMT
	I0116 02:57:31.364595  487926 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-vwqvk","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"096151e2-c59c-4dcf-bd29-2029901902c9","resourceVersion":"434","creationTimestamp":"2024-01-16T02:57:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c3967b3a-de7c-4ccd-af3a-dd2a9e8b71f8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:57:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c3967b3a-de7c-4ccd-af3a-dd2a9e8b71f8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6264 chars]
	I0116 02:57:31.365093  487926 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/multinode-405494
	I0116 02:57:31.365109  487926 round_trippers.go:469] Request Headers:
	I0116 02:57:31.365117  487926 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:57:31.365123  487926 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:57:31.367783  487926 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:57:31.367803  487926 round_trippers.go:577] Response Headers:
	I0116 02:57:31.367812  487926 round_trippers.go:580]     Content-Type: application/json
	I0116 02:57:31.367820  487926 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 02:57:31.367828  487926 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 02:57:31.367835  487926 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:57:31 GMT
	I0116 02:57:31.367842  487926 round_trippers.go:580]     Audit-Id: 50742306-195e-4fcd-b53e-3d6fe20740f9
	I0116 02:57:31.367849  487926 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:57:31.368381  487926 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-405494","uid":"396eb6bf-1dc3-46c5-8016-dbf8af754fa2","resourceVersion":"417","creationTimestamp":"2024-01-16T02:57:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-405494","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-405494","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_57_12_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:57:07Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I0116 02:57:31.368675  487926 pod_ready.go:92] pod "coredns-5dd5756b68-vwqvk" in "kube-system" namespace has status "Ready":"True"
	I0116 02:57:31.368690  487926 pod_ready.go:81] duration metric: took 2.007874437s waiting for pod "coredns-5dd5756b68-vwqvk" in "kube-system" namespace to be "Ready" ...
	I0116 02:57:31.368699  487926 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-405494" in "kube-system" namespace to be "Ready" ...
	I0116 02:57:31.368754  487926 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-405494
	I0116 02:57:31.368762  487926 round_trippers.go:469] Request Headers:
	I0116 02:57:31.368768  487926 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:57:31.368774  487926 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:57:31.372345  487926 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 02:57:31.372359  487926 round_trippers.go:577] Response Headers:
	I0116 02:57:31.372369  487926 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:57:31 GMT
	I0116 02:57:31.372374  487926 round_trippers.go:580]     Audit-Id: 6f2d7c99-5a97-4ced-b433-bc5d7f197a4f
	I0116 02:57:31.372379  487926 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:57:31.372384  487926 round_trippers.go:580]     Content-Type: application/json
	I0116 02:57:31.372389  487926 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 02:57:31.372394  487926 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 02:57:31.372643  487926 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-405494","namespace":"kube-system","uid":"3f839da7-c0c0-4546-8848-1557cbf50722","resourceVersion":"311","creationTimestamp":"2024-01-16T02:57:12Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.70:2379","kubernetes.io/config.hash":"3a9e67d0e87fe64d9531234ab850034d","kubernetes.io/config.mirror":"3a9e67d0e87fe64d9531234ab850034d","kubernetes.io/config.seen":"2024-01-16T02:57:11.711592151Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-405494","uid":"396eb6bf-1dc3-46c5-8016-dbf8af754fa2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:57:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 5843 chars]
	I0116 02:57:31.373029  487926 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/multinode-405494
	I0116 02:57:31.373042  487926 round_trippers.go:469] Request Headers:
	I0116 02:57:31.373048  487926 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:57:31.373053  487926 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:57:31.374924  487926 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0116 02:57:31.374937  487926 round_trippers.go:577] Response Headers:
	I0116 02:57:31.374943  487926 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:57:31.374948  487926 round_trippers.go:580]     Content-Type: application/json
	I0116 02:57:31.374953  487926 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 02:57:31.374958  487926 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 02:57:31.374963  487926 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:57:31 GMT
	I0116 02:57:31.374975  487926 round_trippers.go:580]     Audit-Id: fa5e3b1e-c2ea-4649-9968-4f70b6c14ebc
	I0116 02:57:31.375275  487926 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-405494","uid":"396eb6bf-1dc3-46c5-8016-dbf8af754fa2","resourceVersion":"417","creationTimestamp":"2024-01-16T02:57:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-405494","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-405494","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_57_12_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:57:07Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I0116 02:57:31.375556  487926 pod_ready.go:92] pod "etcd-multinode-405494" in "kube-system" namespace has status "Ready":"True"
	I0116 02:57:31.375571  487926 pod_ready.go:81] duration metric: took 6.866496ms waiting for pod "etcd-multinode-405494" in "kube-system" namespace to be "Ready" ...
	I0116 02:57:31.375581  487926 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-405494" in "kube-system" namespace to be "Ready" ...
	I0116 02:57:31.375642  487926 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-405494
	I0116 02:57:31.375652  487926 round_trippers.go:469] Request Headers:
	I0116 02:57:31.375658  487926 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:57:31.375663  487926 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:57:31.377848  487926 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:57:31.377868  487926 round_trippers.go:577] Response Headers:
	I0116 02:57:31.377877  487926 round_trippers.go:580]     Audit-Id: 43ba6dc9-fa72-4e1a-97c9-970ab40aa4f4
	I0116 02:57:31.377884  487926 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:57:31.377892  487926 round_trippers.go:580]     Content-Type: application/json
	I0116 02:57:31.377900  487926 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 02:57:31.377909  487926 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 02:57:31.377917  487926 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:57:31 GMT
	I0116 02:57:31.378190  487926 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-405494","namespace":"kube-system","uid":"e242d3cf-6cf7-4b47-8d3e-a83e484554a1","resourceVersion":"316","creationTimestamp":"2024-01-16T02:57:09Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.70:8443","kubernetes.io/config.hash":"04bffd1a6d3ee0aae068c41e37830c9b","kubernetes.io/config.mirror":"04bffd1a6d3ee0aae068c41e37830c9b","kubernetes.io/config.seen":"2024-01-16T02:57:02.078602539Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-405494","uid":"396eb6bf-1dc3-46c5-8016-dbf8af754fa2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:57:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7380 chars]
	I0116 02:57:31.378558  487926 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/multinode-405494
	I0116 02:57:31.378571  487926 round_trippers.go:469] Request Headers:
	I0116 02:57:31.378578  487926 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:57:31.378584  487926 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:57:31.380280  487926 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0116 02:57:31.380298  487926 round_trippers.go:577] Response Headers:
	I0116 02:57:31.380306  487926 round_trippers.go:580]     Audit-Id: 4b17ca18-b5b4-4bfb-8265-69f051a3dc3a
	I0116 02:57:31.380315  487926 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:57:31.380323  487926 round_trippers.go:580]     Content-Type: application/json
	I0116 02:57:31.380331  487926 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 02:57:31.380340  487926 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 02:57:31.380348  487926 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:57:31 GMT
	I0116 02:57:31.380620  487926 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-405494","uid":"396eb6bf-1dc3-46c5-8016-dbf8af754fa2","resourceVersion":"417","creationTimestamp":"2024-01-16T02:57:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-405494","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-405494","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_57_12_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:57:07Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I0116 02:57:31.380924  487926 pod_ready.go:92] pod "kube-apiserver-multinode-405494" in "kube-system" namespace has status "Ready":"True"
	I0116 02:57:31.380941  487926 pod_ready.go:81] duration metric: took 5.353718ms waiting for pod "kube-apiserver-multinode-405494" in "kube-system" namespace to be "Ready" ...
	I0116 02:57:31.380949  487926 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-405494" in "kube-system" namespace to be "Ready" ...
	I0116 02:57:31.381007  487926 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-405494
	I0116 02:57:31.381019  487926 round_trippers.go:469] Request Headers:
	I0116 02:57:31.381031  487926 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:57:31.381047  487926 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:57:31.383063  487926 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:57:31.383078  487926 round_trippers.go:577] Response Headers:
	I0116 02:57:31.383086  487926 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:57:31 GMT
	I0116 02:57:31.383094  487926 round_trippers.go:580]     Audit-Id: 225db0fd-9bb9-4b57-9692-c988e6a2675a
	I0116 02:57:31.383101  487926 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:57:31.383109  487926 round_trippers.go:580]     Content-Type: application/json
	I0116 02:57:31.383117  487926 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 02:57:31.383125  487926 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 02:57:31.383347  487926 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-405494","namespace":"kube-system","uid":"0833b412-8909-4660-8e16-19701683358e","resourceVersion":"319","creationTimestamp":"2024-01-16T02:57:12Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"9eb78063d6e219f3cc5940494bdab4b2","kubernetes.io/config.mirror":"9eb78063d6e219f3cc5940494bdab4b2","kubernetes.io/config.seen":"2024-01-16T02:57:11.711589408Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-405494","uid":"396eb6bf-1dc3-46c5-8016-dbf8af754fa2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:57:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6950 chars]
	I0116 02:57:31.383837  487926 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/multinode-405494
	I0116 02:57:31.383855  487926 round_trippers.go:469] Request Headers:
	I0116 02:57:31.383866  487926 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:57:31.383876  487926 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:57:31.385836  487926 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0116 02:57:31.385854  487926 round_trippers.go:577] Response Headers:
	I0116 02:57:31.385866  487926 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 02:57:31.385875  487926 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 02:57:31.385882  487926 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:57:31 GMT
	I0116 02:57:31.385889  487926 round_trippers.go:580]     Audit-Id: 064e989e-3d9d-4182-9996-843ff51c35e0
	I0116 02:57:31.385897  487926 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:57:31.385906  487926 round_trippers.go:580]     Content-Type: application/json
	I0116 02:57:31.386184  487926 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-405494","uid":"396eb6bf-1dc3-46c5-8016-dbf8af754fa2","resourceVersion":"417","creationTimestamp":"2024-01-16T02:57:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-405494","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-405494","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_57_12_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:57:07Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I0116 02:57:31.386480  487926 pod_ready.go:92] pod "kube-controller-manager-multinode-405494" in "kube-system" namespace has status "Ready":"True"
	I0116 02:57:31.386495  487926 pod_ready.go:81] duration metric: took 5.540009ms waiting for pod "kube-controller-manager-multinode-405494" in "kube-system" namespace to be "Ready" ...
	I0116 02:57:31.386503  487926 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-gg8kv" in "kube-system" namespace to be "Ready" ...
	I0116 02:57:31.386567  487926 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gg8kv
	I0116 02:57:31.386578  487926 round_trippers.go:469] Request Headers:
	I0116 02:57:31.386591  487926 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:57:31.386601  487926 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:57:31.388552  487926 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0116 02:57:31.388570  487926 round_trippers.go:577] Response Headers:
	I0116 02:57:31.388579  487926 round_trippers.go:580]     Content-Type: application/json
	I0116 02:57:31.388587  487926 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 02:57:31.388594  487926 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 02:57:31.388603  487926 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:57:31 GMT
	I0116 02:57:31.388613  487926 round_trippers.go:580]     Audit-Id: dc9b897f-bd8e-4e71-9210-4c54e084363e
	I0116 02:57:31.388619  487926 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:57:31.388820  487926 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-gg8kv","generateName":"kube-proxy-","namespace":"kube-system","uid":"32841b88-1b06-46ed-b4ce-f73301ec0a85","resourceVersion":"407","creationTimestamp":"2024-01-16T02:57:23Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"fdad84fd-2af6-4360-a899-0f70f257935c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:57:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fdad84fd-2af6-4360-a899-0f70f257935c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5514 chars]
	I0116 02:57:31.389180  487926 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/multinode-405494
	I0116 02:57:31.389195  487926 round_trippers.go:469] Request Headers:
	I0116 02:57:31.389202  487926 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:57:31.389207  487926 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:57:31.390826  487926 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0116 02:57:31.390849  487926 round_trippers.go:577] Response Headers:
	I0116 02:57:31.390858  487926 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 02:57:31.390866  487926 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:57:31 GMT
	I0116 02:57:31.390873  487926 round_trippers.go:580]     Audit-Id: 611ae1db-8d03-4ae9-ac42-140417e4713a
	I0116 02:57:31.390881  487926 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:57:31.390889  487926 round_trippers.go:580]     Content-Type: application/json
	I0116 02:57:31.390895  487926 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 02:57:31.391233  487926 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-405494","uid":"396eb6bf-1dc3-46c5-8016-dbf8af754fa2","resourceVersion":"417","creationTimestamp":"2024-01-16T02:57:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-405494","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-405494","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_57_12_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:57:07Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I0116 02:57:31.391509  487926 pod_ready.go:92] pod "kube-proxy-gg8kv" in "kube-system" namespace has status "Ready":"True"
	I0116 02:57:31.391524  487926 pod_ready.go:81] duration metric: took 5.015403ms waiting for pod "kube-proxy-gg8kv" in "kube-system" namespace to be "Ready" ...
	I0116 02:57:31.391532  487926 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-405494" in "kube-system" namespace to be "Ready" ...
	I0116 02:57:31.561977  487926 request.go:629] Waited for 170.35138ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-405494
	I0116 02:57:31.562057  487926 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-405494
	I0116 02:57:31.562063  487926 round_trippers.go:469] Request Headers:
	I0116 02:57:31.562071  487926 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:57:31.562077  487926 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:57:31.565470  487926 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 02:57:31.565503  487926 round_trippers.go:577] Response Headers:
	I0116 02:57:31.565516  487926 round_trippers.go:580]     Audit-Id: 150a6e69-a14e-4f2a-925b-56c727ec7279
	I0116 02:57:31.565525  487926 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:57:31.565531  487926 round_trippers.go:580]     Content-Type: application/json
	I0116 02:57:31.565536  487926 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 02:57:31.565541  487926 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 02:57:31.565547  487926 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:57:31 GMT
	I0116 02:57:31.565642  487926 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-405494","namespace":"kube-system","uid":"70c980cb-4ff9-45f5-960f-d8afa355229c","resourceVersion":"313","creationTimestamp":"2024-01-16T02:57:11Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"65069d20830c0b10a3d28746871e48c2","kubernetes.io/config.mirror":"65069d20830c0b10a3d28746871e48c2","kubernetes.io/config.seen":"2024-01-16T02:57:02.078604553Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-405494","uid":"396eb6bf-1dc3-46c5-8016-dbf8af754fa2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:57:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4680 chars]
	I0116 02:57:31.761380  487926 request.go:629] Waited for 195.33157ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.70:8443/api/v1/nodes/multinode-405494
	I0116 02:57:31.761481  487926 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/multinode-405494
	I0116 02:57:31.761487  487926 round_trippers.go:469] Request Headers:
	I0116 02:57:31.761495  487926 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:57:31.761501  487926 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:57:31.764517  487926 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:57:31.764546  487926 round_trippers.go:577] Response Headers:
	I0116 02:57:31.764558  487926 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:57:31.764567  487926 round_trippers.go:580]     Content-Type: application/json
	I0116 02:57:31.764576  487926 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 02:57:31.764586  487926 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 02:57:31.764594  487926 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:57:31 GMT
	I0116 02:57:31.764601  487926 round_trippers.go:580]     Audit-Id: 0183b52a-72c1-45e3-a052-a383f267d933
	I0116 02:57:31.764869  487926 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-405494","uid":"396eb6bf-1dc3-46c5-8016-dbf8af754fa2","resourceVersion":"417","creationTimestamp":"2024-01-16T02:57:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-405494","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-405494","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_57_12_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:57:07Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I0116 02:57:31.765206  487926 pod_ready.go:92] pod "kube-scheduler-multinode-405494" in "kube-system" namespace has status "Ready":"True"
	I0116 02:57:31.765224  487926 pod_ready.go:81] duration metric: took 373.686917ms waiting for pod "kube-scheduler-multinode-405494" in "kube-system" namespace to be "Ready" ...
	I0116 02:57:31.765236  487926 pod_ready.go:38] duration metric: took 2.413745265s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
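The readiness poll above can be reproduced outside the test binary with kubectl wait; a minimal equivalent, assuming the kubeconfig context carries the profile name multinode-405494, is:

    kubectl --context multinode-405494 -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=6m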
	I0116 02:57:31.765251  487926 api_server.go:52] waiting for apiserver process to appear ...
	I0116 02:57:31.765308  487926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 02:57:31.779991  487926 command_runner.go:130] > 1062
	I0116 02:57:31.780333  487926 api_server.go:72] duration metric: took 7.662247568s to wait for apiserver process to appear ...
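The process check boils down to a pgrep run inside the control-plane VM; the same probe can be issued by hand over minikube ssh (profile name assumed to be multinode-405494):

    minikube -p multinode-405494 ssh 'sudo pgrep -xnf kube-apiserver.*minikube.*'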
	I0116 02:57:31.780354  487926 api_server.go:88] waiting for apiserver healthz status ...
	I0116 02:57:31.780379  487926 api_server.go:253] Checking apiserver healthz at https://192.168.39.70:8443/healthz ...
	I0116 02:57:31.788239  487926 api_server.go:279] https://192.168.39.70:8443/healthz returned 200:
	ok
	I0116 02:57:31.788329  487926 round_trippers.go:463] GET https://192.168.39.70:8443/version
	I0116 02:57:31.788334  487926 round_trippers.go:469] Request Headers:
	I0116 02:57:31.788342  487926 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:57:31.788349  487926 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:57:31.790946  487926 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:57:31.790976  487926 round_trippers.go:577] Response Headers:
	I0116 02:57:31.790987  487926 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 02:57:31.790996  487926 round_trippers.go:580]     Content-Length: 264
	I0116 02:57:31.791004  487926 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:57:31 GMT
	I0116 02:57:31.791013  487926 round_trippers.go:580]     Audit-Id: b6b52e50-d68b-403f-892a-83acf3b2a522
	I0116 02:57:31.791021  487926 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:57:31.791027  487926 round_trippers.go:580]     Content-Type: application/json
	I0116 02:57:31.791032  487926 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 02:57:31.791055  487926 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0116 02:57:31.791180  487926 api_server.go:141] control plane version: v1.28.4
	I0116 02:57:31.791205  487926 api_server.go:131] duration metric: took 10.843467ms to wait for apiserver health ...
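Both endpoints polled here are normally reachable without a client certificate (the system:public-info-viewer role covers /healthz and /version, assuming anonymous auth has not been disabled), so the same health and version probes can be issued directly against the API server:

    curl -k https://192.168.39.70:8443/healthz
    curl -k https://192.168.39.70:8443/version

The -k flag skips verification of the minikube-generated CA; point curl at the cluster's ca.crt instead to keep TLS checks.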
	I0116 02:57:31.791215  487926 system_pods.go:43] waiting for kube-system pods to appear ...
	I0116 02:57:31.961362  487926 request.go:629] Waited for 170.046078ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods
	I0116 02:57:31.961461  487926 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods
	I0116 02:57:31.961469  487926 round_trippers.go:469] Request Headers:
	I0116 02:57:31.961497  487926 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:57:31.961507  487926 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:57:31.973455  487926 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0116 02:57:31.973483  487926 round_trippers.go:577] Response Headers:
	I0116 02:57:31.973491  487926 round_trippers.go:580]     Content-Type: application/json
	I0116 02:57:31.973497  487926 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 02:57:31.973502  487926 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 02:57:31.973507  487926 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:57:31 GMT
	I0116 02:57:31.973512  487926 round_trippers.go:580]     Audit-Id: a6ac8ab5-1a90-49f1-98d8-4968c3990711
	I0116 02:57:31.973517  487926 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:57:31.974148  487926 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"438"},"items":[{"metadata":{"name":"coredns-5dd5756b68-vwqvk","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"096151e2-c59c-4dcf-bd29-2029901902c9","resourceVersion":"434","creationTimestamp":"2024-01-16T02:57:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c3967b3a-de7c-4ccd-af3a-dd2a9e8b71f8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:57:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c3967b3a-de7c-4ccd-af3a-dd2a9e8b71f8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 53956 chars]
	I0116 02:57:31.976406  487926 system_pods.go:59] 8 kube-system pods found
	I0116 02:57:31.976442  487926 system_pods.go:61] "coredns-5dd5756b68-vwqvk" [096151e2-c59c-4dcf-bd29-2029901902c9] Running
	I0116 02:57:31.976450  487926 system_pods.go:61] "etcd-multinode-405494" [3f839da7-c0c0-4546-8848-1557cbf50722] Running
	I0116 02:57:31.976456  487926 system_pods.go:61] "kindnet-8t86n" [4d421823-26dd-467d-94d4-28387c8e3793] Running
	I0116 02:57:31.976463  487926 system_pods.go:61] "kube-apiserver-multinode-405494" [e242d3cf-6cf7-4b47-8d3e-a83e484554a1] Running
	I0116 02:57:31.976470  487926 system_pods.go:61] "kube-controller-manager-multinode-405494" [0833b412-8909-4660-8e16-19701683358e] Running
	I0116 02:57:31.976476  487926 system_pods.go:61] "kube-proxy-gg8kv" [32841b88-1b06-46ed-b4ce-f73301ec0a85] Running
	I0116 02:57:31.976483  487926 system_pods.go:61] "kube-scheduler-multinode-405494" [70c980cb-4ff9-45f5-960f-d8afa355229c] Running
	I0116 02:57:31.976492  487926 system_pods.go:61] "storage-provisioner" [c6f12cfa-46b3-4840-a7e2-258c063a19c2] Running
	I0116 02:57:31.976505  487926 system_pods.go:74] duration metric: took 185.283071ms to wait for pod list to return data ...
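The pod inventory above corresponds to a plain listing of the kube-system namespace; assuming the context name matches the profile, the same view is available with:

    kubectl --context multinode-405494 get pods -n kube-system -o wide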
	I0116 02:57:31.976515  487926 default_sa.go:34] waiting for default service account to be created ...
	I0116 02:57:32.161993  487926 request.go:629] Waited for 185.378954ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.70:8443/api/v1/namespaces/default/serviceaccounts
	I0116 02:57:32.162108  487926 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/default/serviceaccounts
	I0116 02:57:32.162119  487926 round_trippers.go:469] Request Headers:
	I0116 02:57:32.162129  487926 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:57:32.162136  487926 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:57:32.165234  487926 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 02:57:32.165258  487926 round_trippers.go:577] Response Headers:
	I0116 02:57:32.165266  487926 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 02:57:32.165272  487926 round_trippers.go:580]     Content-Length: 261
	I0116 02:57:32.165277  487926 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:57:32 GMT
	I0116 02:57:32.165282  487926 round_trippers.go:580]     Audit-Id: c853fced-2813-4269-9158-dea6e67b26a1
	I0116 02:57:32.165287  487926 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:57:32.165292  487926 round_trippers.go:580]     Content-Type: application/json
	I0116 02:57:32.165297  487926 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 02:57:32.165333  487926 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"440"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"0b22b347-fa63-4799-be1d-4dd899f85a07","resourceVersion":"341","creationTimestamp":"2024-01-16T02:57:23Z"}}]}
	I0116 02:57:32.165587  487926 default_sa.go:45] found service account: "default"
	I0116 02:57:32.165611  487926 default_sa.go:55] duration metric: took 189.08795ms for default service account to be created ...
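Likewise, the service-account check maps to a one-line lookup (context name assumed):

    kubectl --context multinode-405494 -n default get serviceaccount default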
	I0116 02:57:32.165621  487926 system_pods.go:116] waiting for k8s-apps to be running ...
	I0116 02:57:32.362163  487926 request.go:629] Waited for 196.439752ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods
	I0116 02:57:32.362246  487926 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods
	I0116 02:57:32.362252  487926 round_trippers.go:469] Request Headers:
	I0116 02:57:32.362260  487926 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:57:32.362276  487926 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:57:32.366468  487926 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0116 02:57:32.366499  487926 round_trippers.go:577] Response Headers:
	I0116 02:57:32.366515  487926 round_trippers.go:580]     Audit-Id: 110cafee-5aca-4f42-ac66-d00117599019
	I0116 02:57:32.366524  487926 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:57:32.366532  487926 round_trippers.go:580]     Content-Type: application/json
	I0116 02:57:32.366540  487926 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 02:57:32.366548  487926 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 02:57:32.366557  487926 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:57:32 GMT
	I0116 02:57:32.367156  487926 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"440"},"items":[{"metadata":{"name":"coredns-5dd5756b68-vwqvk","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"096151e2-c59c-4dcf-bd29-2029901902c9","resourceVersion":"434","creationTimestamp":"2024-01-16T02:57:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c3967b3a-de7c-4ccd-af3a-dd2a9e8b71f8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:57:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c3967b3a-de7c-4ccd-af3a-dd2a9e8b71f8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 53956 chars]
	I0116 02:57:32.368909  487926 system_pods.go:86] 8 kube-system pods found
	I0116 02:57:32.368933  487926 system_pods.go:89] "coredns-5dd5756b68-vwqvk" [096151e2-c59c-4dcf-bd29-2029901902c9] Running
	I0116 02:57:32.368938  487926 system_pods.go:89] "etcd-multinode-405494" [3f839da7-c0c0-4546-8848-1557cbf50722] Running
	I0116 02:57:32.368942  487926 system_pods.go:89] "kindnet-8t86n" [4d421823-26dd-467d-94d4-28387c8e3793] Running
	I0116 02:57:32.368947  487926 system_pods.go:89] "kube-apiserver-multinode-405494" [e242d3cf-6cf7-4b47-8d3e-a83e484554a1] Running
	I0116 02:57:32.368952  487926 system_pods.go:89] "kube-controller-manager-multinode-405494" [0833b412-8909-4660-8e16-19701683358e] Running
	I0116 02:57:32.368956  487926 system_pods.go:89] "kube-proxy-gg8kv" [32841b88-1b06-46ed-b4ce-f73301ec0a85] Running
	I0116 02:57:32.368960  487926 system_pods.go:89] "kube-scheduler-multinode-405494" [70c980cb-4ff9-45f5-960f-d8afa355229c] Running
	I0116 02:57:32.368967  487926 system_pods.go:89] "storage-provisioner" [c6f12cfa-46b3-4840-a7e2-258c063a19c2] Running
	I0116 02:57:32.368973  487926 system_pods.go:126] duration metric: took 203.345834ms to wait for k8s-apps to be running ...
	I0116 02:57:32.368980  487926 system_svc.go:44] waiting for kubelet service to be running ....
	I0116 02:57:32.369029  487926 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 02:57:32.385062  487926 system_svc.go:56] duration metric: took 16.069138ms WaitForService to wait for kubelet.
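The kubelet check is a systemd query executed over SSH; it can be repeated manually with (profile name assumed):

    minikube -p multinode-405494 ssh 'sudo systemctl is-active kubelet'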
	I0116 02:57:32.385099  487926 kubeadm.go:581] duration metric: took 8.267018176s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0116 02:57:32.385120  487926 node_conditions.go:102] verifying NodePressure condition ...
	I0116 02:57:32.561615  487926 request.go:629] Waited for 176.388651ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.70:8443/api/v1/nodes
	I0116 02:57:32.561693  487926 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes
	I0116 02:57:32.561698  487926 round_trippers.go:469] Request Headers:
	I0116 02:57:32.561706  487926 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:57:32.561713  487926 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:57:32.564630  487926 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:57:32.564660  487926 round_trippers.go:577] Response Headers:
	I0116 02:57:32.564671  487926 round_trippers.go:580]     Content-Type: application/json
	I0116 02:57:32.564679  487926 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 02:57:32.564686  487926 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 02:57:32.564695  487926 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:57:32 GMT
	I0116 02:57:32.564702  487926 round_trippers.go:580]     Audit-Id: a34332b3-f5c8-437c-9d48-4ebb94bf1c09
	I0116 02:57:32.564708  487926 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:57:32.565214  487926 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"440"},"items":[{"metadata":{"name":"multinode-405494","uid":"396eb6bf-1dc3-46c5-8016-dbf8af754fa2","resourceVersion":"417","creationTimestamp":"2024-01-16T02:57:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-405494","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-405494","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_57_12_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 5951 chars]
	I0116 02:57:32.565622  487926 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 02:57:32.565650  487926 node_conditions.go:123] node cpu capacity is 2
	I0116 02:57:32.565665  487926 node_conditions.go:105] duration metric: took 180.539167ms to run NodePressure ...
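The ephemeral-storage and CPU figures logged here come straight from the node's capacity field; a quick way to confirm them (context name assumed) is:

    kubectl --context multinode-405494 get node multinode-405494 -o jsonpath='{.status.capacity}'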
	I0116 02:57:32.565680  487926 start.go:228] waiting for startup goroutines ...
	I0116 02:57:32.565689  487926 start.go:233] waiting for cluster config update ...
	I0116 02:57:32.565705  487926 start.go:242] writing updated cluster config ...
	I0116 02:57:32.567991  487926 out.go:177] 
	I0116 02:57:32.569653  487926 config.go:182] Loaded profile config "multinode-405494": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 02:57:32.569758  487926 profile.go:148] Saving config to /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/multinode-405494/config.json ...
	I0116 02:57:32.571664  487926 out.go:177] * Starting worker node multinode-405494-m02 in cluster multinode-405494
	I0116 02:57:32.573322  487926 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0116 02:57:32.573350  487926 cache.go:56] Caching tarball of preloaded images
	I0116 02:57:32.573464  487926 preload.go:174] Found /home/jenkins/minikube-integration/17965-468241/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0116 02:57:32.573476  487926 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0116 02:57:32.573584  487926 profile.go:148] Saving config to /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/multinode-405494/config.json ...
	I0116 02:57:32.573767  487926 start.go:365] acquiring machines lock for multinode-405494-m02: {Name:mk901e5fae8c1a578d989b520053c709bdbdcf06 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0116 02:57:32.573828  487926 start.go:369] acquired machines lock for "multinode-405494-m02" in 35.318µs
	I0116 02:57:32.573852  487926 start.go:93] Provisioning new machine with config: &{Name:multinode-405494 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCon
fig:{KubernetesVersion:v1.28.4 ClusterName:multinode-405494 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.70 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:t
rue ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name:m02 IP: Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0116 02:57:32.573942  487926 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0116 02:57:32.575777  487926 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0116 02:57:32.575889  487926 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 02:57:32.575925  487926 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:57:32.590535  487926 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34593
	I0116 02:57:32.591045  487926 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:57:32.591549  487926 main.go:141] libmachine: Using API Version  1
	I0116 02:57:32.591570  487926 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:57:32.591900  487926 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:57:32.592099  487926 main.go:141] libmachine: (multinode-405494-m02) Calling .GetMachineName
	I0116 02:57:32.592258  487926 main.go:141] libmachine: (multinode-405494-m02) Calling .DriverName
	I0116 02:57:32.592396  487926 start.go:159] libmachine.API.Create for "multinode-405494" (driver="kvm2")
	I0116 02:57:32.592432  487926 client.go:168] LocalClient.Create starting
	I0116 02:57:32.592466  487926 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem
	I0116 02:57:32.592500  487926 main.go:141] libmachine: Decoding PEM data...
	I0116 02:57:32.592517  487926 main.go:141] libmachine: Parsing certificate...
	I0116 02:57:32.592584  487926 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17965-468241/.minikube/certs/cert.pem
	I0116 02:57:32.592603  487926 main.go:141] libmachine: Decoding PEM data...
	I0116 02:57:32.592615  487926 main.go:141] libmachine: Parsing certificate...
	I0116 02:57:32.592633  487926 main.go:141] libmachine: Running pre-create checks...
	I0116 02:57:32.592642  487926 main.go:141] libmachine: (multinode-405494-m02) Calling .PreCreateCheck
	I0116 02:57:32.592813  487926 main.go:141] libmachine: (multinode-405494-m02) Calling .GetConfigRaw
	I0116 02:57:32.593174  487926 main.go:141] libmachine: Creating machine...
	I0116 02:57:32.593190  487926 main.go:141] libmachine: (multinode-405494-m02) Calling .Create
	I0116 02:57:32.593334  487926 main.go:141] libmachine: (multinode-405494-m02) Creating KVM machine...
	I0116 02:57:32.594711  487926 main.go:141] libmachine: (multinode-405494-m02) DBG | found existing default KVM network
	I0116 02:57:32.594926  487926 main.go:141] libmachine: (multinode-405494-m02) DBG | found existing private KVM network mk-multinode-405494
	I0116 02:57:32.595053  487926 main.go:141] libmachine: (multinode-405494-m02) Setting up store path in /home/jenkins/minikube-integration/17965-468241/.minikube/machines/multinode-405494-m02 ...
	I0116 02:57:32.595104  487926 main.go:141] libmachine: (multinode-405494-m02) Building disk image from file:///home/jenkins/minikube-integration/17965-468241/.minikube/cache/iso/amd64/minikube-v1.32.1-1703784139-17866-amd64.iso
	I0116 02:57:32.595184  487926 main.go:141] libmachine: (multinode-405494-m02) DBG | I0116 02:57:32.595060  488257 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17965-468241/.minikube
	I0116 02:57:32.595262  487926 main.go:141] libmachine: (multinode-405494-m02) Downloading /home/jenkins/minikube-integration/17965-468241/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17965-468241/.minikube/cache/iso/amd64/minikube-v1.32.1-1703784139-17866-amd64.iso...
	I0116 02:57:32.824691  487926 main.go:141] libmachine: (multinode-405494-m02) DBG | I0116 02:57:32.824518  488257 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17965-468241/.minikube/machines/multinode-405494-m02/id_rsa...
	I0116 02:57:32.975596  487926 main.go:141] libmachine: (multinode-405494-m02) DBG | I0116 02:57:32.975437  488257 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17965-468241/.minikube/machines/multinode-405494-m02/multinode-405494-m02.rawdisk...
	I0116 02:57:32.975634  487926 main.go:141] libmachine: (multinode-405494-m02) DBG | Writing magic tar header
	I0116 02:57:32.975653  487926 main.go:141] libmachine: (multinode-405494-m02) DBG | Writing SSH key tar header
	I0116 02:57:32.975666  487926 main.go:141] libmachine: (multinode-405494-m02) DBG | I0116 02:57:32.975565  488257 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17965-468241/.minikube/machines/multinode-405494-m02 ...
	I0116 02:57:32.975736  487926 main.go:141] libmachine: (multinode-405494-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17965-468241/.minikube/machines/multinode-405494-m02
	I0116 02:57:32.975771  487926 main.go:141] libmachine: (multinode-405494-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17965-468241/.minikube/machines
	I0116 02:57:32.975794  487926 main.go:141] libmachine: (multinode-405494-m02) Setting executable bit set on /home/jenkins/minikube-integration/17965-468241/.minikube/machines/multinode-405494-m02 (perms=drwx------)
	I0116 02:57:32.975817  487926 main.go:141] libmachine: (multinode-405494-m02) Setting executable bit set on /home/jenkins/minikube-integration/17965-468241/.minikube/machines (perms=drwxr-xr-x)
	I0116 02:57:32.975834  487926 main.go:141] libmachine: (multinode-405494-m02) Setting executable bit set on /home/jenkins/minikube-integration/17965-468241/.minikube (perms=drwxr-xr-x)
	I0116 02:57:32.975853  487926 main.go:141] libmachine: (multinode-405494-m02) Setting executable bit set on /home/jenkins/minikube-integration/17965-468241 (perms=drwxrwxr-x)
	I0116 02:57:32.975873  487926 main.go:141] libmachine: (multinode-405494-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17965-468241/.minikube
	I0116 02:57:32.975886  487926 main.go:141] libmachine: (multinode-405494-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0116 02:57:32.975903  487926 main.go:141] libmachine: (multinode-405494-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0116 02:57:32.975918  487926 main.go:141] libmachine: (multinode-405494-m02) Creating domain...
	I0116 02:57:32.975935  487926 main.go:141] libmachine: (multinode-405494-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17965-468241
	I0116 02:57:32.975947  487926 main.go:141] libmachine: (multinode-405494-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0116 02:57:32.975974  487926 main.go:141] libmachine: (multinode-405494-m02) DBG | Checking permissions on dir: /home/jenkins
	I0116 02:57:32.975985  487926 main.go:141] libmachine: (multinode-405494-m02) DBG | Checking permissions on dir: /home
	I0116 02:57:32.975996  487926 main.go:141] libmachine: (multinode-405494-m02) DBG | Skipping /home - not owner
	I0116 02:57:32.976738  487926 main.go:141] libmachine: (multinode-405494-m02) define libvirt domain using xml: 
	I0116 02:57:32.976759  487926 main.go:141] libmachine: (multinode-405494-m02) <domain type='kvm'>
	I0116 02:57:32.976793  487926 main.go:141] libmachine: (multinode-405494-m02)   <name>multinode-405494-m02</name>
	I0116 02:57:32.976821  487926 main.go:141] libmachine: (multinode-405494-m02)   <memory unit='MiB'>2200</memory>
	I0116 02:57:32.976835  487926 main.go:141] libmachine: (multinode-405494-m02)   <vcpu>2</vcpu>
	I0116 02:57:32.976844  487926 main.go:141] libmachine: (multinode-405494-m02)   <features>
	I0116 02:57:32.976852  487926 main.go:141] libmachine: (multinode-405494-m02)     <acpi/>
	I0116 02:57:32.976859  487926 main.go:141] libmachine: (multinode-405494-m02)     <apic/>
	I0116 02:57:32.976866  487926 main.go:141] libmachine: (multinode-405494-m02)     <pae/>
	I0116 02:57:32.976875  487926 main.go:141] libmachine: (multinode-405494-m02)     
	I0116 02:57:32.976887  487926 main.go:141] libmachine: (multinode-405494-m02)   </features>
	I0116 02:57:32.976904  487926 main.go:141] libmachine: (multinode-405494-m02)   <cpu mode='host-passthrough'>
	I0116 02:57:32.976915  487926 main.go:141] libmachine: (multinode-405494-m02)   
	I0116 02:57:32.976922  487926 main.go:141] libmachine: (multinode-405494-m02)   </cpu>
	I0116 02:57:32.976932  487926 main.go:141] libmachine: (multinode-405494-m02)   <os>
	I0116 02:57:32.976938  487926 main.go:141] libmachine: (multinode-405494-m02)     <type>hvm</type>
	I0116 02:57:32.976944  487926 main.go:141] libmachine: (multinode-405494-m02)     <boot dev='cdrom'/>
	I0116 02:57:32.976953  487926 main.go:141] libmachine: (multinode-405494-m02)     <boot dev='hd'/>
	I0116 02:57:32.976964  487926 main.go:141] libmachine: (multinode-405494-m02)     <bootmenu enable='no'/>
	I0116 02:57:32.976978  487926 main.go:141] libmachine: (multinode-405494-m02)   </os>
	I0116 02:57:32.976995  487926 main.go:141] libmachine: (multinode-405494-m02)   <devices>
	I0116 02:57:32.977006  487926 main.go:141] libmachine: (multinode-405494-m02)     <disk type='file' device='cdrom'>
	I0116 02:57:32.977017  487926 main.go:141] libmachine: (multinode-405494-m02)       <source file='/home/jenkins/minikube-integration/17965-468241/.minikube/machines/multinode-405494-m02/boot2docker.iso'/>
	I0116 02:57:32.977025  487926 main.go:141] libmachine: (multinode-405494-m02)       <target dev='hdc' bus='scsi'/>
	I0116 02:57:32.977038  487926 main.go:141] libmachine: (multinode-405494-m02)       <readonly/>
	I0116 02:57:32.977054  487926 main.go:141] libmachine: (multinode-405494-m02)     </disk>
	I0116 02:57:32.977073  487926 main.go:141] libmachine: (multinode-405494-m02)     <disk type='file' device='disk'>
	I0116 02:57:32.977088  487926 main.go:141] libmachine: (multinode-405494-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0116 02:57:32.977106  487926 main.go:141] libmachine: (multinode-405494-m02)       <source file='/home/jenkins/minikube-integration/17965-468241/.minikube/machines/multinode-405494-m02/multinode-405494-m02.rawdisk'/>
	I0116 02:57:32.977120  487926 main.go:141] libmachine: (multinode-405494-m02)       <target dev='hda' bus='virtio'/>
	I0116 02:57:32.977145  487926 main.go:141] libmachine: (multinode-405494-m02)     </disk>
	I0116 02:57:32.977162  487926 main.go:141] libmachine: (multinode-405494-m02)     <interface type='network'>
	I0116 02:57:32.977175  487926 main.go:141] libmachine: (multinode-405494-m02)       <source network='mk-multinode-405494'/>
	I0116 02:57:32.977187  487926 main.go:141] libmachine: (multinode-405494-m02)       <model type='virtio'/>
	I0116 02:57:32.977199  487926 main.go:141] libmachine: (multinode-405494-m02)     </interface>
	I0116 02:57:32.977207  487926 main.go:141] libmachine: (multinode-405494-m02)     <interface type='network'>
	I0116 02:57:32.977216  487926 main.go:141] libmachine: (multinode-405494-m02)       <source network='default'/>
	I0116 02:57:32.977224  487926 main.go:141] libmachine: (multinode-405494-m02)       <model type='virtio'/>
	I0116 02:57:32.977231  487926 main.go:141] libmachine: (multinode-405494-m02)     </interface>
	I0116 02:57:32.977239  487926 main.go:141] libmachine: (multinode-405494-m02)     <serial type='pty'>
	I0116 02:57:32.977245  487926 main.go:141] libmachine: (multinode-405494-m02)       <target port='0'/>
	I0116 02:57:32.977251  487926 main.go:141] libmachine: (multinode-405494-m02)     </serial>
	I0116 02:57:32.977257  487926 main.go:141] libmachine: (multinode-405494-m02)     <console type='pty'>
	I0116 02:57:32.977264  487926 main.go:141] libmachine: (multinode-405494-m02)       <target type='serial' port='0'/>
	I0116 02:57:32.977298  487926 main.go:141] libmachine: (multinode-405494-m02)     </console>
	I0116 02:57:32.977327  487926 main.go:141] libmachine: (multinode-405494-m02)     <rng model='virtio'>
	I0116 02:57:32.977347  487926 main.go:141] libmachine: (multinode-405494-m02)       <backend model='random'>/dev/random</backend>
	I0116 02:57:32.977365  487926 main.go:141] libmachine: (multinode-405494-m02)     </rng>
	I0116 02:57:32.977385  487926 main.go:141] libmachine: (multinode-405494-m02)     
	I0116 02:57:32.977400  487926 main.go:141] libmachine: (multinode-405494-m02)     
	I0116 02:57:32.977412  487926 main.go:141] libmachine: (multinode-405494-m02)   </devices>
	I0116 02:57:32.977423  487926 main.go:141] libmachine: (multinode-405494-m02) </domain>
	I0116 02:57:32.977438  487926 main.go:141] libmachine: (multinode-405494-m02) 
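The XML above is handed to libvirt to define the guest; once the domain exists it can be inspected with stock virsh tooling against the qemu:///system URI used by the kvm2 driver, for example:

    virsh -c qemu:///system dumpxml multinode-405494-m02
    virsh -c qemu:///system domiflist multinode-405494-m02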
	I0116 02:57:32.984958  487926 main.go:141] libmachine: (multinode-405494-m02) DBG | domain multinode-405494-m02 has defined MAC address 52:54:00:91:c5:10 in network default
	I0116 02:57:32.985792  487926 main.go:141] libmachine: (multinode-405494-m02) Ensuring networks are active...
	I0116 02:57:32.985812  487926 main.go:141] libmachine: (multinode-405494-m02) DBG | domain multinode-405494-m02 has defined MAC address 52:54:00:3c:08:8b in network mk-multinode-405494
	I0116 02:57:32.986626  487926 main.go:141] libmachine: (multinode-405494-m02) Ensuring network default is active
	I0116 02:57:32.987055  487926 main.go:141] libmachine: (multinode-405494-m02) Ensuring network mk-multinode-405494 is active
	I0116 02:57:32.987456  487926 main.go:141] libmachine: (multinode-405494-m02) Getting domain xml...
	I0116 02:57:32.988309  487926 main.go:141] libmachine: (multinode-405494-m02) Creating domain...
	I0116 02:57:33.311307  487926 main.go:141] libmachine: (multinode-405494-m02) Waiting to get IP...
	I0116 02:57:33.312013  487926 main.go:141] libmachine: (multinode-405494-m02) DBG | domain multinode-405494-m02 has defined MAC address 52:54:00:3c:08:8b in network mk-multinode-405494
	I0116 02:57:33.312356  487926 main.go:141] libmachine: (multinode-405494-m02) DBG | unable to find current IP address of domain multinode-405494-m02 in network mk-multinode-405494
	I0116 02:57:33.312401  487926 main.go:141] libmachine: (multinode-405494-m02) DBG | I0116 02:57:33.312345  488257 retry.go:31] will retry after 195.379006ms: waiting for machine to come up
	I0116 02:57:33.509791  487926 main.go:141] libmachine: (multinode-405494-m02) DBG | domain multinode-405494-m02 has defined MAC address 52:54:00:3c:08:8b in network mk-multinode-405494
	I0116 02:57:33.510264  487926 main.go:141] libmachine: (multinode-405494-m02) DBG | unable to find current IP address of domain multinode-405494-m02 in network mk-multinode-405494
	I0116 02:57:33.510298  487926 main.go:141] libmachine: (multinode-405494-m02) DBG | I0116 02:57:33.510216  488257 retry.go:31] will retry after 293.690664ms: waiting for machine to come up
	I0116 02:57:33.805858  487926 main.go:141] libmachine: (multinode-405494-m02) DBG | domain multinode-405494-m02 has defined MAC address 52:54:00:3c:08:8b in network mk-multinode-405494
	I0116 02:57:33.806188  487926 main.go:141] libmachine: (multinode-405494-m02) DBG | unable to find current IP address of domain multinode-405494-m02 in network mk-multinode-405494
	I0116 02:57:33.806217  487926 main.go:141] libmachine: (multinode-405494-m02) DBG | I0116 02:57:33.806148  488257 retry.go:31] will retry after 321.109954ms: waiting for machine to come up
	I0116 02:57:34.130106  487926 main.go:141] libmachine: (multinode-405494-m02) DBG | domain multinode-405494-m02 has defined MAC address 52:54:00:3c:08:8b in network mk-multinode-405494
	I0116 02:57:34.130631  487926 main.go:141] libmachine: (multinode-405494-m02) DBG | unable to find current IP address of domain multinode-405494-m02 in network mk-multinode-405494
	I0116 02:57:34.130665  487926 main.go:141] libmachine: (multinode-405494-m02) DBG | I0116 02:57:34.130559  488257 retry.go:31] will retry after 375.515948ms: waiting for machine to come up
	I0116 02:57:34.508114  487926 main.go:141] libmachine: (multinode-405494-m02) DBG | domain multinode-405494-m02 has defined MAC address 52:54:00:3c:08:8b in network mk-multinode-405494
	I0116 02:57:34.508602  487926 main.go:141] libmachine: (multinode-405494-m02) DBG | unable to find current IP address of domain multinode-405494-m02 in network mk-multinode-405494
	I0116 02:57:34.508657  487926 main.go:141] libmachine: (multinode-405494-m02) DBG | I0116 02:57:34.508538  488257 retry.go:31] will retry after 538.963732ms: waiting for machine to come up
	I0116 02:57:35.049345  487926 main.go:141] libmachine: (multinode-405494-m02) DBG | domain multinode-405494-m02 has defined MAC address 52:54:00:3c:08:8b in network mk-multinode-405494
	I0116 02:57:35.049856  487926 main.go:141] libmachine: (multinode-405494-m02) DBG | unable to find current IP address of domain multinode-405494-m02 in network mk-multinode-405494
	I0116 02:57:35.049888  487926 main.go:141] libmachine: (multinode-405494-m02) DBG | I0116 02:57:35.049801  488257 retry.go:31] will retry after 894.845132ms: waiting for machine to come up
	I0116 02:57:35.945874  487926 main.go:141] libmachine: (multinode-405494-m02) DBG | domain multinode-405494-m02 has defined MAC address 52:54:00:3c:08:8b in network mk-multinode-405494
	I0116 02:57:35.946279  487926 main.go:141] libmachine: (multinode-405494-m02) DBG | unable to find current IP address of domain multinode-405494-m02 in network mk-multinode-405494
	I0116 02:57:35.946308  487926 main.go:141] libmachine: (multinode-405494-m02) DBG | I0116 02:57:35.946231  488257 retry.go:31] will retry after 896.646951ms: waiting for machine to come up
	I0116 02:57:36.844431  487926 main.go:141] libmachine: (multinode-405494-m02) DBG | domain multinode-405494-m02 has defined MAC address 52:54:00:3c:08:8b in network mk-multinode-405494
	I0116 02:57:36.844989  487926 main.go:141] libmachine: (multinode-405494-m02) DBG | unable to find current IP address of domain multinode-405494-m02 in network mk-multinode-405494
	I0116 02:57:36.845018  487926 main.go:141] libmachine: (multinode-405494-m02) DBG | I0116 02:57:36.844933  488257 retry.go:31] will retry after 1.171209219s: waiting for machine to come up
	I0116 02:57:38.017580  487926 main.go:141] libmachine: (multinode-405494-m02) DBG | domain multinode-405494-m02 has defined MAC address 52:54:00:3c:08:8b in network mk-multinode-405494
	I0116 02:57:38.018015  487926 main.go:141] libmachine: (multinode-405494-m02) DBG | unable to find current IP address of domain multinode-405494-m02 in network mk-multinode-405494
	I0116 02:57:38.018046  487926 main.go:141] libmachine: (multinode-405494-m02) DBG | I0116 02:57:38.017960  488257 retry.go:31] will retry after 1.248475487s: waiting for machine to come up
	I0116 02:57:39.268785  487926 main.go:141] libmachine: (multinode-405494-m02) DBG | domain multinode-405494-m02 has defined MAC address 52:54:00:3c:08:8b in network mk-multinode-405494
	I0116 02:57:39.269314  487926 main.go:141] libmachine: (multinode-405494-m02) DBG | unable to find current IP address of domain multinode-405494-m02 in network mk-multinode-405494
	I0116 02:57:39.269348  487926 main.go:141] libmachine: (multinode-405494-m02) DBG | I0116 02:57:39.269257  488257 retry.go:31] will retry after 1.717172766s: waiting for machine to come up
	I0116 02:57:40.989192  487926 main.go:141] libmachine: (multinode-405494-m02) DBG | domain multinode-405494-m02 has defined MAC address 52:54:00:3c:08:8b in network mk-multinode-405494
	I0116 02:57:40.989656  487926 main.go:141] libmachine: (multinode-405494-m02) DBG | unable to find current IP address of domain multinode-405494-m02 in network mk-multinode-405494
	I0116 02:57:40.989685  487926 main.go:141] libmachine: (multinode-405494-m02) DBG | I0116 02:57:40.989604  488257 retry.go:31] will retry after 2.63169784s: waiting for machine to come up
	I0116 02:57:43.622676  487926 main.go:141] libmachine: (multinode-405494-m02) DBG | domain multinode-405494-m02 has defined MAC address 52:54:00:3c:08:8b in network mk-multinode-405494
	I0116 02:57:43.623155  487926 main.go:141] libmachine: (multinode-405494-m02) DBG | unable to find current IP address of domain multinode-405494-m02 in network mk-multinode-405494
	I0116 02:57:43.623180  487926 main.go:141] libmachine: (multinode-405494-m02) DBG | I0116 02:57:43.623085  488257 retry.go:31] will retry after 2.550696734s: waiting for machine to come up
	I0116 02:57:46.176767  487926 main.go:141] libmachine: (multinode-405494-m02) DBG | domain multinode-405494-m02 has defined MAC address 52:54:00:3c:08:8b in network mk-multinode-405494
	I0116 02:57:46.177162  487926 main.go:141] libmachine: (multinode-405494-m02) DBG | unable to find current IP address of domain multinode-405494-m02 in network mk-multinode-405494
	I0116 02:57:46.177202  487926 main.go:141] libmachine: (multinode-405494-m02) DBG | I0116 02:57:46.177118  488257 retry.go:31] will retry after 4.032886366s: waiting for machine to come up
	I0116 02:57:50.213595  487926 main.go:141] libmachine: (multinode-405494-m02) DBG | domain multinode-405494-m02 has defined MAC address 52:54:00:3c:08:8b in network mk-multinode-405494
	I0116 02:57:50.214039  487926 main.go:141] libmachine: (multinode-405494-m02) DBG | unable to find current IP address of domain multinode-405494-m02 in network mk-multinode-405494
	I0116 02:57:50.214072  487926 main.go:141] libmachine: (multinode-405494-m02) DBG | I0116 02:57:50.213987  488257 retry.go:31] will retry after 4.801141977s: waiting for machine to come up
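The back-off loop above is polling libvirt for a DHCP lease on the private network; the lease table it is waiting on can be dumped directly with:

    virsh -c qemu:///system net-dhcp-leases mk-multinode-405494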
	I0116 02:57:55.017198  487926 main.go:141] libmachine: (multinode-405494-m02) DBG | domain multinode-405494-m02 has defined MAC address 52:54:00:3c:08:8b in network mk-multinode-405494
	I0116 02:57:55.017665  487926 main.go:141] libmachine: (multinode-405494-m02) Found IP for machine: 192.168.39.32
	I0116 02:57:55.017703  487926 main.go:141] libmachine: (multinode-405494-m02) Reserving static IP address...
	I0116 02:57:55.017716  487926 main.go:141] libmachine: (multinode-405494-m02) DBG | domain multinode-405494-m02 has current primary IP address 192.168.39.32 and MAC address 52:54:00:3c:08:8b in network mk-multinode-405494
	I0116 02:57:55.018121  487926 main.go:141] libmachine: (multinode-405494-m02) DBG | unable to find host DHCP lease matching {name: "multinode-405494-m02", mac: "52:54:00:3c:08:8b", ip: "192.168.39.32"} in network mk-multinode-405494
	I0116 02:57:55.095381  487926 main.go:141] libmachine: (multinode-405494-m02) DBG | Getting to WaitForSSH function...
	I0116 02:57:55.095417  487926 main.go:141] libmachine: (multinode-405494-m02) Reserved static IP address: 192.168.39.32
	I0116 02:57:55.095431  487926 main.go:141] libmachine: (multinode-405494-m02) Waiting for SSH to be available...
	I0116 02:57:55.098156  487926 main.go:141] libmachine: (multinode-405494-m02) DBG | domain multinode-405494-m02 has defined MAC address 52:54:00:3c:08:8b in network mk-multinode-405494
	I0116 02:57:55.098531  487926 main.go:141] libmachine: (multinode-405494-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:08:8b", ip: ""} in network mk-multinode-405494: {Iface:virbr1 ExpiryTime:2024-01-16 03:57:47 +0000 UTC Type:0 Mac:52:54:00:3c:08:8b Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:minikube Clientid:01:52:54:00:3c:08:8b}
	I0116 02:57:55.098557  487926 main.go:141] libmachine: (multinode-405494-m02) DBG | domain multinode-405494-m02 has defined IP address 192.168.39.32 and MAC address 52:54:00:3c:08:8b in network mk-multinode-405494
	I0116 02:57:55.098695  487926 main.go:141] libmachine: (multinode-405494-m02) DBG | Using SSH client type: external
	I0116 02:57:55.098725  487926 main.go:141] libmachine: (multinode-405494-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/17965-468241/.minikube/machines/multinode-405494-m02/id_rsa (-rw-------)
	I0116 02:57:55.098767  487926 main.go:141] libmachine: (multinode-405494-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.32 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17965-468241/.minikube/machines/multinode-405494-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0116 02:57:55.098781  487926 main.go:141] libmachine: (multinode-405494-m02) DBG | About to run SSH command:
	I0116 02:57:55.098796  487926 main.go:141] libmachine: (multinode-405494-m02) DBG | exit 0
	I0116 02:57:55.189349  487926 main.go:141] libmachine: (multinode-405494-m02) DBG | SSH cmd err, output: <nil>: 
	I0116 02:57:55.189604  487926 main.go:141] libmachine: (multinode-405494-m02) KVM machine creation complete!
	I0116 02:57:55.189957  487926 main.go:141] libmachine: (multinode-405494-m02) Calling .GetConfigRaw
	I0116 02:57:55.190498  487926 main.go:141] libmachine: (multinode-405494-m02) Calling .DriverName
	I0116 02:57:55.190675  487926 main.go:141] libmachine: (multinode-405494-m02) Calling .DriverName
	I0116 02:57:55.190839  487926 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0116 02:57:55.190856  487926 main.go:141] libmachine: (multinode-405494-m02) Calling .GetState
	I0116 02:57:55.191977  487926 main.go:141] libmachine: Detecting operating system of created instance...
	I0116 02:57:55.192002  487926 main.go:141] libmachine: Waiting for SSH to be available...
	I0116 02:57:55.192012  487926 main.go:141] libmachine: Getting to WaitForSSH function...
	I0116 02:57:55.192023  487926 main.go:141] libmachine: (multinode-405494-m02) Calling .GetSSHHostname
	I0116 02:57:55.194247  487926 main.go:141] libmachine: (multinode-405494-m02) DBG | domain multinode-405494-m02 has defined MAC address 52:54:00:3c:08:8b in network mk-multinode-405494
	I0116 02:57:55.194617  487926 main.go:141] libmachine: (multinode-405494-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:08:8b", ip: ""} in network mk-multinode-405494: {Iface:virbr1 ExpiryTime:2024-01-16 03:57:47 +0000 UTC Type:0 Mac:52:54:00:3c:08:8b Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:multinode-405494-m02 Clientid:01:52:54:00:3c:08:8b}
	I0116 02:57:55.194648  487926 main.go:141] libmachine: (multinode-405494-m02) DBG | domain multinode-405494-m02 has defined IP address 192.168.39.32 and MAC address 52:54:00:3c:08:8b in network mk-multinode-405494
	I0116 02:57:55.194876  487926 main.go:141] libmachine: (multinode-405494-m02) Calling .GetSSHPort
	I0116 02:57:55.195056  487926 main.go:141] libmachine: (multinode-405494-m02) Calling .GetSSHKeyPath
	I0116 02:57:55.195251  487926 main.go:141] libmachine: (multinode-405494-m02) Calling .GetSSHKeyPath
	I0116 02:57:55.195448  487926 main.go:141] libmachine: (multinode-405494-m02) Calling .GetSSHUsername
	I0116 02:57:55.195661  487926 main.go:141] libmachine: Using SSH client type: native
	I0116 02:57:55.196094  487926 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.32 22 <nil> <nil>}
	I0116 02:57:55.196112  487926 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0116 02:57:55.307432  487926 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0116 02:57:55.307461  487926 main.go:141] libmachine: Detecting the provisioner...
	I0116 02:57:55.307475  487926 main.go:141] libmachine: (multinode-405494-m02) Calling .GetSSHHostname
	I0116 02:57:55.310470  487926 main.go:141] libmachine: (multinode-405494-m02) DBG | domain multinode-405494-m02 has defined MAC address 52:54:00:3c:08:8b in network mk-multinode-405494
	I0116 02:57:55.310796  487926 main.go:141] libmachine: (multinode-405494-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:08:8b", ip: ""} in network mk-multinode-405494: {Iface:virbr1 ExpiryTime:2024-01-16 03:57:47 +0000 UTC Type:0 Mac:52:54:00:3c:08:8b Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:multinode-405494-m02 Clientid:01:52:54:00:3c:08:8b}
	I0116 02:57:55.310834  487926 main.go:141] libmachine: (multinode-405494-m02) DBG | domain multinode-405494-m02 has defined IP address 192.168.39.32 and MAC address 52:54:00:3c:08:8b in network mk-multinode-405494
	I0116 02:57:55.310946  487926 main.go:141] libmachine: (multinode-405494-m02) Calling .GetSSHPort
	I0116 02:57:55.311173  487926 main.go:141] libmachine: (multinode-405494-m02) Calling .GetSSHKeyPath
	I0116 02:57:55.311382  487926 main.go:141] libmachine: (multinode-405494-m02) Calling .GetSSHKeyPath
	I0116 02:57:55.311517  487926 main.go:141] libmachine: (multinode-405494-m02) Calling .GetSSHUsername
	I0116 02:57:55.311697  487926 main.go:141] libmachine: Using SSH client type: native
	I0116 02:57:55.312021  487926 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.32 22 <nil> <nil>}
	I0116 02:57:55.312043  487926 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0116 02:57:55.429371  487926 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g19d536a-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I0116 02:57:55.429432  487926 main.go:141] libmachine: found compatible host: buildroot
	I0116 02:57:55.429439  487926 main.go:141] libmachine: Provisioning with buildroot...
	I0116 02:57:55.429448  487926 main.go:141] libmachine: (multinode-405494-m02) Calling .GetMachineName
	I0116 02:57:55.429883  487926 buildroot.go:166] provisioning hostname "multinode-405494-m02"
	I0116 02:57:55.429950  487926 main.go:141] libmachine: (multinode-405494-m02) Calling .GetMachineName
	I0116 02:57:55.430180  487926 main.go:141] libmachine: (multinode-405494-m02) Calling .GetSSHHostname
	I0116 02:57:55.433106  487926 main.go:141] libmachine: (multinode-405494-m02) DBG | domain multinode-405494-m02 has defined MAC address 52:54:00:3c:08:8b in network mk-multinode-405494
	I0116 02:57:55.433500  487926 main.go:141] libmachine: (multinode-405494-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:08:8b", ip: ""} in network mk-multinode-405494: {Iface:virbr1 ExpiryTime:2024-01-16 03:57:47 +0000 UTC Type:0 Mac:52:54:00:3c:08:8b Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:multinode-405494-m02 Clientid:01:52:54:00:3c:08:8b}
	I0116 02:57:55.433534  487926 main.go:141] libmachine: (multinode-405494-m02) DBG | domain multinode-405494-m02 has defined IP address 192.168.39.32 and MAC address 52:54:00:3c:08:8b in network mk-multinode-405494
	I0116 02:57:55.433652  487926 main.go:141] libmachine: (multinode-405494-m02) Calling .GetSSHPort
	I0116 02:57:55.433893  487926 main.go:141] libmachine: (multinode-405494-m02) Calling .GetSSHKeyPath
	I0116 02:57:55.434059  487926 main.go:141] libmachine: (multinode-405494-m02) Calling .GetSSHKeyPath
	I0116 02:57:55.434224  487926 main.go:141] libmachine: (multinode-405494-m02) Calling .GetSSHUsername
	I0116 02:57:55.434404  487926 main.go:141] libmachine: Using SSH client type: native
	I0116 02:57:55.434784  487926 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.32 22 <nil> <nil>}
	I0116 02:57:55.434804  487926 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-405494-m02 && echo "multinode-405494-m02" | sudo tee /etc/hostname
	I0116 02:57:55.563344  487926 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-405494-m02
	
	I0116 02:57:55.563386  487926 main.go:141] libmachine: (multinode-405494-m02) Calling .GetSSHHostname
	I0116 02:57:55.566250  487926 main.go:141] libmachine: (multinode-405494-m02) DBG | domain multinode-405494-m02 has defined MAC address 52:54:00:3c:08:8b in network mk-multinode-405494
	I0116 02:57:55.566607  487926 main.go:141] libmachine: (multinode-405494-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:08:8b", ip: ""} in network mk-multinode-405494: {Iface:virbr1 ExpiryTime:2024-01-16 03:57:47 +0000 UTC Type:0 Mac:52:54:00:3c:08:8b Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:multinode-405494-m02 Clientid:01:52:54:00:3c:08:8b}
	I0116 02:57:55.566634  487926 main.go:141] libmachine: (multinode-405494-m02) DBG | domain multinode-405494-m02 has defined IP address 192.168.39.32 and MAC address 52:54:00:3c:08:8b in network mk-multinode-405494
	I0116 02:57:55.566886  487926 main.go:141] libmachine: (multinode-405494-m02) Calling .GetSSHPort
	I0116 02:57:55.567110  487926 main.go:141] libmachine: (multinode-405494-m02) Calling .GetSSHKeyPath
	I0116 02:57:55.567356  487926 main.go:141] libmachine: (multinode-405494-m02) Calling .GetSSHKeyPath
	I0116 02:57:55.567524  487926 main.go:141] libmachine: (multinode-405494-m02) Calling .GetSSHUsername
	I0116 02:57:55.567682  487926 main.go:141] libmachine: Using SSH client type: native
	I0116 02:57:55.568002  487926 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.32 22 <nil> <nil>}
	I0116 02:57:55.568063  487926 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-405494-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-405494-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-405494-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0116 02:57:55.692961  487926 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0116 02:57:55.693013  487926 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17965-468241/.minikube CaCertPath:/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17965-468241/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17965-468241/.minikube}
	I0116 02:57:55.693030  487926 buildroot.go:174] setting up certificates
	I0116 02:57:55.693040  487926 provision.go:83] configureAuth start
	I0116 02:57:55.693050  487926 main.go:141] libmachine: (multinode-405494-m02) Calling .GetMachineName
	I0116 02:57:55.693431  487926 main.go:141] libmachine: (multinode-405494-m02) Calling .GetIP
	I0116 02:57:55.696064  487926 main.go:141] libmachine: (multinode-405494-m02) DBG | domain multinode-405494-m02 has defined MAC address 52:54:00:3c:08:8b in network mk-multinode-405494
	I0116 02:57:55.696394  487926 main.go:141] libmachine: (multinode-405494-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:08:8b", ip: ""} in network mk-multinode-405494: {Iface:virbr1 ExpiryTime:2024-01-16 03:57:47 +0000 UTC Type:0 Mac:52:54:00:3c:08:8b Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:multinode-405494-m02 Clientid:01:52:54:00:3c:08:8b}
	I0116 02:57:55.696428  487926 main.go:141] libmachine: (multinode-405494-m02) DBG | domain multinode-405494-m02 has defined IP address 192.168.39.32 and MAC address 52:54:00:3c:08:8b in network mk-multinode-405494
	I0116 02:57:55.696590  487926 main.go:141] libmachine: (multinode-405494-m02) Calling .GetSSHHostname
	I0116 02:57:55.698879  487926 main.go:141] libmachine: (multinode-405494-m02) DBG | domain multinode-405494-m02 has defined MAC address 52:54:00:3c:08:8b in network mk-multinode-405494
	I0116 02:57:55.699291  487926 main.go:141] libmachine: (multinode-405494-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:08:8b", ip: ""} in network mk-multinode-405494: {Iface:virbr1 ExpiryTime:2024-01-16 03:57:47 +0000 UTC Type:0 Mac:52:54:00:3c:08:8b Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:multinode-405494-m02 Clientid:01:52:54:00:3c:08:8b}
	I0116 02:57:55.699319  487926 main.go:141] libmachine: (multinode-405494-m02) DBG | domain multinode-405494-m02 has defined IP address 192.168.39.32 and MAC address 52:54:00:3c:08:8b in network mk-multinode-405494
	I0116 02:57:55.699412  487926 provision.go:138] copyHostCerts
	I0116 02:57:55.699439  487926 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17965-468241/.minikube/ca.pem
	I0116 02:57:55.699473  487926 exec_runner.go:144] found /home/jenkins/minikube-integration/17965-468241/.minikube/ca.pem, removing ...
	I0116 02:57:55.699484  487926 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17965-468241/.minikube/ca.pem
	I0116 02:57:55.699552  487926 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17965-468241/.minikube/ca.pem (1078 bytes)
	I0116 02:57:55.699633  487926 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17965-468241/.minikube/cert.pem
	I0116 02:57:55.699650  487926 exec_runner.go:144] found /home/jenkins/minikube-integration/17965-468241/.minikube/cert.pem, removing ...
	I0116 02:57:55.699657  487926 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17965-468241/.minikube/cert.pem
	I0116 02:57:55.699680  487926 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17965-468241/.minikube/cert.pem (1123 bytes)
	I0116 02:57:55.699747  487926 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17965-468241/.minikube/key.pem
	I0116 02:57:55.699770  487926 exec_runner.go:144] found /home/jenkins/minikube-integration/17965-468241/.minikube/key.pem, removing ...
	I0116 02:57:55.699775  487926 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17965-468241/.minikube/key.pem
	I0116 02:57:55.699812  487926 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17965-468241/.minikube/key.pem (1679 bytes)
	I0116 02:57:55.699885  487926 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17965-468241/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca-key.pem org=jenkins.multinode-405494-m02 san=[192.168.39.32 192.168.39.32 localhost 127.0.0.1 minikube multinode-405494-m02]
	I0116 02:57:55.768597  487926 provision.go:172] copyRemoteCerts
	I0116 02:57:55.768694  487926 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0116 02:57:55.768731  487926 main.go:141] libmachine: (multinode-405494-m02) Calling .GetSSHHostname
	I0116 02:57:55.771242  487926 main.go:141] libmachine: (multinode-405494-m02) DBG | domain multinode-405494-m02 has defined MAC address 52:54:00:3c:08:8b in network mk-multinode-405494
	I0116 02:57:55.771602  487926 main.go:141] libmachine: (multinode-405494-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:08:8b", ip: ""} in network mk-multinode-405494: {Iface:virbr1 ExpiryTime:2024-01-16 03:57:47 +0000 UTC Type:0 Mac:52:54:00:3c:08:8b Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:multinode-405494-m02 Clientid:01:52:54:00:3c:08:8b}
	I0116 02:57:55.771633  487926 main.go:141] libmachine: (multinode-405494-m02) DBG | domain multinode-405494-m02 has defined IP address 192.168.39.32 and MAC address 52:54:00:3c:08:8b in network mk-multinode-405494
	I0116 02:57:55.771826  487926 main.go:141] libmachine: (multinode-405494-m02) Calling .GetSSHPort
	I0116 02:57:55.772103  487926 main.go:141] libmachine: (multinode-405494-m02) Calling .GetSSHKeyPath
	I0116 02:57:55.772289  487926 main.go:141] libmachine: (multinode-405494-m02) Calling .GetSSHUsername
	I0116 02:57:55.772457  487926 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/multinode-405494-m02/id_rsa Username:docker}
	I0116 02:57:55.858930  487926 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0116 02:57:55.859022  487926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0116 02:57:55.882387  487926 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-468241/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0116 02:57:55.882471  487926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0116 02:57:55.905894  487926 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-468241/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0116 02:57:55.905987  487926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0116 02:57:55.928725  487926 provision.go:86] duration metric: configureAuth took 235.67195ms
	I0116 02:57:55.928762  487926 buildroot.go:189] setting minikube options for container-runtime
	I0116 02:57:55.928981  487926 config.go:182] Loaded profile config "multinode-405494": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 02:57:55.929070  487926 main.go:141] libmachine: (multinode-405494-m02) Calling .GetSSHHostname
	I0116 02:57:55.931430  487926 main.go:141] libmachine: (multinode-405494-m02) DBG | domain multinode-405494-m02 has defined MAC address 52:54:00:3c:08:8b in network mk-multinode-405494
	I0116 02:57:55.931820  487926 main.go:141] libmachine: (multinode-405494-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:08:8b", ip: ""} in network mk-multinode-405494: {Iface:virbr1 ExpiryTime:2024-01-16 03:57:47 +0000 UTC Type:0 Mac:52:54:00:3c:08:8b Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:multinode-405494-m02 Clientid:01:52:54:00:3c:08:8b}
	I0116 02:57:55.931861  487926 main.go:141] libmachine: (multinode-405494-m02) DBG | domain multinode-405494-m02 has defined IP address 192.168.39.32 and MAC address 52:54:00:3c:08:8b in network mk-multinode-405494
	I0116 02:57:55.932074  487926 main.go:141] libmachine: (multinode-405494-m02) Calling .GetSSHPort
	I0116 02:57:55.932300  487926 main.go:141] libmachine: (multinode-405494-m02) Calling .GetSSHKeyPath
	I0116 02:57:55.932493  487926 main.go:141] libmachine: (multinode-405494-m02) Calling .GetSSHKeyPath
	I0116 02:57:55.932635  487926 main.go:141] libmachine: (multinode-405494-m02) Calling .GetSSHUsername
	I0116 02:57:55.932797  487926 main.go:141] libmachine: Using SSH client type: native
	I0116 02:57:55.933112  487926 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.32 22 <nil> <nil>}
	I0116 02:57:55.933129  487926 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0116 02:57:56.251197  487926 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0116 02:57:56.251232  487926 main.go:141] libmachine: Checking connection to Docker...
	I0116 02:57:56.251246  487926 main.go:141] libmachine: (multinode-405494-m02) Calling .GetURL
	I0116 02:57:56.252614  487926 main.go:141] libmachine: (multinode-405494-m02) DBG | Using libvirt version 6000000
	I0116 02:57:56.255344  487926 main.go:141] libmachine: (multinode-405494-m02) DBG | domain multinode-405494-m02 has defined MAC address 52:54:00:3c:08:8b in network mk-multinode-405494
	I0116 02:57:56.255857  487926 main.go:141] libmachine: (multinode-405494-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:08:8b", ip: ""} in network mk-multinode-405494: {Iface:virbr1 ExpiryTime:2024-01-16 03:57:47 +0000 UTC Type:0 Mac:52:54:00:3c:08:8b Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:multinode-405494-m02 Clientid:01:52:54:00:3c:08:8b}
	I0116 02:57:56.255891  487926 main.go:141] libmachine: (multinode-405494-m02) DBG | domain multinode-405494-m02 has defined IP address 192.168.39.32 and MAC address 52:54:00:3c:08:8b in network mk-multinode-405494
	I0116 02:57:56.256111  487926 main.go:141] libmachine: Docker is up and running!
	I0116 02:57:56.256126  487926 main.go:141] libmachine: Reticulating splines...
	I0116 02:57:56.256133  487926 client.go:171] LocalClient.Create took 23.663691402s
	I0116 02:57:56.256156  487926 start.go:167] duration metric: libmachine.API.Create for "multinode-405494" took 23.663760615s
	I0116 02:57:56.256169  487926 start.go:300] post-start starting for "multinode-405494-m02" (driver="kvm2")
	I0116 02:57:56.256184  487926 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0116 02:57:56.256208  487926 main.go:141] libmachine: (multinode-405494-m02) Calling .DriverName
	I0116 02:57:56.256493  487926 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0116 02:57:56.256526  487926 main.go:141] libmachine: (multinode-405494-m02) Calling .GetSSHHostname
	I0116 02:57:56.258807  487926 main.go:141] libmachine: (multinode-405494-m02) DBG | domain multinode-405494-m02 has defined MAC address 52:54:00:3c:08:8b in network mk-multinode-405494
	I0116 02:57:56.259149  487926 main.go:141] libmachine: (multinode-405494-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:08:8b", ip: ""} in network mk-multinode-405494: {Iface:virbr1 ExpiryTime:2024-01-16 03:57:47 +0000 UTC Type:0 Mac:52:54:00:3c:08:8b Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:multinode-405494-m02 Clientid:01:52:54:00:3c:08:8b}
	I0116 02:57:56.259182  487926 main.go:141] libmachine: (multinode-405494-m02) DBG | domain multinode-405494-m02 has defined IP address 192.168.39.32 and MAC address 52:54:00:3c:08:8b in network mk-multinode-405494
	I0116 02:57:56.259320  487926 main.go:141] libmachine: (multinode-405494-m02) Calling .GetSSHPort
	I0116 02:57:56.259518  487926 main.go:141] libmachine: (multinode-405494-m02) Calling .GetSSHKeyPath
	I0116 02:57:56.259683  487926 main.go:141] libmachine: (multinode-405494-m02) Calling .GetSSHUsername
	I0116 02:57:56.259842  487926 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/multinode-405494-m02/id_rsa Username:docker}
	I0116 02:57:56.345223  487926 ssh_runner.go:195] Run: cat /etc/os-release
	I0116 02:57:56.349727  487926 command_runner.go:130] > NAME=Buildroot
	I0116 02:57:56.349750  487926 command_runner.go:130] > VERSION=2021.02.12-1-g19d536a-dirty
	I0116 02:57:56.349756  487926 command_runner.go:130] > ID=buildroot
	I0116 02:57:56.349764  487926 command_runner.go:130] > VERSION_ID=2021.02.12
	I0116 02:57:56.349771  487926 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0116 02:57:56.349805  487926 info.go:137] Remote host: Buildroot 2021.02.12
	I0116 02:57:56.349823  487926 filesync.go:126] Scanning /home/jenkins/minikube-integration/17965-468241/.minikube/addons for local assets ...
	I0116 02:57:56.349896  487926 filesync.go:126] Scanning /home/jenkins/minikube-integration/17965-468241/.minikube/files for local assets ...
	I0116 02:57:56.349989  487926 filesync.go:149] local asset: /home/jenkins/minikube-integration/17965-468241/.minikube/files/etc/ssl/certs/4754782.pem -> 4754782.pem in /etc/ssl/certs
	I0116 02:57:56.350005  487926 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-468241/.minikube/files/etc/ssl/certs/4754782.pem -> /etc/ssl/certs/4754782.pem
	I0116 02:57:56.350114  487926 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0116 02:57:56.358352  487926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/files/etc/ssl/certs/4754782.pem --> /etc/ssl/certs/4754782.pem (1708 bytes)
	I0116 02:57:56.382441  487926 start.go:303] post-start completed in 126.251605ms
	I0116 02:57:56.382526  487926 main.go:141] libmachine: (multinode-405494-m02) Calling .GetConfigRaw
	I0116 02:57:56.383366  487926 main.go:141] libmachine: (multinode-405494-m02) Calling .GetIP
	I0116 02:57:56.385932  487926 main.go:141] libmachine: (multinode-405494-m02) DBG | domain multinode-405494-m02 has defined MAC address 52:54:00:3c:08:8b in network mk-multinode-405494
	I0116 02:57:56.386402  487926 main.go:141] libmachine: (multinode-405494-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:08:8b", ip: ""} in network mk-multinode-405494: {Iface:virbr1 ExpiryTime:2024-01-16 03:57:47 +0000 UTC Type:0 Mac:52:54:00:3c:08:8b Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:multinode-405494-m02 Clientid:01:52:54:00:3c:08:8b}
	I0116 02:57:56.386435  487926 main.go:141] libmachine: (multinode-405494-m02) DBG | domain multinode-405494-m02 has defined IP address 192.168.39.32 and MAC address 52:54:00:3c:08:8b in network mk-multinode-405494
	I0116 02:57:56.386769  487926 profile.go:148] Saving config to /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/multinode-405494/config.json ...
	I0116 02:57:56.386957  487926 start.go:128] duration metric: createHost completed in 23.81299933s
	I0116 02:57:56.386985  487926 main.go:141] libmachine: (multinode-405494-m02) Calling .GetSSHHostname
	I0116 02:57:56.389550  487926 main.go:141] libmachine: (multinode-405494-m02) DBG | domain multinode-405494-m02 has defined MAC address 52:54:00:3c:08:8b in network mk-multinode-405494
	I0116 02:57:56.390019  487926 main.go:141] libmachine: (multinode-405494-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:08:8b", ip: ""} in network mk-multinode-405494: {Iface:virbr1 ExpiryTime:2024-01-16 03:57:47 +0000 UTC Type:0 Mac:52:54:00:3c:08:8b Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:multinode-405494-m02 Clientid:01:52:54:00:3c:08:8b}
	I0116 02:57:56.390057  487926 main.go:141] libmachine: (multinode-405494-m02) DBG | domain multinode-405494-m02 has defined IP address 192.168.39.32 and MAC address 52:54:00:3c:08:8b in network mk-multinode-405494
	I0116 02:57:56.390170  487926 main.go:141] libmachine: (multinode-405494-m02) Calling .GetSSHPort
	I0116 02:57:56.390384  487926 main.go:141] libmachine: (multinode-405494-m02) Calling .GetSSHKeyPath
	I0116 02:57:56.390567  487926 main.go:141] libmachine: (multinode-405494-m02) Calling .GetSSHKeyPath
	I0116 02:57:56.390753  487926 main.go:141] libmachine: (multinode-405494-m02) Calling .GetSSHUsername
	I0116 02:57:56.390927  487926 main.go:141] libmachine: Using SSH client type: native
	I0116 02:57:56.391272  487926 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.32 22 <nil> <nil>}
	I0116 02:57:56.391287  487926 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0116 02:57:56.505092  487926 main.go:141] libmachine: SSH cmd err, output: <nil>: 1705373876.486886852
	
	I0116 02:57:56.505119  487926 fix.go:206] guest clock: 1705373876.486886852
	I0116 02:57:56.505128  487926 fix.go:219] Guest: 2024-01-16 02:57:56.486886852 +0000 UTC Remote: 2024-01-16 02:57:56.386971245 +0000 UTC m=+90.192048002 (delta=99.915607ms)
	I0116 02:57:56.505145  487926 fix.go:190] guest clock delta is within tolerance: 99.915607ms
	I0116 02:57:56.505150  487926 start.go:83] releasing machines lock for "multinode-405494-m02", held for 23.931311089s
	I0116 02:57:56.505169  487926 main.go:141] libmachine: (multinode-405494-m02) Calling .DriverName
	I0116 02:57:56.505532  487926 main.go:141] libmachine: (multinode-405494-m02) Calling .GetIP
	I0116 02:57:56.508378  487926 main.go:141] libmachine: (multinode-405494-m02) DBG | domain multinode-405494-m02 has defined MAC address 52:54:00:3c:08:8b in network mk-multinode-405494
	I0116 02:57:56.508726  487926 main.go:141] libmachine: (multinode-405494-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:08:8b", ip: ""} in network mk-multinode-405494: {Iface:virbr1 ExpiryTime:2024-01-16 03:57:47 +0000 UTC Type:0 Mac:52:54:00:3c:08:8b Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:multinode-405494-m02 Clientid:01:52:54:00:3c:08:8b}
	I0116 02:57:56.508761  487926 main.go:141] libmachine: (multinode-405494-m02) DBG | domain multinode-405494-m02 has defined IP address 192.168.39.32 and MAC address 52:54:00:3c:08:8b in network mk-multinode-405494
	I0116 02:57:56.511426  487926 out.go:177] * Found network options:
	I0116 02:57:56.513125  487926 out.go:177]   - NO_PROXY=192.168.39.70
	W0116 02:57:56.514528  487926 proxy.go:119] fail to check proxy env: Error ip not in block
	I0116 02:57:56.514585  487926 main.go:141] libmachine: (multinode-405494-m02) Calling .DriverName
	I0116 02:57:56.515338  487926 main.go:141] libmachine: (multinode-405494-m02) Calling .DriverName
	I0116 02:57:56.515599  487926 main.go:141] libmachine: (multinode-405494-m02) Calling .DriverName
	I0116 02:57:56.515703  487926 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0116 02:57:56.515778  487926 main.go:141] libmachine: (multinode-405494-m02) Calling .GetSSHHostname
	W0116 02:57:56.515828  487926 proxy.go:119] fail to check proxy env: Error ip not in block
	I0116 02:57:56.515923  487926 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0116 02:57:56.515955  487926 main.go:141] libmachine: (multinode-405494-m02) Calling .GetSSHHostname
	I0116 02:57:56.518911  487926 main.go:141] libmachine: (multinode-405494-m02) DBG | domain multinode-405494-m02 has defined MAC address 52:54:00:3c:08:8b in network mk-multinode-405494
	I0116 02:57:56.518970  487926 main.go:141] libmachine: (multinode-405494-m02) DBG | domain multinode-405494-m02 has defined MAC address 52:54:00:3c:08:8b in network mk-multinode-405494
	I0116 02:57:56.519370  487926 main.go:141] libmachine: (multinode-405494-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:08:8b", ip: ""} in network mk-multinode-405494: {Iface:virbr1 ExpiryTime:2024-01-16 03:57:47 +0000 UTC Type:0 Mac:52:54:00:3c:08:8b Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:multinode-405494-m02 Clientid:01:52:54:00:3c:08:8b}
	I0116 02:57:56.519407  487926 main.go:141] libmachine: (multinode-405494-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:08:8b", ip: ""} in network mk-multinode-405494: {Iface:virbr1 ExpiryTime:2024-01-16 03:57:47 +0000 UTC Type:0 Mac:52:54:00:3c:08:8b Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:multinode-405494-m02 Clientid:01:52:54:00:3c:08:8b}
	I0116 02:57:56.519431  487926 main.go:141] libmachine: (multinode-405494-m02) DBG | domain multinode-405494-m02 has defined IP address 192.168.39.32 and MAC address 52:54:00:3c:08:8b in network mk-multinode-405494
	I0116 02:57:56.519452  487926 main.go:141] libmachine: (multinode-405494-m02) DBG | domain multinode-405494-m02 has defined IP address 192.168.39.32 and MAC address 52:54:00:3c:08:8b in network mk-multinode-405494
	I0116 02:57:56.519638  487926 main.go:141] libmachine: (multinode-405494-m02) Calling .GetSSHPort
	I0116 02:57:56.519757  487926 main.go:141] libmachine: (multinode-405494-m02) Calling .GetSSHPort
	I0116 02:57:56.519849  487926 main.go:141] libmachine: (multinode-405494-m02) Calling .GetSSHKeyPath
	I0116 02:57:56.519931  487926 main.go:141] libmachine: (multinode-405494-m02) Calling .GetSSHKeyPath
	I0116 02:57:56.519987  487926 main.go:141] libmachine: (multinode-405494-m02) Calling .GetSSHUsername
	I0116 02:57:56.520102  487926 main.go:141] libmachine: (multinode-405494-m02) Calling .GetSSHUsername
	I0116 02:57:56.520121  487926 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/multinode-405494-m02/id_rsa Username:docker}
	I0116 02:57:56.520324  487926 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/multinode-405494-m02/id_rsa Username:docker}
	I0116 02:57:56.631557  487926 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0116 02:57:56.768628  487926 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0116 02:57:56.774874  487926 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0116 02:57:56.774943  487926 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0116 02:57:56.775015  487926 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0116 02:57:56.789919  487926 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0116 02:57:56.790276  487926 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0116 02:57:56.790294  487926 start.go:475] detecting cgroup driver to use...
	I0116 02:57:56.790362  487926 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0116 02:57:56.806836  487926 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0116 02:57:56.819450  487926 docker.go:217] disabling cri-docker service (if available) ...
	I0116 02:57:56.819534  487926 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0116 02:57:56.832089  487926 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0116 02:57:56.844837  487926 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0116 02:57:56.860441  487926 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/cri-docker.socket.
	I0116 02:57:56.949313  487926 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0116 02:57:57.075362  487926 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I0116 02:57:57.075420  487926 docker.go:233] disabling docker service ...
	I0116 02:57:57.075481  487926 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0116 02:57:57.089993  487926 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0116 02:57:57.102705  487926 command_runner.go:130] ! Failed to stop docker.service: Unit docker.service not loaded.
	I0116 02:57:57.102816  487926 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0116 02:57:57.117351  487926 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I0116 02:57:57.214154  487926 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0116 02:57:57.227218  487926 command_runner.go:130] ! Unit docker.service does not exist, proceeding anyway.
	I0116 02:57:57.227246  487926 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I0116 02:57:57.326607  487926 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0116 02:57:57.340085  487926 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0116 02:57:57.358585  487926 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0116 02:57:57.358883  487926 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0116 02:57:57.358950  487926 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 02:57:57.368884  487926 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0116 02:57:57.368956  487926 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 02:57:57.379068  487926 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 02:57:57.388958  487926 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 02:57:57.398819  487926 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0116 02:57:57.408714  487926 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0116 02:57:57.416973  487926 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0116 02:57:57.417052  487926 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0116 02:57:57.417121  487926 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0116 02:57:57.431217  487926 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0116 02:57:57.440176  487926 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0116 02:57:57.548331  487926 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0116 02:57:57.717107  487926 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0116 02:57:57.717219  487926 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0116 02:57:57.726681  487926 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0116 02:57:57.726708  487926 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0116 02:57:57.726714  487926 command_runner.go:130] > Device: 16h/22d	Inode: 719         Links: 1
	I0116 02:57:57.726721  487926 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0116 02:57:57.726727  487926 command_runner.go:130] > Access: 2024-01-16 02:57:57.687388513 +0000
	I0116 02:57:57.726733  487926 command_runner.go:130] > Modify: 2024-01-16 02:57:57.687388513 +0000
	I0116 02:57:57.726738  487926 command_runner.go:130] > Change: 2024-01-16 02:57:57.687388513 +0000
	I0116 02:57:57.726741  487926 command_runner.go:130] >  Birth: -
	I0116 02:57:57.726849  487926 start.go:543] Will wait 60s for crictl version
	I0116 02:57:57.726916  487926 ssh_runner.go:195] Run: which crictl
	I0116 02:57:57.731034  487926 command_runner.go:130] > /usr/bin/crictl
	I0116 02:57:57.731129  487926 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0116 02:57:57.777829  487926 command_runner.go:130] > Version:  0.1.0
	I0116 02:57:57.777857  487926 command_runner.go:130] > RuntimeName:  cri-o
	I0116 02:57:57.777862  487926 command_runner.go:130] > RuntimeVersion:  1.24.1
	I0116 02:57:57.777867  487926 command_runner.go:130] > RuntimeApiVersion:  v1
	I0116 02:57:57.777887  487926 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0116 02:57:57.777944  487926 ssh_runner.go:195] Run: crio --version
	I0116 02:57:57.836166  487926 command_runner.go:130] > crio version 1.24.1
	I0116 02:57:57.836198  487926 command_runner.go:130] > Version:          1.24.1
	I0116 02:57:57.836210  487926 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0116 02:57:57.836217  487926 command_runner.go:130] > GitTreeState:     dirty
	I0116 02:57:57.836234  487926 command_runner.go:130] > BuildDate:        2023-12-28T22:46:29Z
	I0116 02:57:57.836243  487926 command_runner.go:130] > GoVersion:        go1.19.9
	I0116 02:57:57.836248  487926 command_runner.go:130] > Compiler:         gc
	I0116 02:57:57.836253  487926 command_runner.go:130] > Platform:         linux/amd64
	I0116 02:57:57.836258  487926 command_runner.go:130] > Linkmode:         dynamic
	I0116 02:57:57.836269  487926 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0116 02:57:57.836274  487926 command_runner.go:130] > SeccompEnabled:   true
	I0116 02:57:57.836278  487926 command_runner.go:130] > AppArmorEnabled:  false
	I0116 02:57:57.837828  487926 ssh_runner.go:195] Run: crio --version
	I0116 02:57:57.892788  487926 command_runner.go:130] > crio version 1.24.1
	I0116 02:57:57.892813  487926 command_runner.go:130] > Version:          1.24.1
	I0116 02:57:57.892827  487926 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0116 02:57:57.892831  487926 command_runner.go:130] > GitTreeState:     dirty
	I0116 02:57:57.892838  487926 command_runner.go:130] > BuildDate:        2023-12-28T22:46:29Z
	I0116 02:57:57.892843  487926 command_runner.go:130] > GoVersion:        go1.19.9
	I0116 02:57:57.892847  487926 command_runner.go:130] > Compiler:         gc
	I0116 02:57:57.892855  487926 command_runner.go:130] > Platform:         linux/amd64
	I0116 02:57:57.892861  487926 command_runner.go:130] > Linkmode:         dynamic
	I0116 02:57:57.892872  487926 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0116 02:57:57.892880  487926 command_runner.go:130] > SeccompEnabled:   true
	I0116 02:57:57.892892  487926 command_runner.go:130] > AppArmorEnabled:  false
	I0116 02:57:57.895412  487926 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0116 02:57:57.897399  487926 out.go:177]   - env NO_PROXY=192.168.39.70
	I0116 02:57:57.899065  487926 main.go:141] libmachine: (multinode-405494-m02) Calling .GetIP
	I0116 02:57:57.902076  487926 main.go:141] libmachine: (multinode-405494-m02) DBG | domain multinode-405494-m02 has defined MAC address 52:54:00:3c:08:8b in network mk-multinode-405494
	I0116 02:57:57.902534  487926 main.go:141] libmachine: (multinode-405494-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:08:8b", ip: ""} in network mk-multinode-405494: {Iface:virbr1 ExpiryTime:2024-01-16 03:57:47 +0000 UTC Type:0 Mac:52:54:00:3c:08:8b Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:multinode-405494-m02 Clientid:01:52:54:00:3c:08:8b}
	I0116 02:57:57.902560  487926 main.go:141] libmachine: (multinode-405494-m02) DBG | domain multinode-405494-m02 has defined IP address 192.168.39.32 and MAC address 52:54:00:3c:08:8b in network mk-multinode-405494
	I0116 02:57:57.902818  487926 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0116 02:57:57.907748  487926 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 02:57:57.922360  487926 certs.go:56] Setting up /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/multinode-405494 for IP: 192.168.39.32
	I0116 02:57:57.922398  487926 certs.go:190] acquiring lock for shared ca certs: {Name:mkab5aeea3e0ed69f3592dc8f418e07c6075b130 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:57:57.922753  487926 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17965-468241/.minikube/ca.key
	I0116 02:57:57.922808  487926 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17965-468241/.minikube/proxy-client-ca.key
	I0116 02:57:57.922828  487926 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-468241/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0116 02:57:57.922846  487926 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-468241/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0116 02:57:57.922860  487926 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-468241/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0116 02:57:57.922876  487926 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-468241/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0116 02:57:57.922994  487926 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/475478.pem (1338 bytes)
	W0116 02:57:57.923033  487926 certs.go:433] ignoring /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/475478_empty.pem, impossibly tiny 0 bytes
	I0116 02:57:57.923045  487926 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca-key.pem (1679 bytes)
	I0116 02:57:57.923074  487926 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem (1078 bytes)
	I0116 02:57:57.923101  487926 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/cert.pem (1123 bytes)
	I0116 02:57:57.923127  487926 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/key.pem (1679 bytes)
	I0116 02:57:57.923173  487926 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17965-468241/.minikube/files/etc/ssl/certs/4754782.pem (1708 bytes)
	I0116 02:57:57.923234  487926 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-468241/.minikube/files/etc/ssl/certs/4754782.pem -> /usr/share/ca-certificates/4754782.pem
	I0116 02:57:57.923258  487926 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-468241/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0116 02:57:57.923275  487926 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/475478.pem -> /usr/share/ca-certificates/475478.pem
	I0116 02:57:57.923754  487926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0116 02:57:57.949650  487926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0116 02:57:57.978027  487926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0116 02:57:58.006370  487926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0116 02:57:58.033693  487926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/files/etc/ssl/certs/4754782.pem --> /usr/share/ca-certificates/4754782.pem (1708 bytes)
	I0116 02:57:58.059431  487926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0116 02:57:58.084904  487926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/certs/475478.pem --> /usr/share/ca-certificates/475478.pem (1338 bytes)
	I0116 02:57:58.110227  487926 ssh_runner.go:195] Run: openssl version
	I0116 02:57:58.116311  487926 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0116 02:57:58.116431  487926 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4754782.pem && ln -fs /usr/share/ca-certificates/4754782.pem /etc/ssl/certs/4754782.pem"
	I0116 02:57:58.127174  487926 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4754782.pem
	I0116 02:57:58.132521  487926 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jan 16 02:44 /usr/share/ca-certificates/4754782.pem
	I0116 02:57:58.132592  487926 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 16 02:44 /usr/share/ca-certificates/4754782.pem
	I0116 02:57:58.132659  487926 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4754782.pem
	I0116 02:57:58.138655  487926 command_runner.go:130] > 3ec20f2e
	I0116 02:57:58.138963  487926 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4754782.pem /etc/ssl/certs/3ec20f2e.0"
	I0116 02:57:58.149738  487926 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0116 02:57:58.160334  487926 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0116 02:57:58.165380  487926 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jan 16 02:35 /usr/share/ca-certificates/minikubeCA.pem
	I0116 02:57:58.165414  487926 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 16 02:35 /usr/share/ca-certificates/minikubeCA.pem
	I0116 02:57:58.165483  487926 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0116 02:57:58.172200  487926 command_runner.go:130] > b5213941
	I0116 02:57:58.172305  487926 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0116 02:57:58.183015  487926 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/475478.pem && ln -fs /usr/share/ca-certificates/475478.pem /etc/ssl/certs/475478.pem"
	I0116 02:57:58.193756  487926 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/475478.pem
	I0116 02:57:58.198743  487926 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jan 16 02:44 /usr/share/ca-certificates/475478.pem
	I0116 02:57:58.199081  487926 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 16 02:44 /usr/share/ca-certificates/475478.pem
	I0116 02:57:58.199144  487926 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/475478.pem
	I0116 02:57:58.205325  487926 command_runner.go:130] > 51391683
	I0116 02:57:58.205532  487926 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/475478.pem /etc/ssl/certs/51391683.0"
	I0116 02:57:58.216271  487926 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0116 02:57:58.220993  487926 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0116 02:57:58.221047  487926 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0116 02:57:58.221155  487926 ssh_runner.go:195] Run: crio config
	I0116 02:57:58.270139  487926 command_runner.go:130] ! time="2024-01-16 02:57:58.255554480Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I0116 02:57:58.270314  487926 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0116 02:57:58.282799  487926 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0116 02:57:58.282826  487926 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0116 02:57:58.282833  487926 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0116 02:57:58.282837  487926 command_runner.go:130] > #
	I0116 02:57:58.282844  487926 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0116 02:57:58.282850  487926 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0116 02:57:58.282855  487926 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0116 02:57:58.282862  487926 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0116 02:57:58.282866  487926 command_runner.go:130] > # reload'.
	I0116 02:57:58.282873  487926 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0116 02:57:58.282881  487926 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0116 02:57:58.282889  487926 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0116 02:57:58.282895  487926 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0116 02:57:58.282899  487926 command_runner.go:130] > [crio]
	I0116 02:57:58.282905  487926 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0116 02:57:58.282913  487926 command_runner.go:130] > # containers images, in this directory.
	I0116 02:57:58.282919  487926 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0116 02:57:58.282943  487926 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0116 02:57:58.282954  487926 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0116 02:57:58.282968  487926 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0116 02:57:58.282981  487926 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0116 02:57:58.282992  487926 command_runner.go:130] > storage_driver = "overlay"
	I0116 02:57:58.283001  487926 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0116 02:57:58.283012  487926 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0116 02:57:58.283022  487926 command_runner.go:130] > storage_option = [
	I0116 02:57:58.283030  487926 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0116 02:57:58.283039  487926 command_runner.go:130] > ]
	I0116 02:57:58.283051  487926 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0116 02:57:58.283065  487926 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0116 02:57:58.283076  487926 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0116 02:57:58.283089  487926 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0116 02:57:58.283100  487926 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0116 02:57:58.283107  487926 command_runner.go:130] > # always happen on a node reboot
	I0116 02:57:58.283112  487926 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0116 02:57:58.283123  487926 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0116 02:57:58.283134  487926 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0116 02:57:58.283151  487926 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0116 02:57:58.283168  487926 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0116 02:57:58.283184  487926 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0116 02:57:58.283208  487926 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0116 02:57:58.283216  487926 command_runner.go:130] > # internal_wipe = true
	I0116 02:57:58.283225  487926 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0116 02:57:58.283239  487926 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0116 02:57:58.283252  487926 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0116 02:57:58.283262  487926 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0116 02:57:58.283276  487926 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0116 02:57:58.283285  487926 command_runner.go:130] > [crio.api]
	I0116 02:57:58.283294  487926 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0116 02:57:58.283305  487926 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0116 02:57:58.283317  487926 command_runner.go:130] > # IP address on which the stream server will listen.
	I0116 02:57:58.283325  487926 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0116 02:57:58.283336  487926 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0116 02:57:58.283348  487926 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0116 02:57:58.283356  487926 command_runner.go:130] > # stream_port = "0"
	I0116 02:57:58.283370  487926 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0116 02:57:58.283381  487926 command_runner.go:130] > # stream_enable_tls = false
	I0116 02:57:58.283395  487926 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0116 02:57:58.283405  487926 command_runner.go:130] > # stream_idle_timeout = ""
	I0116 02:57:58.283417  487926 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0116 02:57:58.283427  487926 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0116 02:57:58.283436  487926 command_runner.go:130] > # minutes.
	I0116 02:57:58.283448  487926 command_runner.go:130] > # stream_tls_cert = ""
	I0116 02:57:58.283460  487926 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0116 02:57:58.283474  487926 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0116 02:57:58.283484  487926 command_runner.go:130] > # stream_tls_key = ""
	I0116 02:57:58.283496  487926 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0116 02:57:58.283510  487926 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0116 02:57:58.283519  487926 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0116 02:57:58.283527  487926 command_runner.go:130] > # stream_tls_ca = ""
	I0116 02:57:58.283544  487926 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0116 02:57:58.283555  487926 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0116 02:57:58.283570  487926 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0116 02:57:58.283581  487926 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0116 02:57:58.283603  487926 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0116 02:57:58.283610  487926 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0116 02:57:58.283620  487926 command_runner.go:130] > [crio.runtime]
	I0116 02:57:58.283633  487926 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0116 02:57:58.283646  487926 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0116 02:57:58.283656  487926 command_runner.go:130] > # "nofile=1024:2048"
	I0116 02:57:58.283670  487926 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0116 02:57:58.283680  487926 command_runner.go:130] > # default_ulimits = [
	I0116 02:57:58.283689  487926 command_runner.go:130] > # ]
	I0116 02:57:58.283700  487926 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0116 02:57:58.283707  487926 command_runner.go:130] > # no_pivot = false
	I0116 02:57:58.283717  487926 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0116 02:57:58.283732  487926 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0116 02:57:58.283744  487926 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0116 02:57:58.283756  487926 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0116 02:57:58.283768  487926 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0116 02:57:58.283781  487926 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0116 02:57:58.283793  487926 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0116 02:57:58.283801  487926 command_runner.go:130] > # Cgroup setting for conmon
	I0116 02:57:58.283812  487926 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0116 02:57:58.283822  487926 command_runner.go:130] > conmon_cgroup = "pod"
	I0116 02:57:58.283836  487926 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0116 02:57:58.283848  487926 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0116 02:57:58.283861  487926 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0116 02:57:58.283871  487926 command_runner.go:130] > conmon_env = [
	I0116 02:57:58.283884  487926 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0116 02:57:58.283890  487926 command_runner.go:130] > ]
	I0116 02:57:58.283898  487926 command_runner.go:130] > # Additional environment variables to set for all the
	I0116 02:57:58.283914  487926 command_runner.go:130] > # containers. These are overridden if set in the
	I0116 02:57:58.283926  487926 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0116 02:57:58.283936  487926 command_runner.go:130] > # default_env = [
	I0116 02:57:58.283945  487926 command_runner.go:130] > # ]
	I0116 02:57:58.283958  487926 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0116 02:57:58.283968  487926 command_runner.go:130] > # selinux = false
	I0116 02:57:58.283981  487926 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0116 02:57:58.283992  487926 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0116 02:57:58.284003  487926 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0116 02:57:58.284014  487926 command_runner.go:130] > # seccomp_profile = ""
	I0116 02:57:58.284027  487926 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0116 02:57:58.284054  487926 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0116 02:57:58.284063  487926 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0116 02:57:58.284072  487926 command_runner.go:130] > # which might increase security.
	I0116 02:57:58.284079  487926 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0116 02:57:58.284090  487926 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0116 02:57:58.284101  487926 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0116 02:57:58.284114  487926 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0116 02:57:58.284128  487926 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0116 02:57:58.284140  487926 command_runner.go:130] > # This option supports live configuration reload.
	I0116 02:57:58.284151  487926 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0116 02:57:58.284165  487926 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0116 02:57:58.284175  487926 command_runner.go:130] > # the cgroup blockio controller.
	I0116 02:57:58.284183  487926 command_runner.go:130] > # blockio_config_file = ""
	I0116 02:57:58.284198  487926 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0116 02:57:58.284210  487926 command_runner.go:130] > # irqbalance daemon.
	I0116 02:57:58.284220  487926 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0116 02:57:58.284234  487926 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0116 02:57:58.284246  487926 command_runner.go:130] > # This option supports live configuration reload.
	I0116 02:57:58.284257  487926 command_runner.go:130] > # rdt_config_file = ""
	I0116 02:57:58.284269  487926 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0116 02:57:58.284277  487926 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0116 02:57:58.284287  487926 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0116 02:57:58.284297  487926 command_runner.go:130] > # separate_pull_cgroup = ""
	I0116 02:57:58.284311  487926 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0116 02:57:58.284325  487926 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0116 02:57:58.284336  487926 command_runner.go:130] > # will be added.
	I0116 02:57:58.284346  487926 command_runner.go:130] > # default_capabilities = [
	I0116 02:57:58.284356  487926 command_runner.go:130] > # 	"CHOWN",
	I0116 02:57:58.284365  487926 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0116 02:57:58.284374  487926 command_runner.go:130] > # 	"FSETID",
	I0116 02:57:58.284381  487926 command_runner.go:130] > # 	"FOWNER",
	I0116 02:57:58.284388  487926 command_runner.go:130] > # 	"SETGID",
	I0116 02:57:58.284398  487926 command_runner.go:130] > # 	"SETUID",
	I0116 02:57:58.284409  487926 command_runner.go:130] > # 	"SETPCAP",
	I0116 02:57:58.284416  487926 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0116 02:57:58.284426  487926 command_runner.go:130] > # 	"KILL",
	I0116 02:57:58.284435  487926 command_runner.go:130] > # ]
	I0116 02:57:58.284448  487926 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0116 02:57:58.284460  487926 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0116 02:57:58.284470  487926 command_runner.go:130] > # default_sysctls = [
	I0116 02:57:58.284477  487926 command_runner.go:130] > # ]
	I0116 02:57:58.284482  487926 command_runner.go:130] > # List of devices on the host that a
	I0116 02:57:58.284495  487926 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0116 02:57:58.284506  487926 command_runner.go:130] > # allowed_devices = [
	I0116 02:57:58.284513  487926 command_runner.go:130] > # 	"/dev/fuse",
	I0116 02:57:58.284522  487926 command_runner.go:130] > # ]
	I0116 02:57:58.284534  487926 command_runner.go:130] > # List of additional devices, specified as
	I0116 02:57:58.284549  487926 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0116 02:57:58.284561  487926 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0116 02:57:58.284592  487926 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0116 02:57:58.284609  487926 command_runner.go:130] > # additional_devices = [
	I0116 02:57:58.284615  487926 command_runner.go:130] > # ]
	I0116 02:57:58.284623  487926 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0116 02:57:58.284630  487926 command_runner.go:130] > # cdi_spec_dirs = [
	I0116 02:57:58.284636  487926 command_runner.go:130] > # 	"/etc/cdi",
	I0116 02:57:58.284643  487926 command_runner.go:130] > # 	"/var/run/cdi",
	I0116 02:57:58.284649  487926 command_runner.go:130] > # ]
	I0116 02:57:58.284659  487926 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0116 02:57:58.284667  487926 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0116 02:57:58.284671  487926 command_runner.go:130] > # Defaults to false.
	I0116 02:57:58.284675  487926 command_runner.go:130] > # device_ownership_from_security_context = false
	I0116 02:57:58.284685  487926 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0116 02:57:58.284696  487926 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0116 02:57:58.284704  487926 command_runner.go:130] > # hooks_dir = [
	I0116 02:57:58.284714  487926 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0116 02:57:58.284723  487926 command_runner.go:130] > # ]
	I0116 02:57:58.284733  487926 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0116 02:57:58.284746  487926 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0116 02:57:58.284755  487926 command_runner.go:130] > # its default mounts from the following two files:
	I0116 02:57:58.284759  487926 command_runner.go:130] > #
	I0116 02:57:58.284773  487926 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0116 02:57:58.284787  487926 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0116 02:57:58.284800  487926 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0116 02:57:58.284809  487926 command_runner.go:130] > #
	I0116 02:57:58.284821  487926 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0116 02:57:58.284834  487926 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0116 02:57:58.284850  487926 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0116 02:57:58.284858  487926 command_runner.go:130] > #      only add mounts it finds in this file.
	I0116 02:57:58.284864  487926 command_runner.go:130] > #
	I0116 02:57:58.284875  487926 command_runner.go:130] > # default_mounts_file = ""
	I0116 02:57:58.284887  487926 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0116 02:57:58.284899  487926 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0116 02:57:58.284909  487926 command_runner.go:130] > pids_limit = 1024
	I0116 02:57:58.284923  487926 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0116 02:57:58.284937  487926 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0116 02:57:58.284950  487926 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0116 02:57:58.284963  487926 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0116 02:57:58.284973  487926 command_runner.go:130] > # log_size_max = -1
	I0116 02:57:58.284988  487926 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0116 02:57:58.284999  487926 command_runner.go:130] > # log_to_journald = false
	I0116 02:57:58.285013  487926 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0116 02:57:58.285024  487926 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0116 02:57:58.285036  487926 command_runner.go:130] > # Path to directory for container attach sockets.
	I0116 02:57:58.285047  487926 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0116 02:57:58.285055  487926 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0116 02:57:58.285063  487926 command_runner.go:130] > # bind_mount_prefix = ""
	I0116 02:57:58.285077  487926 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0116 02:57:58.285087  487926 command_runner.go:130] > # read_only = false
	I0116 02:57:58.285098  487926 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0116 02:57:58.285115  487926 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0116 02:57:58.285122  487926 command_runner.go:130] > # live configuration reload.
	I0116 02:57:58.285130  487926 command_runner.go:130] > # log_level = "info"
	I0116 02:57:58.285139  487926 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0116 02:57:58.285148  487926 command_runner.go:130] > # This option supports live configuration reload.
	I0116 02:57:58.285158  487926 command_runner.go:130] > # log_filter = ""
	I0116 02:57:58.285165  487926 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0116 02:57:58.285176  487926 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0116 02:57:58.285187  487926 command_runner.go:130] > # separated by comma.
	I0116 02:57:58.285194  487926 command_runner.go:130] > # uid_mappings = ""
	I0116 02:57:58.285208  487926 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0116 02:57:58.285221  487926 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0116 02:57:58.285232  487926 command_runner.go:130] > # separated by comma.
	I0116 02:57:58.285241  487926 command_runner.go:130] > # gid_mappings = ""
	I0116 02:57:58.285254  487926 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0116 02:57:58.285267  487926 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0116 02:57:58.285280  487926 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0116 02:57:58.285291  487926 command_runner.go:130] > # minimum_mappable_uid = -1
	I0116 02:57:58.285301  487926 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0116 02:57:58.285315  487926 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0116 02:57:58.285328  487926 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0116 02:57:58.285339  487926 command_runner.go:130] > # minimum_mappable_gid = -1
	I0116 02:57:58.285352  487926 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0116 02:57:58.285363  487926 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0116 02:57:58.285375  487926 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0116 02:57:58.285386  487926 command_runner.go:130] > # ctr_stop_timeout = 30
	I0116 02:57:58.285397  487926 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0116 02:57:58.285409  487926 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0116 02:57:58.285422  487926 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0116 02:57:58.285433  487926 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0116 02:57:58.285445  487926 command_runner.go:130] > drop_infra_ctr = false
	I0116 02:57:58.285456  487926 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0116 02:57:58.285467  487926 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0116 02:57:58.285483  487926 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0116 02:57:58.285493  487926 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0116 02:57:58.285504  487926 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0116 02:57:58.285516  487926 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0116 02:57:58.285527  487926 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0116 02:57:58.285542  487926 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0116 02:57:58.285552  487926 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0116 02:57:58.285562  487926 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0116 02:57:58.285574  487926 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0116 02:57:58.285588  487926 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0116 02:57:58.285600  487926 command_runner.go:130] > # default_runtime = "runc"
	I0116 02:57:58.285612  487926 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0116 02:57:58.285628  487926 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0116 02:57:58.285645  487926 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0116 02:57:58.285653  487926 command_runner.go:130] > # creation as a file is not desired either.
	I0116 02:57:58.285666  487926 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0116 02:57:58.285678  487926 command_runner.go:130] > # the hostname is being managed dynamically.
	I0116 02:57:58.285689  487926 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0116 02:57:58.285698  487926 command_runner.go:130] > # ]
	I0116 02:57:58.285712  487926 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0116 02:57:58.285726  487926 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0116 02:57:58.285740  487926 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0116 02:57:58.285749  487926 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0116 02:57:58.285757  487926 command_runner.go:130] > #
	I0116 02:57:58.285769  487926 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0116 02:57:58.285780  487926 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0116 02:57:58.285790  487926 command_runner.go:130] > #  runtime_type = "oci"
	I0116 02:57:58.285801  487926 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0116 02:57:58.285812  487926 command_runner.go:130] > #  privileged_without_host_devices = false
	I0116 02:57:58.285823  487926 command_runner.go:130] > #  allowed_annotations = []
	I0116 02:57:58.285832  487926 command_runner.go:130] > # Where:
	I0116 02:57:58.285844  487926 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0116 02:57:58.285853  487926 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0116 02:57:58.285866  487926 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0116 02:57:58.285880  487926 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0116 02:57:58.285891  487926 command_runner.go:130] > #   in $PATH.
	I0116 02:57:58.285904  487926 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0116 02:57:58.285916  487926 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0116 02:57:58.285931  487926 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0116 02:57:58.285940  487926 command_runner.go:130] > #   state.
	I0116 02:57:58.285950  487926 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0116 02:57:58.285962  487926 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0116 02:57:58.285977  487926 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0116 02:57:58.285990  487926 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0116 02:57:58.286004  487926 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0116 02:57:58.286018  487926 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0116 02:57:58.286029  487926 command_runner.go:130] > #   The currently recognized values are:
	I0116 02:57:58.286041  487926 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0116 02:57:58.286054  487926 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0116 02:57:58.286068  487926 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0116 02:57:58.286081  487926 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0116 02:57:58.286096  487926 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0116 02:57:58.286109  487926 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0116 02:57:58.286122  487926 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0116 02:57:58.286132  487926 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0116 02:57:58.286143  487926 command_runner.go:130] > #   should be moved to the container's cgroup
	I0116 02:57:58.286153  487926 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0116 02:57:58.286168  487926 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0116 02:57:58.286178  487926 command_runner.go:130] > runtime_type = "oci"
	I0116 02:57:58.286188  487926 command_runner.go:130] > runtime_root = "/run/runc"
	I0116 02:57:58.286198  487926 command_runner.go:130] > runtime_config_path = ""
	I0116 02:57:58.286208  487926 command_runner.go:130] > monitor_path = ""
	I0116 02:57:58.286219  487926 command_runner.go:130] > monitor_cgroup = ""
	I0116 02:57:58.286227  487926 command_runner.go:130] > monitor_exec_cgroup = ""
	I0116 02:57:58.286236  487926 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0116 02:57:58.286247  487926 command_runner.go:130] > # running containers
	I0116 02:57:58.286258  487926 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0116 02:57:58.286271  487926 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0116 02:57:58.286304  487926 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0116 02:57:58.286316  487926 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I0116 02:57:58.286328  487926 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0116 02:57:58.286340  487926 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0116 02:57:58.286349  487926 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0116 02:57:58.286361  487926 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0116 02:57:58.286372  487926 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0116 02:57:58.286383  487926 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I0116 02:57:58.286397  487926 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0116 02:57:58.286407  487926 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0116 02:57:58.286418  487926 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0116 02:57:58.286434  487926 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0116 02:57:58.286450  487926 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0116 02:57:58.286463  487926 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0116 02:57:58.286480  487926 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0116 02:57:58.286496  487926 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0116 02:57:58.286508  487926 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0116 02:57:58.286519  487926 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0116 02:57:58.286525  487926 command_runner.go:130] > # Example:
	I0116 02:57:58.286536  487926 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0116 02:57:58.286548  487926 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0116 02:57:58.286559  487926 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0116 02:57:58.286572  487926 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0116 02:57:58.286581  487926 command_runner.go:130] > # cpuset = 0
	I0116 02:57:58.286589  487926 command_runner.go:130] > # cpushares = "0-1"
	I0116 02:57:58.286595  487926 command_runner.go:130] > # Where:
	I0116 02:57:58.286604  487926 command_runner.go:130] > # The workload name is workload-type.
	I0116 02:57:58.286614  487926 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0116 02:57:58.286626  487926 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0116 02:57:58.286639  487926 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0116 02:57:58.286654  487926 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0116 02:57:58.286667  487926 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0116 02:57:58.286676  487926 command_runner.go:130] > # 
	I0116 02:57:58.286690  487926 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0116 02:57:58.286699  487926 command_runner.go:130] > #
	I0116 02:57:58.286708  487926 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0116 02:57:58.286719  487926 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0116 02:57:58.286733  487926 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0116 02:57:58.286749  487926 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0116 02:57:58.286762  487926 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0116 02:57:58.286772  487926 command_runner.go:130] > [crio.image]
	I0116 02:57:58.286785  487926 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0116 02:57:58.286796  487926 command_runner.go:130] > # default_transport = "docker://"
	I0116 02:57:58.286808  487926 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0116 02:57:58.286818  487926 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0116 02:57:58.286827  487926 command_runner.go:130] > # global_auth_file = ""
	I0116 02:57:58.286839  487926 command_runner.go:130] > # The image used to instantiate infra containers.
	I0116 02:57:58.286852  487926 command_runner.go:130] > # This option supports live configuration reload.
	I0116 02:57:58.286864  487926 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0116 02:57:58.286878  487926 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0116 02:57:58.286891  487926 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0116 02:57:58.286902  487926 command_runner.go:130] > # This option supports live configuration reload.
	I0116 02:57:58.286909  487926 command_runner.go:130] > # pause_image_auth_file = ""
	I0116 02:57:58.286918  487926 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0116 02:57:58.286932  487926 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0116 02:57:58.286943  487926 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0116 02:57:58.286957  487926 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0116 02:57:58.286967  487926 command_runner.go:130] > # pause_command = "/pause"
	I0116 02:57:58.286980  487926 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0116 02:57:58.286994  487926 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0116 02:57:58.287005  487926 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0116 02:57:58.287017  487926 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0116 02:57:58.287030  487926 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0116 02:57:58.287038  487926 command_runner.go:130] > # signature_policy = ""
	I0116 02:57:58.287052  487926 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0116 02:57:58.287065  487926 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0116 02:57:58.287076  487926 command_runner.go:130] > # changing them here.
	I0116 02:57:58.287086  487926 command_runner.go:130] > # insecure_registries = [
	I0116 02:57:58.287095  487926 command_runner.go:130] > # ]
	I0116 02:57:58.287104  487926 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0116 02:57:58.287115  487926 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0116 02:57:58.287126  487926 command_runner.go:130] > # image_volumes = "mkdir"
	I0116 02:57:58.287136  487926 command_runner.go:130] > # Temporary directory to use for storing big files
	I0116 02:57:58.287146  487926 command_runner.go:130] > # big_files_temporary_dir = ""
	I0116 02:57:58.287163  487926 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0116 02:57:58.287173  487926 command_runner.go:130] > # CNI plugins.
	I0116 02:57:58.287180  487926 command_runner.go:130] > [crio.network]
	I0116 02:57:58.287191  487926 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0116 02:57:58.287200  487926 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0116 02:57:58.287210  487926 command_runner.go:130] > # cni_default_network = ""
	I0116 02:57:58.287224  487926 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0116 02:57:58.287236  487926 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0116 02:57:58.287248  487926 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0116 02:57:58.287258  487926 command_runner.go:130] > # plugin_dirs = [
	I0116 02:57:58.287268  487926 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0116 02:57:58.287277  487926 command_runner.go:130] > # ]
	I0116 02:57:58.287289  487926 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0116 02:57:58.287296  487926 command_runner.go:130] > [crio.metrics]
	I0116 02:57:58.287305  487926 command_runner.go:130] > # Globally enable or disable metrics support.
	I0116 02:57:58.287315  487926 command_runner.go:130] > enable_metrics = true
	I0116 02:57:58.287327  487926 command_runner.go:130] > # Specify enabled metrics collectors.
	I0116 02:57:58.287338  487926 command_runner.go:130] > # Per default all metrics are enabled.
	I0116 02:57:58.287351  487926 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0116 02:57:58.287365  487926 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0116 02:57:58.287378  487926 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0116 02:57:58.287385  487926 command_runner.go:130] > # metrics_collectors = [
	I0116 02:57:58.287391  487926 command_runner.go:130] > # 	"operations",
	I0116 02:57:58.287402  487926 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0116 02:57:58.287413  487926 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0116 02:57:58.287421  487926 command_runner.go:130] > # 	"operations_errors",
	I0116 02:57:58.287432  487926 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0116 02:57:58.287442  487926 command_runner.go:130] > # 	"image_pulls_by_name",
	I0116 02:57:58.287454  487926 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0116 02:57:58.287465  487926 command_runner.go:130] > # 	"image_pulls_failures",
	I0116 02:57:58.287475  487926 command_runner.go:130] > # 	"image_pulls_successes",
	I0116 02:57:58.287483  487926 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0116 02:57:58.287493  487926 command_runner.go:130] > # 	"image_layer_reuse",
	I0116 02:57:58.287504  487926 command_runner.go:130] > # 	"containers_oom_total",
	I0116 02:57:58.287514  487926 command_runner.go:130] > # 	"containers_oom",
	I0116 02:57:58.287522  487926 command_runner.go:130] > # 	"processes_defunct",
	I0116 02:57:58.287533  487926 command_runner.go:130] > # 	"operations_total",
	I0116 02:57:58.287544  487926 command_runner.go:130] > # 	"operations_latency_seconds",
	I0116 02:57:58.287555  487926 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0116 02:57:58.287566  487926 command_runner.go:130] > # 	"operations_errors_total",
	I0116 02:57:58.287576  487926 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0116 02:57:58.287584  487926 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0116 02:57:58.287589  487926 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0116 02:57:58.287599  487926 command_runner.go:130] > # 	"image_pulls_success_total",
	I0116 02:57:58.287611  487926 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0116 02:57:58.287622  487926 command_runner.go:130] > # 	"containers_oom_count_total",
	I0116 02:57:58.287631  487926 command_runner.go:130] > # ]
	I0116 02:57:58.287643  487926 command_runner.go:130] > # The port on which the metrics server will listen.
	I0116 02:57:58.287653  487926 command_runner.go:130] > # metrics_port = 9090
	I0116 02:57:58.287665  487926 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0116 02:57:58.287676  487926 command_runner.go:130] > # metrics_socket = ""
	I0116 02:57:58.287683  487926 command_runner.go:130] > # The certificate for the secure metrics server.
	I0116 02:57:58.287697  487926 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0116 02:57:58.287711  487926 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0116 02:57:58.287723  487926 command_runner.go:130] > # certificate on any modification event.
	I0116 02:57:58.287733  487926 command_runner.go:130] > # metrics_cert = ""
	I0116 02:57:58.287744  487926 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0116 02:57:58.287756  487926 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0116 02:57:58.287766  487926 command_runner.go:130] > # metrics_key = ""
	I0116 02:57:58.287776  487926 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0116 02:57:58.287783  487926 command_runner.go:130] > [crio.tracing]
	I0116 02:57:58.287792  487926 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0116 02:57:58.287803  487926 command_runner.go:130] > # enable_tracing = false
	I0116 02:57:58.287815  487926 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0116 02:57:58.287827  487926 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0116 02:57:58.287839  487926 command_runner.go:130] > # Number of samples to collect per million spans.
	I0116 02:57:58.287850  487926 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0116 02:57:58.287863  487926 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0116 02:57:58.287871  487926 command_runner.go:130] > [crio.stats]
	I0116 02:57:58.287878  487926 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0116 02:57:58.287890  487926 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0116 02:57:58.287902  487926 command_runner.go:130] > # stats_collection_period = 0
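The dump above is the effective CRI-O configuration minikube renders for the node: overlay storage, cgroupfs as the cgroup manager, runc as the only configured runtime handler, and registry.k8s.io/pause:3.9 as the pause image. When reproducing a failure like this, the same information can be pulled straight from the machine; a minimal sketch, reusing the profile and node names from this log and assuming this CRI-O build provides the 'crio config' subcommand and writes its config to the default /etc/crio/crio.conf path:

	# Print the configuration CRI-O is actually running with (flags may differ by CRI-O version)
	minikube -p multinode-405494 ssh -n multinode-405494-m02 -- sudo crio config | less
	# Or read the rendered file on the node (default path assumed)
	minikube -p multinode-405494 ssh -n multinode-405494-m02 -- sudo cat /etc/crio/crio.conf
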
	I0116 02:57:58.287990  487926 cni.go:84] Creating CNI manager for ""
	I0116 02:57:58.288003  487926 cni.go:136] 2 nodes found, recommending kindnet
	I0116 02:57:58.288017  487926 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0116 02:57:58.288061  487926 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.32 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-405494 NodeName:multinode-405494-m02 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.70"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.32 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0116 02:57:58.288228  487926 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.32
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-405494-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.32
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.70"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
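The YAML above is the kubeadm configuration bundle minikube generates for the joining node (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration in a single document). Once the control plane is up, the cluster-wide portion also lives in the kubeadm-config ConfigMap, as the preflight output further down notes, so it can be compared against what the cluster actually holds; a minimal sketch, assuming the kubeconfig context follows minikube's profile-name convention:

	# Cluster-side copy of the kubeadm configuration
	kubectl --context multinode-405494 -n kube-system get cm kubeadm-config -o yaml
	# Per-node kubelet settings written during the join (path taken from the kubelet-start output below)
	minikube -p multinode-405494 ssh -n multinode-405494-m02 -- sudo cat /var/lib/kubelet/config.yaml
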
	
	I0116 02:57:58.288317  487926 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-405494-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.32
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-405494 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
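The [Service] override above re-points ExecStart at the versioned kubelet binary and pins the node name and node IP; it is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines further down. To confirm systemd actually picked it up on the worker, something like the following (a sketch using the profile and node names from this log):

	# Show the merged kubelet unit, including the 10-kubeadm.conf drop-in
	minikube -p multinode-405494 ssh -n multinode-405494-m02 -- systemctl cat kubelet
	# Check whether the kubelet is running with those flags
	minikube -p multinode-405494 ssh -n multinode-405494-m02 -- sudo systemctl status kubelet --no-pager
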
	I0116 02:57:58.288391  487926 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0116 02:57:58.299081  487926 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/binaries/v1.28.4': No such file or directory
	I0116 02:57:58.299142  487926 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.28.4: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.28.4': No such file or directory
	
	Initiating transfer...
	I0116 02:57:58.299214  487926 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.28.4
	I0116 02:57:58.310581  487926 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl.sha256
	I0116 02:57:58.310616  487926 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-468241/.minikube/cache/linux/amd64/v1.28.4/kubectl -> /var/lib/minikube/binaries/v1.28.4/kubectl
	I0116 02:57:58.310702  487926 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubectl
	I0116 02:57:58.310722  487926 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/17965-468241/.minikube/cache/linux/amd64/v1.28.4/kubeadm
	I0116 02:57:58.310722  487926 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/17965-468241/.minikube/cache/linux/amd64/v1.28.4/kubelet
	I0116 02:57:58.315194  487926 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubectl': No such file or directory
	I0116 02:57:58.315356  487926 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubectl': No such file or directory
	I0116 02:57:58.315390  487926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/cache/linux/amd64/v1.28.4/kubectl --> /var/lib/minikube/binaries/v1.28.4/kubectl (49885184 bytes)
	I0116 02:57:58.961589  487926 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-468241/.minikube/cache/linux/amd64/v1.28.4/kubeadm -> /var/lib/minikube/binaries/v1.28.4/kubeadm
	I0116 02:57:58.961686  487926 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubeadm
	I0116 02:57:58.966797  487926 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubeadm': No such file or directory
	I0116 02:57:58.966840  487926 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubeadm': No such file or directory
	I0116 02:57:58.966870  487926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/cache/linux/amd64/v1.28.4/kubeadm --> /var/lib/minikube/binaries/v1.28.4/kubeadm (49102848 bytes)
	I0116 02:57:59.460936  487926 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 02:57:59.475417  487926 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-468241/.minikube/cache/linux/amd64/v1.28.4/kubelet -> /var/lib/minikube/binaries/v1.28.4/kubelet
	I0116 02:57:59.475540  487926 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubelet
	I0116 02:57:59.480142  487926 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubelet': No such file or directory
	I0116 02:57:59.480342  487926 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubelet': No such file or directory
	I0116 02:57:59.480380  487926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/cache/linux/amd64/v1.28.4/kubelet --> /var/lib/minikube/binaries/v1.28.4/kubelet (110850048 bytes)
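At this point kubectl, kubeadm and kubelet have been copied from the local cache into /var/lib/minikube/binaries/v1.28.4 on the worker. If the join fails later, it is worth confirming the binaries actually landed and are executable; a minimal check, assuming the same paths shown in the log:

	# List the transferred binaries and confirm the kubelet runs
	minikube -p multinode-405494 ssh -n multinode-405494-m02 -- ls -l /var/lib/minikube/binaries/v1.28.4
	minikube -p multinode-405494 ssh -n multinode-405494-m02 -- /var/lib/minikube/binaries/v1.28.4/kubelet --version
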
	I0116 02:58:00.039507  487926 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0116 02:58:00.049335  487926 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0116 02:58:00.066295  487926 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0116 02:58:00.083499  487926 ssh_runner.go:195] Run: grep 192.168.39.70	control-plane.minikube.internal$ /etc/hosts
	I0116 02:58:00.087397  487926 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.70	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 02:58:00.100073  487926 host.go:66] Checking if "multinode-405494" exists ...
	I0116 02:58:00.100394  487926 config.go:182] Loaded profile config "multinode-405494": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 02:58:00.100445  487926 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 02:58:00.100474  487926 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:58:00.115995  487926 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36797
	I0116 02:58:00.116609  487926 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:58:00.117106  487926 main.go:141] libmachine: Using API Version  1
	I0116 02:58:00.117130  487926 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:58:00.117508  487926 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:58:00.117760  487926 main.go:141] libmachine: (multinode-405494) Calling .DriverName
	I0116 02:58:00.117938  487926 start.go:304] JoinCluster: &{Name:multinode-405494 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-405494 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.70 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.32 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 02:58:00.118062  487926 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0116 02:58:00.118087  487926 main.go:141] libmachine: (multinode-405494) Calling .GetSSHHostname
	I0116 02:58:00.121357  487926 main.go:141] libmachine: (multinode-405494) DBG | domain multinode-405494 has defined MAC address 52:54:00:b0:49:7b in network mk-multinode-405494
	I0116 02:58:00.121761  487926 main.go:141] libmachine: (multinode-405494) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:49:7b", ip: ""} in network mk-multinode-405494: {Iface:virbr1 ExpiryTime:2024-01-16 03:56:41 +0000 UTC Type:0 Mac:52:54:00:b0:49:7b Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:multinode-405494 Clientid:01:52:54:00:b0:49:7b}
	I0116 02:58:00.121795  487926 main.go:141] libmachine: (multinode-405494) DBG | domain multinode-405494 has defined IP address 192.168.39.70 and MAC address 52:54:00:b0:49:7b in network mk-multinode-405494
	I0116 02:58:00.121966  487926 main.go:141] libmachine: (multinode-405494) Calling .GetSSHPort
	I0116 02:58:00.122160  487926 main.go:141] libmachine: (multinode-405494) Calling .GetSSHKeyPath
	I0116 02:58:00.122326  487926 main.go:141] libmachine: (multinode-405494) Calling .GetSSHUsername
	I0116 02:58:00.122454  487926 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/multinode-405494/id_rsa Username:docker}
	I0116 02:58:00.296771  487926 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token mgx6yv.0d7129bb9e142mzv --discovery-token-ca-cert-hash sha256:ab75761e1119a1709a216b682cd7b1dcce0641115df5f236b54b16e4f66aa044 
	I0116 02:58:00.299286  487926 start.go:325] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.39.32 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0116 02:58:00.299340  487926 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token mgx6yv.0d7129bb9e142mzv --discovery-token-ca-cert-hash sha256:ab75761e1119a1709a216b682cd7b1dcce0641115df5f236b54b16e4f66aa044 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-405494-m02"
	I0116 02:58:00.348121  487926 command_runner.go:130] > [preflight] Running pre-flight checks
	I0116 02:58:00.506782  487926 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0116 02:58:00.506827  487926 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0116 02:58:00.548836  487926 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0116 02:58:00.548874  487926 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0116 02:58:00.548883  487926 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0116 02:58:00.678844  487926 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0116 02:58:02.700056  487926 command_runner.go:130] > This node has joined the cluster:
	I0116 02:58:02.700096  487926 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0116 02:58:02.700108  487926 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0116 02:58:02.700116  487926 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0116 02:58:02.701657  487926 command_runner.go:130] ! W0116 02:58:00.336538     822 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I0116 02:58:02.701689  487926 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0116 02:58:02.701721  487926 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token mgx6yv.0d7129bb9e142mzv --discovery-token-ca-cert-hash sha256:ab75761e1119a1709a216b682cd7b1dcce0641115df5f236b54b16e4f66aa044 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-405494-m02": (2.402363939s)
	I0116 02:58:02.701772  487926 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0116 02:58:03.022079  487926 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I0116 02:58:03.022197  487926 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=6e8fa5f64d0e7272be43ff25ed3826261f0a2578 minikube.k8s.io/name=multinode-405494 minikube.k8s.io/updated_at=2024_01_16T02_58_03_0700 minikube.k8s.io/primary=false "-l minikube.k8s.io/primary!=true" --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:58:03.157308  487926 command_runner.go:130] > node/multinode-405494-m02 labeled
	I0116 02:58:03.159171  487926 start.go:306] JoinCluster complete in 3.041227463s
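For reference, a minimal sketch of the worker-join flow performed above, assuming direct shell access to the nodes (token and CA-cert hash elided; the endpoint, CRI socket, and node name are taken from the log):

    # on the control-plane node: print a join command with a non-expiring token
    sudo kubeadm token create --print-join-command --ttl=0
    # on the worker node: join using the printed token and hash
    sudo kubeadm join control-plane.minikube.internal:8443 --token <token> \
      --discovery-token-ca-cert-hash sha256:<hash> \
      --ignore-preflight-errors=all \
      --cri-socket unix:///var/run/crio/crio.sock \
      --node-name=multinode-405494-m02
    # enable and start the kubelet so the join persists (kubeadm warns about this above)
    sudo systemctl enable --now kubelet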
	I0116 02:58:03.159208  487926 cni.go:84] Creating CNI manager for ""
	I0116 02:58:03.159214  487926 cni.go:136] 2 nodes found, recommending kindnet
	I0116 02:58:03.159283  487926 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0116 02:58:03.165778  487926 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0116 02:58:03.165805  487926 command_runner.go:130] >   Size: 2685752   	Blocks: 5248       IO Block: 4096   regular file
	I0116 02:58:03.165815  487926 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I0116 02:58:03.165822  487926 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0116 02:58:03.165831  487926 command_runner.go:130] > Access: 2024-01-16 02:56:39.075162599 +0000
	I0116 02:58:03.165836  487926 command_runner.go:130] > Modify: 2023-12-28 22:53:36.000000000 +0000
	I0116 02:58:03.165841  487926 command_runner.go:130] > Change: 2024-01-16 02:56:37.085162599 +0000
	I0116 02:58:03.165845  487926 command_runner.go:130] >  Birth: -
	I0116 02:58:03.165901  487926 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0116 02:58:03.165916  487926 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0116 02:58:03.184310  487926 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0116 02:58:03.522452  487926 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0116 02:58:03.522494  487926 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0116 02:58:03.522504  487926 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0116 02:58:03.522510  487926 command_runner.go:130] > daemonset.apps/kindnet configured
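As a quick sanity check of the CNI step above, one could confirm the kindnet DaemonSet rolled out; a hedged sketch, with the resource name taken from the apply output:

    kubectl -n kube-system rollout status daemonset/kindnet
    kubectl -n kube-system get daemonset kindnet -o wide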
	I0116 02:58:03.523058  487926 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17965-468241/kubeconfig
	I0116 02:58:03.523409  487926 kapi.go:59] client config for multinode-405494: &rest.Config{Host:"https://192.168.39.70:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17965-468241/.minikube/profiles/multinode-405494/client.crt", KeyFile:"/home/jenkins/minikube-integration/17965-468241/.minikube/profiles/multinode-405494/client.key", CAFile:"/home/jenkins/minikube-integration/17965-468241/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), N
extProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c19be0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0116 02:58:03.523761  487926 round_trippers.go:463] GET https://192.168.39.70:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0116 02:58:03.523781  487926 round_trippers.go:469] Request Headers:
	I0116 02:58:03.523793  487926 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:58:03.523802  487926 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:58:03.527680  487926 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 02:58:03.527708  487926 round_trippers.go:577] Response Headers:
	I0116 02:58:03.527719  487926 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:58:03 GMT
	I0116 02:58:03.527728  487926 round_trippers.go:580]     Audit-Id: 914a50bc-b752-440b-b55a-a2497f1c79d4
	I0116 02:58:03.527736  487926 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:58:03.527749  487926 round_trippers.go:580]     Content-Type: application/json
	I0116 02:58:03.527764  487926 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 02:58:03.527772  487926 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 02:58:03.527780  487926 round_trippers.go:580]     Content-Length: 291
	I0116 02:58:03.527915  487926 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"dd77c785-c90f-4789-97cb-f593b7a7a7e2","resourceVersion":"438","creationTimestamp":"2024-01-16T02:57:11Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0116 02:58:03.528091  487926 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-405494" context rescaled to 1 replicas
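The rescale above goes through the Deployment's scale subresource; an equivalent manual command would be roughly:

    kubectl -n kube-system scale deployment coredns --replicas=1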
	I0116 02:58:03.528138  487926 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.39.32 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0116 02:58:03.530122  487926 out.go:177] * Verifying Kubernetes components...
	I0116 02:58:03.531642  487926 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 02:58:03.548823  487926 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17965-468241/kubeconfig
	I0116 02:58:03.549153  487926 kapi.go:59] client config for multinode-405494: &rest.Config{Host:"https://192.168.39.70:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17965-468241/.minikube/profiles/multinode-405494/client.crt", KeyFile:"/home/jenkins/minikube-integration/17965-468241/.minikube/profiles/multinode-405494/client.key", CAFile:"/home/jenkins/minikube-integration/17965-468241/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), N
extProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c19be0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0116 02:58:03.549472  487926 node_ready.go:35] waiting up to 6m0s for node "multinode-405494-m02" to be "Ready" ...
	I0116 02:58:03.549555  487926 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/multinode-405494-m02
	I0116 02:58:03.549564  487926 round_trippers.go:469] Request Headers:
	I0116 02:58:03.549572  487926 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:58:03.549579  487926 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:58:03.552321  487926 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:58:03.552347  487926 round_trippers.go:577] Response Headers:
	I0116 02:58:03.552357  487926 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:58:03.552366  487926 round_trippers.go:580]     Content-Type: application/json
	I0116 02:58:03.552374  487926 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 02:58:03.552379  487926 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 02:58:03.552384  487926 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:58:03 GMT
	I0116 02:58:03.552392  487926 round_trippers.go:580]     Audit-Id: caede791-ae35-40e8-a312-128365b1912b
	I0116 02:58:03.552695  487926 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-405494-m02","uid":"db05f602-4c14-49d7-93c1-517732722bbd","resourceVersion":"492","creationTimestamp":"2024-01-16T02:58:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-405494-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-405494","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T02_58_03_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:58:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 3167 chars]
	I0116 02:58:04.050554  487926 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/multinode-405494-m02
	I0116 02:58:04.050581  487926 round_trippers.go:469] Request Headers:
	I0116 02:58:04.050590  487926 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:58:04.050596  487926 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:58:04.053534  487926 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:58:04.053569  487926 round_trippers.go:577] Response Headers:
	I0116 02:58:04.053581  487926 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 02:58:04.053591  487926 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:58:04 GMT
	I0116 02:58:04.053600  487926 round_trippers.go:580]     Audit-Id: 4524655e-9825-49f6-bdad-940006e8d9d7
	I0116 02:58:04.053648  487926 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:58:04.053659  487926 round_trippers.go:580]     Content-Type: application/json
	I0116 02:58:04.053666  487926 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 02:58:04.054103  487926 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-405494-m02","uid":"db05f602-4c14-49d7-93c1-517732722bbd","resourceVersion":"492","creationTimestamp":"2024-01-16T02:58:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-405494-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-405494","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T02_58_03_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:58:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 3167 chars]
	I0116 02:58:04.549777  487926 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/multinode-405494-m02
	I0116 02:58:04.549805  487926 round_trippers.go:469] Request Headers:
	I0116 02:58:04.549827  487926 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:58:04.549833  487926 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:58:04.552957  487926 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 02:58:04.552984  487926 round_trippers.go:577] Response Headers:
	I0116 02:58:04.552995  487926 round_trippers.go:580]     Audit-Id: 32ee9f71-7d61-4eb7-afe2-83ca34a0e187
	I0116 02:58:04.553003  487926 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:58:04.553010  487926 round_trippers.go:580]     Content-Type: application/json
	I0116 02:58:04.553018  487926 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 02:58:04.553053  487926 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 02:58:04.553062  487926 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:58:04 GMT
	I0116 02:58:04.553427  487926 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-405494-m02","uid":"db05f602-4c14-49d7-93c1-517732722bbd","resourceVersion":"492","creationTimestamp":"2024-01-16T02:58:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-405494-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-405494","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T02_58_03_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:58:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 3167 chars]
	I0116 02:58:05.050130  487926 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/multinode-405494-m02
	I0116 02:58:05.050161  487926 round_trippers.go:469] Request Headers:
	I0116 02:58:05.050170  487926 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:58:05.050176  487926 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:58:05.053018  487926 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:58:05.053046  487926 round_trippers.go:577] Response Headers:
	I0116 02:58:05.053059  487926 round_trippers.go:580]     Audit-Id: ad5d4d5c-a6c4-4193-8674-625635f576d8
	I0116 02:58:05.053068  487926 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:58:05.053077  487926 round_trippers.go:580]     Content-Type: application/json
	I0116 02:58:05.053085  487926 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 02:58:05.053093  487926 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 02:58:05.053102  487926 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:58:05 GMT
	I0116 02:58:05.053266  487926 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-405494-m02","uid":"db05f602-4c14-49d7-93c1-517732722bbd","resourceVersion":"492","creationTimestamp":"2024-01-16T02:58:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-405494-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-405494","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T02_58_03_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:58:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 3167 chars]
	I0116 02:58:05.550536  487926 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/multinode-405494-m02
	I0116 02:58:05.550561  487926 round_trippers.go:469] Request Headers:
	I0116 02:58:05.550570  487926 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:58:05.550576  487926 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:58:05.554033  487926 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 02:58:05.554062  487926 round_trippers.go:577] Response Headers:
	I0116 02:58:05.554073  487926 round_trippers.go:580]     Content-Type: application/json
	I0116 02:58:05.554081  487926 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 02:58:05.554088  487926 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 02:58:05.554098  487926 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:58:05 GMT
	I0116 02:58:05.554106  487926 round_trippers.go:580]     Audit-Id: cfb02e66-bc7c-444c-ba88-af1d4c2da1c0
	I0116 02:58:05.554114  487926 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:58:05.554690  487926 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-405494-m02","uid":"db05f602-4c14-49d7-93c1-517732722bbd","resourceVersion":"492","creationTimestamp":"2024-01-16T02:58:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-405494-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-405494","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T02_58_03_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:58:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 3167 chars]
	I0116 02:58:05.554949  487926 node_ready.go:58] node "multinode-405494-m02" has status "Ready":"False"
	I0116 02:58:06.050448  487926 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/multinode-405494-m02
	I0116 02:58:06.050479  487926 round_trippers.go:469] Request Headers:
	I0116 02:58:06.050491  487926 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:58:06.050500  487926 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:58:06.054809  487926 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0116 02:58:06.054832  487926 round_trippers.go:577] Response Headers:
	I0116 02:58:06.054840  487926 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 02:58:06.054846  487926 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 02:58:06.054851  487926 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:58:06 GMT
	I0116 02:58:06.054856  487926 round_trippers.go:580]     Audit-Id: 77b73abb-759b-4757-8196-ff21e7d9c71d
	I0116 02:58:06.054862  487926 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:58:06.054870  487926 round_trippers.go:580]     Content-Type: application/json
	I0116 02:58:06.055132  487926 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-405494-m02","uid":"db05f602-4c14-49d7-93c1-517732722bbd","resourceVersion":"492","creationTimestamp":"2024-01-16T02:58:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-405494-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-405494","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T02_58_03_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:58:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 3167 chars]
	I0116 02:58:06.549823  487926 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/multinode-405494-m02
	I0116 02:58:06.549852  487926 round_trippers.go:469] Request Headers:
	I0116 02:58:06.549860  487926 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:58:06.549886  487926 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:58:06.553117  487926 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 02:58:06.553150  487926 round_trippers.go:577] Response Headers:
	I0116 02:58:06.553162  487926 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 02:58:06.553171  487926 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:58:06 GMT
	I0116 02:58:06.553178  487926 round_trippers.go:580]     Audit-Id: 2f5fe981-a486-440b-ba4a-b00d9966cf94
	I0116 02:58:06.553186  487926 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:58:06.553194  487926 round_trippers.go:580]     Content-Type: application/json
	I0116 02:58:06.553202  487926 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 02:58:06.553351  487926 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-405494-m02","uid":"db05f602-4c14-49d7-93c1-517732722bbd","resourceVersion":"492","creationTimestamp":"2024-01-16T02:58:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-405494-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-405494","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T02_58_03_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:58:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 3167 chars]
	I0116 02:58:07.049748  487926 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/multinode-405494-m02
	I0116 02:58:07.049773  487926 round_trippers.go:469] Request Headers:
	I0116 02:58:07.049782  487926 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:58:07.049788  487926 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:58:07.055730  487926 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0116 02:58:07.055768  487926 round_trippers.go:577] Response Headers:
	I0116 02:58:07.055781  487926 round_trippers.go:580]     Content-Type: application/json
	I0116 02:58:07.055794  487926 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 02:58:07.055801  487926 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 02:58:07.055809  487926 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:58:07 GMT
	I0116 02:58:07.055816  487926 round_trippers.go:580]     Audit-Id: 917efc94-c845-4425-99db-04cb644f962c
	I0116 02:58:07.055823  487926 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:58:07.056041  487926 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-405494-m02","uid":"db05f602-4c14-49d7-93c1-517732722bbd","resourceVersion":"492","creationTimestamp":"2024-01-16T02:58:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-405494-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-405494","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T02_58_03_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:58:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 3167 chars]
	I0116 02:58:07.550694  487926 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/multinode-405494-m02
	I0116 02:58:07.550721  487926 round_trippers.go:469] Request Headers:
	I0116 02:58:07.550730  487926 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:58:07.550736  487926 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:58:07.553834  487926 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 02:58:07.553860  487926 round_trippers.go:577] Response Headers:
	I0116 02:58:07.553880  487926 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 02:58:07.553888  487926 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 02:58:07.553895  487926 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:58:07 GMT
	I0116 02:58:07.553903  487926 round_trippers.go:580]     Audit-Id: b255bc6d-c8ef-473a-938d-fe190748e64e
	I0116 02:58:07.553911  487926 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:58:07.553923  487926 round_trippers.go:580]     Content-Type: application/json
	I0116 02:58:07.554113  487926 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-405494-m02","uid":"db05f602-4c14-49d7-93c1-517732722bbd","resourceVersion":"492","creationTimestamp":"2024-01-16T02:58:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-405494-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-405494","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T02_58_03_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:58:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 3167 chars]
	I0116 02:58:08.050284  487926 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/multinode-405494-m02
	I0116 02:58:08.050313  487926 round_trippers.go:469] Request Headers:
	I0116 02:58:08.050325  487926 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:58:08.050336  487926 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:58:08.053380  487926 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 02:58:08.053406  487926 round_trippers.go:577] Response Headers:
	I0116 02:58:08.053418  487926 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 02:58:08.053425  487926 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 02:58:08.053433  487926 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:58:08 GMT
	I0116 02:58:08.053440  487926 round_trippers.go:580]     Audit-Id: e6bae490-da93-4b05-a73b-72c2185c3fb0
	I0116 02:58:08.053448  487926 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:58:08.053458  487926 round_trippers.go:580]     Content-Type: application/json
	I0116 02:58:08.053586  487926 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-405494-m02","uid":"db05f602-4c14-49d7-93c1-517732722bbd","resourceVersion":"492","creationTimestamp":"2024-01-16T02:58:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-405494-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-405494","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T02_58_03_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:58:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 3167 chars]
	I0116 02:58:08.053877  487926 node_ready.go:58] node "multinode-405494-m02" has status "Ready":"False"
	I0116 02:58:08.550153  487926 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/multinode-405494-m02
	I0116 02:58:08.550192  487926 round_trippers.go:469] Request Headers:
	I0116 02:58:08.550204  487926 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:58:08.550214  487926 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:58:08.649452  487926 round_trippers.go:574] Response Status: 200 OK in 99 milliseconds
	I0116 02:58:08.649485  487926 round_trippers.go:577] Response Headers:
	I0116 02:58:08.649497  487926 round_trippers.go:580]     Audit-Id: 39839d13-454d-41e4-a607-b07aa52d1fe6
	I0116 02:58:08.649505  487926 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:58:08.649513  487926 round_trippers.go:580]     Content-Type: application/json
	I0116 02:58:08.649521  487926 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 02:58:08.649530  487926 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 02:58:08.649538  487926 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:58:08 GMT
	I0116 02:58:08.649790  487926 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-405494-m02","uid":"db05f602-4c14-49d7-93c1-517732722bbd","resourceVersion":"492","creationTimestamp":"2024-01-16T02:58:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-405494-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-405494","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T02_58_03_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:58:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 3167 chars]
	I0116 02:58:09.050423  487926 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/multinode-405494-m02
	I0116 02:58:09.050449  487926 round_trippers.go:469] Request Headers:
	I0116 02:58:09.050459  487926 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:58:09.050465  487926 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:58:09.053480  487926 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:58:09.053514  487926 round_trippers.go:577] Response Headers:
	I0116 02:58:09.053524  487926 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 02:58:09.053532  487926 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 02:58:09.053544  487926 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:58:09 GMT
	I0116 02:58:09.053551  487926 round_trippers.go:580]     Audit-Id: c69792ff-2149-4505-b019-a3b117ab013e
	I0116 02:58:09.053558  487926 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:58:09.053565  487926 round_trippers.go:580]     Content-Type: application/json
	I0116 02:58:09.053866  487926 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-405494-m02","uid":"db05f602-4c14-49d7-93c1-517732722bbd","resourceVersion":"492","creationTimestamp":"2024-01-16T02:58:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-405494-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-405494","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T02_58_03_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:58:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 3167 chars]
	I0116 02:58:09.550615  487926 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/multinode-405494-m02
	I0116 02:58:09.550645  487926 round_trippers.go:469] Request Headers:
	I0116 02:58:09.550654  487926 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:58:09.550660  487926 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:58:09.553830  487926 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 02:58:09.553855  487926 round_trippers.go:577] Response Headers:
	I0116 02:58:09.553864  487926 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:58:09 GMT
	I0116 02:58:09.553869  487926 round_trippers.go:580]     Audit-Id: 091e58df-b88f-43ef-8b4c-c13fa512af15
	I0116 02:58:09.553875  487926 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:58:09.553883  487926 round_trippers.go:580]     Content-Type: application/json
	I0116 02:58:09.553890  487926 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 02:58:09.553898  487926 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 02:58:09.554115  487926 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-405494-m02","uid":"db05f602-4c14-49d7-93c1-517732722bbd","resourceVersion":"492","creationTimestamp":"2024-01-16T02:58:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-405494-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-405494","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T02_58_03_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:58:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 3167 chars]
	I0116 02:58:10.049847  487926 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/multinode-405494-m02
	I0116 02:58:10.049882  487926 round_trippers.go:469] Request Headers:
	I0116 02:58:10.049895  487926 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:58:10.049906  487926 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:58:10.054162  487926 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0116 02:58:10.054185  487926 round_trippers.go:577] Response Headers:
	I0116 02:58:10.054197  487926 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 02:58:10.054205  487926 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 02:58:10.054213  487926 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:58:10 GMT
	I0116 02:58:10.054220  487926 round_trippers.go:580]     Audit-Id: 2de69814-b9fa-4ae2-89cb-9ad808e9c3b1
	I0116 02:58:10.054235  487926 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:58:10.054248  487926 round_trippers.go:580]     Content-Type: application/json
	I0116 02:58:10.054716  487926 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-405494-m02","uid":"db05f602-4c14-49d7-93c1-517732722bbd","resourceVersion":"492","creationTimestamp":"2024-01-16T02:58:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-405494-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-405494","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T02_58_03_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:58:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 3167 chars]
	I0116 02:58:10.055095  487926 node_ready.go:58] node "multinode-405494-m02" has status "Ready":"False"
	I0116 02:58:10.549810  487926 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/multinode-405494-m02
	I0116 02:58:10.549838  487926 round_trippers.go:469] Request Headers:
	I0116 02:58:10.549847  487926 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:58:10.549853  487926 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:58:10.553315  487926 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 02:58:10.553349  487926 round_trippers.go:577] Response Headers:
	I0116 02:58:10.553361  487926 round_trippers.go:580]     Audit-Id: 9545b7e6-9102-4e6c-9888-203ff429ab02
	I0116 02:58:10.553370  487926 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:58:10.553378  487926 round_trippers.go:580]     Content-Type: application/json
	I0116 02:58:10.553387  487926 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 02:58:10.553396  487926 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 02:58:10.553407  487926 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:58:10 GMT
	I0116 02:58:10.553810  487926 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-405494-m02","uid":"db05f602-4c14-49d7-93c1-517732722bbd","resourceVersion":"492","creationTimestamp":"2024-01-16T02:58:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-405494-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-405494","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T02_58_03_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:58:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 3167 chars]
	I0116 02:58:11.050584  487926 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/multinode-405494-m02
	I0116 02:58:11.050619  487926 round_trippers.go:469] Request Headers:
	I0116 02:58:11.050627  487926 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:58:11.050633  487926 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:58:11.053606  487926 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:58:11.053633  487926 round_trippers.go:577] Response Headers:
	I0116 02:58:11.053644  487926 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 02:58:11.053652  487926 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:58:11 GMT
	I0116 02:58:11.053660  487926 round_trippers.go:580]     Audit-Id: 008f177e-fce6-4d9f-9daa-bcd9618e06d5
	I0116 02:58:11.053668  487926 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:58:11.053677  487926 round_trippers.go:580]     Content-Type: application/json
	I0116 02:58:11.053689  487926 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 02:58:11.054049  487926 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-405494-m02","uid":"db05f602-4c14-49d7-93c1-517732722bbd","resourceVersion":"514","creationTimestamp":"2024-01-16T02:58:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-405494-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-405494","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T02_58_03_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:58:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 3253 chars]
	I0116 02:58:11.054434  487926 node_ready.go:49] node "multinode-405494-m02" has status "Ready":"True"
	I0116 02:58:11.054458  487926 node_ready.go:38] duration metric: took 7.504966773s waiting for node "multinode-405494-m02" to be "Ready" ...
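The node-readiness poll above can be reproduced by hand; a minimal sketch using the same 6m budget:

    kubectl get nodes
    kubectl wait --for=condition=Ready node/multinode-405494-m02 --timeout=6m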
	I0116 02:58:11.054467  487926 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 02:58:11.054554  487926 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods
	I0116 02:58:11.054562  487926 round_trippers.go:469] Request Headers:
	I0116 02:58:11.054569  487926 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:58:11.054575  487926 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:58:11.058254  487926 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 02:58:11.058276  487926 round_trippers.go:577] Response Headers:
	I0116 02:58:11.058286  487926 round_trippers.go:580]     Audit-Id: aca69365-d413-44ab-b1be-9b9d82df1c38
	I0116 02:58:11.058293  487926 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:58:11.058301  487926 round_trippers.go:580]     Content-Type: application/json
	I0116 02:58:11.058307  487926 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 02:58:11.058315  487926 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 02:58:11.058323  487926 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:58:11 GMT
	I0116 02:58:11.059422  487926 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"514"},"items":[{"metadata":{"name":"coredns-5dd5756b68-vwqvk","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"096151e2-c59c-4dcf-bd29-2029901902c9","resourceVersion":"434","creationTimestamp":"2024-01-16T02:57:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c3967b3a-de7c-4ccd-af3a-dd2a9e8b71f8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:57:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c3967b3a-de7c-4ccd-af3a-dd2a9e8b71f8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 67324 chars]
	I0116 02:58:11.061591  487926 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-vwqvk" in "kube-system" namespace to be "Ready" ...
	I0116 02:58:11.061704  487926 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-vwqvk
	I0116 02:58:11.061715  487926 round_trippers.go:469] Request Headers:
	I0116 02:58:11.061728  487926 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:58:11.061738  487926 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:58:11.064200  487926 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:58:11.064229  487926 round_trippers.go:577] Response Headers:
	I0116 02:58:11.064240  487926 round_trippers.go:580]     Content-Type: application/json
	I0116 02:58:11.064249  487926 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 02:58:11.064257  487926 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 02:58:11.064265  487926 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:58:11 GMT
	I0116 02:58:11.064274  487926 round_trippers.go:580]     Audit-Id: 4f627c04-f322-4c7c-8411-fab96edc57ec
	I0116 02:58:11.064286  487926 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:58:11.064411  487926 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-vwqvk","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"096151e2-c59c-4dcf-bd29-2029901902c9","resourceVersion":"434","creationTimestamp":"2024-01-16T02:57:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c3967b3a-de7c-4ccd-af3a-dd2a9e8b71f8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:57:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c3967b3a-de7c-4ccd-af3a-dd2a9e8b71f8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6264 chars]
	I0116 02:58:11.064891  487926 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/multinode-405494
	I0116 02:58:11.064906  487926 round_trippers.go:469] Request Headers:
	I0116 02:58:11.064914  487926 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:58:11.064920  487926 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:58:11.067321  487926 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:58:11.067344  487926 round_trippers.go:577] Response Headers:
	I0116 02:58:11.067353  487926 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:58:11.067361  487926 round_trippers.go:580]     Content-Type: application/json
	I0116 02:58:11.067374  487926 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 02:58:11.067385  487926 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 02:58:11.067394  487926 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:58:11 GMT
	I0116 02:58:11.067406  487926 round_trippers.go:580]     Audit-Id: 12cdb519-0798-445f-9e60-d098b410d50a
	I0116 02:58:11.067575  487926 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-405494","uid":"396eb6bf-1dc3-46c5-8016-dbf8af754fa2","resourceVersion":"417","creationTimestamp":"2024-01-16T02:57:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-405494","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-405494","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_57_12_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:57:07Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I0116 02:58:11.068063  487926 pod_ready.go:92] pod "coredns-5dd5756b68-vwqvk" in "kube-system" namespace has status "Ready":"True"
	I0116 02:58:11.068093  487926 pod_ready.go:81] duration metric: took 6.475153ms waiting for pod "coredns-5dd5756b68-vwqvk" in "kube-system" namespace to be "Ready" ...
	I0116 02:58:11.068107  487926 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-405494" in "kube-system" namespace to be "Ready" ...
	I0116 02:58:11.068243  487926 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-405494
	I0116 02:58:11.068252  487926 round_trippers.go:469] Request Headers:
	I0116 02:58:11.068260  487926 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:58:11.068265  487926 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:58:11.070494  487926 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:58:11.070516  487926 round_trippers.go:577] Response Headers:
	I0116 02:58:11.070526  487926 round_trippers.go:580]     Content-Type: application/json
	I0116 02:58:11.070535  487926 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 02:58:11.070544  487926 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 02:58:11.070555  487926 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:58:11 GMT
	I0116 02:58:11.070576  487926 round_trippers.go:580]     Audit-Id: f1278e0d-1058-4ed4-b16d-55159249db92
	I0116 02:58:11.070587  487926 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:58:11.070835  487926 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-405494","namespace":"kube-system","uid":"3f839da7-c0c0-4546-8848-1557cbf50722","resourceVersion":"311","creationTimestamp":"2024-01-16T02:57:12Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.70:2379","kubernetes.io/config.hash":"3a9e67d0e87fe64d9531234ab850034d","kubernetes.io/config.mirror":"3a9e67d0e87fe64d9531234ab850034d","kubernetes.io/config.seen":"2024-01-16T02:57:11.711592151Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-405494","uid":"396eb6bf-1dc3-46c5-8016-dbf8af754fa2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:57:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 5843 chars]
	I0116 02:58:11.071301  487926 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/multinode-405494
	I0116 02:58:11.071320  487926 round_trippers.go:469] Request Headers:
	I0116 02:58:11.071327  487926 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:58:11.071333  487926 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:58:11.073561  487926 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:58:11.073583  487926 round_trippers.go:577] Response Headers:
	I0116 02:58:11.073593  487926 round_trippers.go:580]     Content-Type: application/json
	I0116 02:58:11.073602  487926 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 02:58:11.073610  487926 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 02:58:11.073617  487926 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:58:11 GMT
	I0116 02:58:11.073624  487926 round_trippers.go:580]     Audit-Id: d2f1444c-7cf5-4c08-bd93-ca61d6696788
	I0116 02:58:11.073631  487926 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:58:11.073868  487926 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-405494","uid":"396eb6bf-1dc3-46c5-8016-dbf8af754fa2","resourceVersion":"417","creationTimestamp":"2024-01-16T02:57:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-405494","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-405494","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_57_12_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:57:07Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I0116 02:58:11.074186  487926 pod_ready.go:92] pod "etcd-multinode-405494" in "kube-system" namespace has status "Ready":"True"
	I0116 02:58:11.074206  487926 pod_ready.go:81] duration metric: took 6.02641ms waiting for pod "etcd-multinode-405494" in "kube-system" namespace to be "Ready" ...
	I0116 02:58:11.074222  487926 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-405494" in "kube-system" namespace to be "Ready" ...
	I0116 02:58:11.074306  487926 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-405494
	I0116 02:58:11.074316  487926 round_trippers.go:469] Request Headers:
	I0116 02:58:11.074323  487926 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:58:11.074328  487926 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:58:11.076625  487926 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:58:11.076651  487926 round_trippers.go:577] Response Headers:
	I0116 02:58:11.076660  487926 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:58:11.076689  487926 round_trippers.go:580]     Content-Type: application/json
	I0116 02:58:11.076702  487926 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 02:58:11.076711  487926 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 02:58:11.076722  487926 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:58:11 GMT
	I0116 02:58:11.076729  487926 round_trippers.go:580]     Audit-Id: 6156829d-7927-4297-8688-79b8d7fa857c
	I0116 02:58:11.076977  487926 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-405494","namespace":"kube-system","uid":"e242d3cf-6cf7-4b47-8d3e-a83e484554a1","resourceVersion":"316","creationTimestamp":"2024-01-16T02:57:09Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.70:8443","kubernetes.io/config.hash":"04bffd1a6d3ee0aae068c41e37830c9b","kubernetes.io/config.mirror":"04bffd1a6d3ee0aae068c41e37830c9b","kubernetes.io/config.seen":"2024-01-16T02:57:02.078602539Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-405494","uid":"396eb6bf-1dc3-46c5-8016-dbf8af754fa2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:57:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7380 chars]
	I0116 02:58:11.077395  487926 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/multinode-405494
	I0116 02:58:11.077410  487926 round_trippers.go:469] Request Headers:
	I0116 02:58:11.077418  487926 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:58:11.077424  487926 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:58:11.079329  487926 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0116 02:58:11.079350  487926 round_trippers.go:577] Response Headers:
	I0116 02:58:11.079359  487926 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:58:11.079365  487926 round_trippers.go:580]     Content-Type: application/json
	I0116 02:58:11.079370  487926 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 02:58:11.079375  487926 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 02:58:11.079380  487926 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:58:11 GMT
	I0116 02:58:11.079384  487926 round_trippers.go:580]     Audit-Id: 0e6011e3-162d-4b14-b01a-9e9fd3c23b33
	I0116 02:58:11.079561  487926 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-405494","uid":"396eb6bf-1dc3-46c5-8016-dbf8af754fa2","resourceVersion":"417","creationTimestamp":"2024-01-16T02:57:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-405494","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-405494","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_57_12_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:57:07Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I0116 02:58:11.079944  487926 pod_ready.go:92] pod "kube-apiserver-multinode-405494" in "kube-system" namespace has status "Ready":"True"
	I0116 02:58:11.079967  487926 pod_ready.go:81] duration metric: took 5.735441ms waiting for pod "kube-apiserver-multinode-405494" in "kube-system" namespace to be "Ready" ...
	I0116 02:58:11.079979  487926 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-405494" in "kube-system" namespace to be "Ready" ...
	I0116 02:58:11.080132  487926 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-405494
	I0116 02:58:11.080146  487926 round_trippers.go:469] Request Headers:
	I0116 02:58:11.080157  487926 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:58:11.080167  487926 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:58:11.082518  487926 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:58:11.082538  487926 round_trippers.go:577] Response Headers:
	I0116 02:58:11.082547  487926 round_trippers.go:580]     Audit-Id: 2f8206fe-c3be-4552-a8bc-fd8921a13696
	I0116 02:58:11.082553  487926 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:58:11.082558  487926 round_trippers.go:580]     Content-Type: application/json
	I0116 02:58:11.082563  487926 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 02:58:11.082568  487926 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 02:58:11.082580  487926 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:58:11 GMT
	I0116 02:58:11.082883  487926 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-405494","namespace":"kube-system","uid":"0833b412-8909-4660-8e16-19701683358e","resourceVersion":"319","creationTimestamp":"2024-01-16T02:57:12Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"9eb78063d6e219f3cc5940494bdab4b2","kubernetes.io/config.mirror":"9eb78063d6e219f3cc5940494bdab4b2","kubernetes.io/config.seen":"2024-01-16T02:57:11.711589408Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-405494","uid":"396eb6bf-1dc3-46c5-8016-dbf8af754fa2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:57:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6950 chars]
	I0116 02:58:11.083443  487926 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/multinode-405494
	I0116 02:58:11.083462  487926 round_trippers.go:469] Request Headers:
	I0116 02:58:11.083471  487926 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:58:11.083481  487926 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:58:11.088981  487926 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0116 02:58:11.089003  487926 round_trippers.go:577] Response Headers:
	I0116 02:58:11.089012  487926 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 02:58:11.089020  487926 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:58:11 GMT
	I0116 02:58:11.089027  487926 round_trippers.go:580]     Audit-Id: 7983b906-5575-4770-99cf-f89222d0480e
	I0116 02:58:11.089033  487926 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:58:11.089038  487926 round_trippers.go:580]     Content-Type: application/json
	I0116 02:58:11.089048  487926 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 02:58:11.089246  487926 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-405494","uid":"396eb6bf-1dc3-46c5-8016-dbf8af754fa2","resourceVersion":"417","creationTimestamp":"2024-01-16T02:57:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-405494","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-405494","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_57_12_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:57:07Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I0116 02:58:11.089675  487926 pod_ready.go:92] pod "kube-controller-manager-multinode-405494" in "kube-system" namespace has status "Ready":"True"
	I0116 02:58:11.089704  487926 pod_ready.go:81] duration metric: took 9.70967ms waiting for pod "kube-controller-manager-multinode-405494" in "kube-system" namespace to be "Ready" ...
	I0116 02:58:11.089718  487926 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-gg8kv" in "kube-system" namespace to be "Ready" ...
	I0116 02:58:11.250693  487926 request.go:629] Waited for 160.902663ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gg8kv
	I0116 02:58:11.250760  487926 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gg8kv
	I0116 02:58:11.250766  487926 round_trippers.go:469] Request Headers:
	I0116 02:58:11.250774  487926 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:58:11.250780  487926 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:58:11.254179  487926 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 02:58:11.254205  487926 round_trippers.go:577] Response Headers:
	I0116 02:58:11.254212  487926 round_trippers.go:580]     Audit-Id: 35250293-6e98-447f-b1a4-00affd310578
	I0116 02:58:11.254220  487926 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:58:11.254228  487926 round_trippers.go:580]     Content-Type: application/json
	I0116 02:58:11.254236  487926 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 02:58:11.254253  487926 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 02:58:11.254260  487926 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:58:11 GMT
	I0116 02:58:11.254530  487926 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-gg8kv","generateName":"kube-proxy-","namespace":"kube-system","uid":"32841b88-1b06-46ed-b4ce-f73301ec0a85","resourceVersion":"407","creationTimestamp":"2024-01-16T02:57:23Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"fdad84fd-2af6-4360-a899-0f70f257935c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:57:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fdad84fd-2af6-4360-a899-0f70f257935c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5514 chars]
	I0116 02:58:11.451309  487926 request.go:629] Waited for 196.154161ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.70:8443/api/v1/nodes/multinode-405494
	I0116 02:58:11.451374  487926 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/multinode-405494
	I0116 02:58:11.451392  487926 round_trippers.go:469] Request Headers:
	I0116 02:58:11.451400  487926 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:58:11.451409  487926 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:58:11.454327  487926 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:58:11.454364  487926 round_trippers.go:577] Response Headers:
	I0116 02:58:11.454376  487926 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 02:58:11.454386  487926 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 02:58:11.454394  487926 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:58:11 GMT
	I0116 02:58:11.454407  487926 round_trippers.go:580]     Audit-Id: c1a18eff-4cfc-4746-bfb4-e55eee81f0e1
	I0116 02:58:11.454415  487926 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:58:11.454429  487926 round_trippers.go:580]     Content-Type: application/json
	I0116 02:58:11.454607  487926 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-405494","uid":"396eb6bf-1dc3-46c5-8016-dbf8af754fa2","resourceVersion":"417","creationTimestamp":"2024-01-16T02:57:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-405494","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-405494","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_57_12_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:57:07Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I0116 02:58:11.454979  487926 pod_ready.go:92] pod "kube-proxy-gg8kv" in "kube-system" namespace has status "Ready":"True"
	I0116 02:58:11.455005  487926 pod_ready.go:81] duration metric: took 365.275378ms waiting for pod "kube-proxy-gg8kv" in "kube-system" namespace to be "Ready" ...
	I0116 02:58:11.455025  487926 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-m46rb" in "kube-system" namespace to be "Ready" ...
	I0116 02:58:11.650881  487926 request.go:629] Waited for 195.749082ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/kube-proxy-m46rb
	I0116 02:58:11.650963  487926 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/kube-proxy-m46rb
	I0116 02:58:11.650968  487926 round_trippers.go:469] Request Headers:
	I0116 02:58:11.650976  487926 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:58:11.650983  487926 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:58:11.654494  487926 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 02:58:11.654527  487926 round_trippers.go:577] Response Headers:
	I0116 02:58:11.654537  487926 round_trippers.go:580]     Audit-Id: 6a3220ac-926a-4718-b919-57c17370882f
	I0116 02:58:11.654545  487926 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:58:11.654560  487926 round_trippers.go:580]     Content-Type: application/json
	I0116 02:58:11.654569  487926 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 02:58:11.654579  487926 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 02:58:11.654590  487926 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:58:11 GMT
	I0116 02:58:11.654845  487926 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-m46rb","generateName":"kube-proxy-","namespace":"kube-system","uid":"960fb4d4-836f-42c5-9d56-03daae9f5a12","resourceVersion":"501","creationTimestamp":"2024-01-16T02:58:02Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"fdad84fd-2af6-4360-a899-0f70f257935c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:58:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fdad84fd-2af6-4360-a899-0f70f257935c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5522 chars]
	I0116 02:58:11.850685  487926 request.go:629] Waited for 195.353902ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.70:8443/api/v1/nodes/multinode-405494-m02
	I0116 02:58:11.850769  487926 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/multinode-405494-m02
	I0116 02:58:11.850774  487926 round_trippers.go:469] Request Headers:
	I0116 02:58:11.850783  487926 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:58:11.850801  487926 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:58:11.853829  487926 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:58:11.853865  487926 round_trippers.go:577] Response Headers:
	I0116 02:58:11.853877  487926 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 02:58:11.853891  487926 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:58:11 GMT
	I0116 02:58:11.853913  487926 round_trippers.go:580]     Audit-Id: 0f4c93ec-bf44-402e-be34-5dae9cf1a866
	I0116 02:58:11.853924  487926 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:58:11.853932  487926 round_trippers.go:580]     Content-Type: application/json
	I0116 02:58:11.853940  487926 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 02:58:11.854170  487926 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-405494-m02","uid":"db05f602-4c14-49d7-93c1-517732722bbd","resourceVersion":"514","creationTimestamp":"2024-01-16T02:58:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-405494-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-405494","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T02_58_03_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:58:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 3253 chars]
	I0116 02:58:11.854465  487926 pod_ready.go:92] pod "kube-proxy-m46rb" in "kube-system" namespace has status "Ready":"True"
	I0116 02:58:11.854486  487926 pod_ready.go:81] duration metric: took 399.449259ms waiting for pod "kube-proxy-m46rb" in "kube-system" namespace to be "Ready" ...
	I0116 02:58:11.854500  487926 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-405494" in "kube-system" namespace to be "Ready" ...
	I0116 02:58:12.051663  487926 request.go:629] Waited for 197.075718ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-405494
	I0116 02:58:12.051752  487926 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-405494
	I0116 02:58:12.051758  487926 round_trippers.go:469] Request Headers:
	I0116 02:58:12.051766  487926 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:58:12.051780  487926 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:58:12.054745  487926 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:58:12.054776  487926 round_trippers.go:577] Response Headers:
	I0116 02:58:12.054787  487926 round_trippers.go:580]     Audit-Id: 6ee4468f-c223-4e1a-8797-bf85b93ca960
	I0116 02:58:12.054797  487926 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:58:12.054806  487926 round_trippers.go:580]     Content-Type: application/json
	I0116 02:58:12.054821  487926 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 02:58:12.054828  487926 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 02:58:12.054833  487926 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:58:12 GMT
	I0116 02:58:12.055040  487926 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-405494","namespace":"kube-system","uid":"70c980cb-4ff9-45f5-960f-d8afa355229c","resourceVersion":"313","creationTimestamp":"2024-01-16T02:57:11Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"65069d20830c0b10a3d28746871e48c2","kubernetes.io/config.mirror":"65069d20830c0b10a3d28746871e48c2","kubernetes.io/config.seen":"2024-01-16T02:57:02.078604553Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-405494","uid":"396eb6bf-1dc3-46c5-8016-dbf8af754fa2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:57:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4680 chars]
	I0116 02:58:12.250856  487926 request.go:629] Waited for 195.317145ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.70:8443/api/v1/nodes/multinode-405494
	I0116 02:58:12.250944  487926 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/multinode-405494
	I0116 02:58:12.250952  487926 round_trippers.go:469] Request Headers:
	I0116 02:58:12.250964  487926 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:58:12.250985  487926 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:58:12.253950  487926 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:58:12.253986  487926 round_trippers.go:577] Response Headers:
	I0116 02:58:12.253995  487926 round_trippers.go:580]     Content-Type: application/json
	I0116 02:58:12.254000  487926 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 02:58:12.254005  487926 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 02:58:12.254010  487926 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:58:12 GMT
	I0116 02:58:12.254016  487926 round_trippers.go:580]     Audit-Id: 8898f36b-d587-4344-97f5-13df3cf30a90
	I0116 02:58:12.254025  487926 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:58:12.254231  487926 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-405494","uid":"396eb6bf-1dc3-46c5-8016-dbf8af754fa2","resourceVersion":"417","creationTimestamp":"2024-01-16T02:57:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-405494","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-405494","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_57_12_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:57:07Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I0116 02:58:12.254634  487926 pod_ready.go:92] pod "kube-scheduler-multinode-405494" in "kube-system" namespace has status "Ready":"True"
	I0116 02:58:12.254659  487926 pod_ready.go:81] duration metric: took 400.150779ms waiting for pod "kube-scheduler-multinode-405494" in "kube-system" namespace to be "Ready" ...
	I0116 02:58:12.254669  487926 pod_ready.go:38] duration metric: took 1.200182595s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
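The readiness loop above issues one GET per pod plus one per node, and the "Waited ... due to client-side throttling" entries are client-go's own rate limiter pausing between those requests, not server-side priority and fairness. A rough command-line equivalent of the same check, assuming the kubectl context that minikube writes for this profile (the selectors and timeout here are illustrative, not taken from the test):

    # wait for the static control-plane pods and for kube-proxy to report Ready
    kubectl --context multinode-405494 -n kube-system wait pod \
      --selector tier=control-plane --for=condition=Ready --timeout=6m
    kubectl --context multinode-405494 -n kube-system wait pod \
      --selector k8s-app=kube-proxy --for=condition=Ready --timeout=6m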
	I0116 02:58:12.254685  487926 system_svc.go:44] waiting for kubelet service to be running ....
	I0116 02:58:12.254746  487926 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 02:58:12.270715  487926 system_svc.go:56] duration metric: took 16.019166ms WaitForService to wait for kubelet.
	I0116 02:58:12.270751  487926 kubeadm.go:581] duration metric: took 8.742584963s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
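The kubelet check above is a single systemctl probe over SSH; is-active exits 0 when the unit is running, which is all WaitForService looks at. A minimal way to reproduce it by hand, assuming the minikube CLI and the profile name from this log:

    # prints "active" when the kubelet unit is running inside the node
    minikube -p multinode-405494 ssh "sudo systemctl is-active kubelet"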
	I0116 02:58:12.270780  487926 node_conditions.go:102] verifying NodePressure condition ...
	I0116 02:58:12.451117  487926 request.go:629] Waited for 180.248372ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.70:8443/api/v1/nodes
	I0116 02:58:12.451206  487926 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes
	I0116 02:58:12.451211  487926 round_trippers.go:469] Request Headers:
	I0116 02:58:12.451218  487926 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:58:12.451225  487926 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 02:58:12.454291  487926 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 02:58:12.454313  487926 round_trippers.go:577] Response Headers:
	I0116 02:58:12.454323  487926 round_trippers.go:580]     Audit-Id: 713e11c5-343c-4005-a531-08e4af352e54
	I0116 02:58:12.454330  487926 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:58:12.454337  487926 round_trippers.go:580]     Content-Type: application/json
	I0116 02:58:12.454358  487926 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 02:58:12.454365  487926 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 02:58:12.454373  487926 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:58:12 GMT
	I0116 02:58:12.454581  487926 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"515"},"items":[{"metadata":{"name":"multinode-405494","uid":"396eb6bf-1dc3-46c5-8016-dbf8af754fa2","resourceVersion":"417","creationTimestamp":"2024-01-16T02:57:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-405494","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-405494","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_57_12_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 10196 chars]
	I0116 02:58:12.455218  487926 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 02:58:12.455244  487926 node_conditions.go:123] node cpu capacity is 2
	I0116 02:58:12.455257  487926 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 02:58:12.455263  487926 node_conditions.go:123] node cpu capacity is 2
	I0116 02:58:12.455269  487926 node_conditions.go:105] duration metric: took 184.482617ms to run NodePressure ...
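The NodePressure step reads each node's conditions and capacity from the NodeList fetched above, which is why both nodes report 2 CPUs and 17784752Ki of ephemeral storage here. The same fields can be pulled with a jsonpath query; this is only an illustrative way to eyeball them and is not part of the test:

    kubectl --context multinode-405494 get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.capacity.cpu}{"\t"}{.status.capacity.ephemeral-storage}{"\n"}{end}'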
	I0116 02:58:12.455288  487926 start.go:228] waiting for startup goroutines ...
	I0116 02:58:12.455326  487926 start.go:242] writing updated cluster config ...
	I0116 02:58:12.455633  487926 ssh_runner.go:195] Run: rm -f paused
	I0116 02:58:12.509408  487926 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I0116 02:58:12.511841  487926 out.go:177] * Done! kubectl is now configured to use "multinode-405494" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	-- Journal begins at Tue 2024-01-16 02:56:37 UTC, ends at Tue 2024-01-16 02:58:19 UTC. --
	Jan 16 02:58:19 multinode-405494 crio[714]: time="2024-01-16 02:58:19.563983891Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705373899563971105,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125543,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=34b033b4-e266-4c1d-be3f-e30e06f9376b name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 02:58:19 multinode-405494 crio[714]: time="2024-01-16 02:58:19.564504906Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=9d59e832-480b-4a3d-9210-1cea5639034c name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 02:58:19 multinode-405494 crio[714]: time="2024-01-16 02:58:19.564580168Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=9d59e832-480b-4a3d-9210-1cea5639034c name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 02:58:19 multinode-405494 crio[714]: time="2024-01-16 02:58:19.564780920Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1cedfc16d4a48868d3c0050c99f23cccf77d39bc5e1fe8938002eb54639230aa,PodSandboxId:710cb9e863bd30fbeb7f72cd2f47d45689f09fea049aa1f519d56b13f64eb209,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1705373895371507588,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-r9bv6,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 73a7a6a1-28ed-452e-8073-025f2e1289be,},Annotations:map[string]string{io.kubernetes.container.hash: 4950379d,io.kubernetes.container.restartCount: 0,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9279f7c152a4b92ff8e9f0d96821d176cbc7c177fc06789227dc233cae7b9707,PodSandboxId:067fb5a119a7eac56ede22899e63ec1d798d977fb328765e2209ca501c7b0bae,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1705373850195459691,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-vwqvk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 096151e2-c59c-4dcf-bd29-2029901902c9,},Annotations:map[string]string{io.kubernetes.container.hash: 7c9940b4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10aa4b900ac161e778ef103d44df239ee732a732825292585710a88df0abb3e1,PodSandboxId:2abf04dddaa5ba8a802851dd56010c72c1fe0b3ab3739d134ed26009355f4266,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705373849900639853,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.names
pace: kube-system,io.kubernetes.pod.uid: c6f12cfa-46b3-4840-a7e2-258c063a19c2,},Annotations:map[string]string{io.kubernetes.container.hash: 760f1fb4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6211ee80ab9669f7472975a29777f1fa9e315058a863684eaeb726853c0df800,PodSandboxId:fd2e2fbee0fc1c83af79d02f9bb2d6b650e3402101a68db5bf0a6b589a73cbc6,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1705373847166848946,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8t86n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 4d421823-26dd-467d-94d4-28387c8e3793,},Annotations:map[string]string{io.kubernetes.container.hash: 2e973eea,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75bd73864540d9b9be16bfd7f246735aa0b9a7fd4f7520535d4c44a252b6a7ea,PodSandboxId:aea2230fd7d55a900837034cca85ef7d637ce669967efa4dca3b1af7d8dcef68,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1705373844959473452,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gg8kv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32841b88-1b06-46ed-b4ce-f73301
ec0a85,},Annotations:map[string]string{io.kubernetes.container.hash: 3089e760,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1623b43e55e398342cefe9d39b85cda1dc7389375c03eb0e58ae3953c9e72bb1,PodSandboxId:9eb58dc548746ebb0861393e651105b5afa99af11d999d989a3c7bd67995ee55,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1705373823943534615,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-405494,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a9e67d0e87fe64d9531234ab850034d,},Annotations:map[string]string{io.kubernetes
.container.hash: 56e72d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eaae4ce9add621f9432f2a01b97af9995d74e665e458a6e5354bf9342946bcfe,PodSandboxId:8f2d0e776284cadd37e5258b1bee7e9ef05742bebc8c4b052d4a4261640cc056,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1705373823445509910,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-405494,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65069d20830c0b10a3d28746871e48c2,},Annotations:map[string]string{io.kubernetes.container.has
h: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e0595e6d73ec6792d98147ead97a7a9dfcca5c25fd8ea3c04cf026509492ee6,PodSandboxId:07f2eb997c1bc6439f3c9177fdf72f656d65147bbce10175cb471081b30cc0f1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1705373823368087770,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-405494,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9eb78063d6e219f3cc5940494bdab4b2,},Annotations:map[string]string{io.
kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:537fb6a84e23775f1b592ea5040f1648a20bc4b8a721687aae38b405643997cf,PodSandboxId:bc5b232759b82e2ffc0f87c3ffb73bb2b07c01ab385b9651f709212ea745bbcf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1705373823223541048,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-405494,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04bffd1a6d3ee0aae068c41e37830c9b,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: aaf37b8e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=9d59e832-480b-4a3d-9210-1cea5639034c name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 02:58:19 multinode-405494 crio[714]: time="2024-01-16 02:58:19.609390767Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=3e0f4599-5970-4662-9c6c-47af639cf6a4 name=/runtime.v1.RuntimeService/Version
	Jan 16 02:58:19 multinode-405494 crio[714]: time="2024-01-16 02:58:19.609477032Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=3e0f4599-5970-4662-9c6c-47af639cf6a4 name=/runtime.v1.RuntimeService/Version
	Jan 16 02:58:19 multinode-405494 crio[714]: time="2024-01-16 02:58:19.611570159Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=b2ba0101-1a1b-4299-bf33-18c55320782e name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 02:58:19 multinode-405494 crio[714]: time="2024-01-16 02:58:19.611954323Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705373899611940345,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125543,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=b2ba0101-1a1b-4299-bf33-18c55320782e name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 02:58:19 multinode-405494 crio[714]: time="2024-01-16 02:58:19.612609075Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=4869a3ea-b403-4daa-885d-9fd0c87c6b7a name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 02:58:19 multinode-405494 crio[714]: time="2024-01-16 02:58:19.612744977Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=4869a3ea-b403-4daa-885d-9fd0c87c6b7a name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 02:58:19 multinode-405494 crio[714]: time="2024-01-16 02:58:19.612932958Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1cedfc16d4a48868d3c0050c99f23cccf77d39bc5e1fe8938002eb54639230aa,PodSandboxId:710cb9e863bd30fbeb7f72cd2f47d45689f09fea049aa1f519d56b13f64eb209,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1705373895371507588,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-r9bv6,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 73a7a6a1-28ed-452e-8073-025f2e1289be,},Annotations:map[string]string{io.kubernetes.container.hash: 4950379d,io.kubernetes.container.restartCount: 0,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9279f7c152a4b92ff8e9f0d96821d176cbc7c177fc06789227dc233cae7b9707,PodSandboxId:067fb5a119a7eac56ede22899e63ec1d798d977fb328765e2209ca501c7b0bae,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1705373850195459691,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-vwqvk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 096151e2-c59c-4dcf-bd29-2029901902c9,},Annotations:map[string]string{io.kubernetes.container.hash: 7c9940b4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10aa4b900ac161e778ef103d44df239ee732a732825292585710a88df0abb3e1,PodSandboxId:2abf04dddaa5ba8a802851dd56010c72c1fe0b3ab3739d134ed26009355f4266,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705373849900639853,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.names
pace: kube-system,io.kubernetes.pod.uid: c6f12cfa-46b3-4840-a7e2-258c063a19c2,},Annotations:map[string]string{io.kubernetes.container.hash: 760f1fb4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6211ee80ab9669f7472975a29777f1fa9e315058a863684eaeb726853c0df800,PodSandboxId:fd2e2fbee0fc1c83af79d02f9bb2d6b650e3402101a68db5bf0a6b589a73cbc6,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1705373847166848946,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8t86n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 4d421823-26dd-467d-94d4-28387c8e3793,},Annotations:map[string]string{io.kubernetes.container.hash: 2e973eea,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75bd73864540d9b9be16bfd7f246735aa0b9a7fd4f7520535d4c44a252b6a7ea,PodSandboxId:aea2230fd7d55a900837034cca85ef7d637ce669967efa4dca3b1af7d8dcef68,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1705373844959473452,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gg8kv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32841b88-1b06-46ed-b4ce-f73301
ec0a85,},Annotations:map[string]string{io.kubernetes.container.hash: 3089e760,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1623b43e55e398342cefe9d39b85cda1dc7389375c03eb0e58ae3953c9e72bb1,PodSandboxId:9eb58dc548746ebb0861393e651105b5afa99af11d999d989a3c7bd67995ee55,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1705373823943534615,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-405494,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a9e67d0e87fe64d9531234ab850034d,},Annotations:map[string]string{io.kubernetes
.container.hash: 56e72d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eaae4ce9add621f9432f2a01b97af9995d74e665e458a6e5354bf9342946bcfe,PodSandboxId:8f2d0e776284cadd37e5258b1bee7e9ef05742bebc8c4b052d4a4261640cc056,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1705373823445509910,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-405494,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65069d20830c0b10a3d28746871e48c2,},Annotations:map[string]string{io.kubernetes.container.has
h: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e0595e6d73ec6792d98147ead97a7a9dfcca5c25fd8ea3c04cf026509492ee6,PodSandboxId:07f2eb997c1bc6439f3c9177fdf72f656d65147bbce10175cb471081b30cc0f1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1705373823368087770,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-405494,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9eb78063d6e219f3cc5940494bdab4b2,},Annotations:map[string]string{io.
kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:537fb6a84e23775f1b592ea5040f1648a20bc4b8a721687aae38b405643997cf,PodSandboxId:bc5b232759b82e2ffc0f87c3ffb73bb2b07c01ab385b9651f709212ea745bbcf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1705373823223541048,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-405494,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04bffd1a6d3ee0aae068c41e37830c9b,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: aaf37b8e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=4869a3ea-b403-4daa-885d-9fd0c87c6b7a name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 02:58:19 multinode-405494 crio[714]: time="2024-01-16 02:58:19.661467634Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=bfefaae7-9542-4d00-a47d-46895754906f name=/runtime.v1.RuntimeService/Version
	Jan 16 02:58:19 multinode-405494 crio[714]: time="2024-01-16 02:58:19.661639034Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=bfefaae7-9542-4d00-a47d-46895754906f name=/runtime.v1.RuntimeService/Version
	Jan 16 02:58:19 multinode-405494 crio[714]: time="2024-01-16 02:58:19.662983083Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=68b57139-2946-4b57-aa21-8909fdf171e2 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 02:58:19 multinode-405494 crio[714]: time="2024-01-16 02:58:19.663457899Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705373899663442739,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125543,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=68b57139-2946-4b57-aa21-8909fdf171e2 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 02:58:19 multinode-405494 crio[714]: time="2024-01-16 02:58:19.663907343Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=4124074f-ddf2-4a2e-ba70-374f6a32c4f4 name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 02:58:19 multinode-405494 crio[714]: time="2024-01-16 02:58:19.663987881Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=4124074f-ddf2-4a2e-ba70-374f6a32c4f4 name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 02:58:19 multinode-405494 crio[714]: time="2024-01-16 02:58:19.664275125Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1cedfc16d4a48868d3c0050c99f23cccf77d39bc5e1fe8938002eb54639230aa,PodSandboxId:710cb9e863bd30fbeb7f72cd2f47d45689f09fea049aa1f519d56b13f64eb209,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1705373895371507588,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-r9bv6,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 73a7a6a1-28ed-452e-8073-025f2e1289be,},Annotations:map[string]string{io.kubernetes.container.hash: 4950379d,io.kubernetes.container.restartCount: 0,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9279f7c152a4b92ff8e9f0d96821d176cbc7c177fc06789227dc233cae7b9707,PodSandboxId:067fb5a119a7eac56ede22899e63ec1d798d977fb328765e2209ca501c7b0bae,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1705373850195459691,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-vwqvk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 096151e2-c59c-4dcf-bd29-2029901902c9,},Annotations:map[string]string{io.kubernetes.container.hash: 7c9940b4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10aa4b900ac161e778ef103d44df239ee732a732825292585710a88df0abb3e1,PodSandboxId:2abf04dddaa5ba8a802851dd56010c72c1fe0b3ab3739d134ed26009355f4266,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705373849900639853,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.names
pace: kube-system,io.kubernetes.pod.uid: c6f12cfa-46b3-4840-a7e2-258c063a19c2,},Annotations:map[string]string{io.kubernetes.container.hash: 760f1fb4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6211ee80ab9669f7472975a29777f1fa9e315058a863684eaeb726853c0df800,PodSandboxId:fd2e2fbee0fc1c83af79d02f9bb2d6b650e3402101a68db5bf0a6b589a73cbc6,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1705373847166848946,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8t86n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 4d421823-26dd-467d-94d4-28387c8e3793,},Annotations:map[string]string{io.kubernetes.container.hash: 2e973eea,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75bd73864540d9b9be16bfd7f246735aa0b9a7fd4f7520535d4c44a252b6a7ea,PodSandboxId:aea2230fd7d55a900837034cca85ef7d637ce669967efa4dca3b1af7d8dcef68,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1705373844959473452,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gg8kv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32841b88-1b06-46ed-b4ce-f73301
ec0a85,},Annotations:map[string]string{io.kubernetes.container.hash: 3089e760,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1623b43e55e398342cefe9d39b85cda1dc7389375c03eb0e58ae3953c9e72bb1,PodSandboxId:9eb58dc548746ebb0861393e651105b5afa99af11d999d989a3c7bd67995ee55,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1705373823943534615,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-405494,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a9e67d0e87fe64d9531234ab850034d,},Annotations:map[string]string{io.kubernetes
.container.hash: 56e72d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eaae4ce9add621f9432f2a01b97af9995d74e665e458a6e5354bf9342946bcfe,PodSandboxId:8f2d0e776284cadd37e5258b1bee7e9ef05742bebc8c4b052d4a4261640cc056,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1705373823445509910,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-405494,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65069d20830c0b10a3d28746871e48c2,},Annotations:map[string]string{io.kubernetes.container.has
h: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e0595e6d73ec6792d98147ead97a7a9dfcca5c25fd8ea3c04cf026509492ee6,PodSandboxId:07f2eb997c1bc6439f3c9177fdf72f656d65147bbce10175cb471081b30cc0f1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1705373823368087770,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-405494,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9eb78063d6e219f3cc5940494bdab4b2,},Annotations:map[string]string{io.
kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:537fb6a84e23775f1b592ea5040f1648a20bc4b8a721687aae38b405643997cf,PodSandboxId:bc5b232759b82e2ffc0f87c3ffb73bb2b07c01ab385b9651f709212ea745bbcf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1705373823223541048,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-405494,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04bffd1a6d3ee0aae068c41e37830c9b,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: aaf37b8e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=4124074f-ddf2-4a2e-ba70-374f6a32c4f4 name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 02:58:19 multinode-405494 crio[714]: time="2024-01-16 02:58:19.705530760Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=21c831bf-40cb-4fbd-bd3a-2a821f75f3d0 name=/runtime.v1.RuntimeService/Version
	Jan 16 02:58:19 multinode-405494 crio[714]: time="2024-01-16 02:58:19.705626127Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=21c831bf-40cb-4fbd-bd3a-2a821f75f3d0 name=/runtime.v1.RuntimeService/Version
	Jan 16 02:58:19 multinode-405494 crio[714]: time="2024-01-16 02:58:19.706828452Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=1d598f82-f52a-4705-ade9-0e22171fd702 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 02:58:19 multinode-405494 crio[714]: time="2024-01-16 02:58:19.707288929Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705373899707271891,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125543,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=1d598f82-f52a-4705-ade9-0e22171fd702 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 02:58:19 multinode-405494 crio[714]: time="2024-01-16 02:58:19.708538322Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=4092f57d-e828-4b67-bfeb-e51c54c04057 name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 02:58:19 multinode-405494 crio[714]: time="2024-01-16 02:58:19.708584169Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=4092f57d-e828-4b67-bfeb-e51c54c04057 name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 02:58:19 multinode-405494 crio[714]: time="2024-01-16 02:58:19.708787682Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1cedfc16d4a48868d3c0050c99f23cccf77d39bc5e1fe8938002eb54639230aa,PodSandboxId:710cb9e863bd30fbeb7f72cd2f47d45689f09fea049aa1f519d56b13f64eb209,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1705373895371507588,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-r9bv6,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 73a7a6a1-28ed-452e-8073-025f2e1289be,},Annotations:map[string]string{io.kubernetes.container.hash: 4950379d,io.kubernetes.container.restartCount: 0,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9279f7c152a4b92ff8e9f0d96821d176cbc7c177fc06789227dc233cae7b9707,PodSandboxId:067fb5a119a7eac56ede22899e63ec1d798d977fb328765e2209ca501c7b0bae,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1705373850195459691,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-vwqvk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 096151e2-c59c-4dcf-bd29-2029901902c9,},Annotations:map[string]string{io.kubernetes.container.hash: 7c9940b4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10aa4b900ac161e778ef103d44df239ee732a732825292585710a88df0abb3e1,PodSandboxId:2abf04dddaa5ba8a802851dd56010c72c1fe0b3ab3739d134ed26009355f4266,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705373849900639853,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.names
pace: kube-system,io.kubernetes.pod.uid: c6f12cfa-46b3-4840-a7e2-258c063a19c2,},Annotations:map[string]string{io.kubernetes.container.hash: 760f1fb4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6211ee80ab9669f7472975a29777f1fa9e315058a863684eaeb726853c0df800,PodSandboxId:fd2e2fbee0fc1c83af79d02f9bb2d6b650e3402101a68db5bf0a6b589a73cbc6,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1705373847166848946,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8t86n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 4d421823-26dd-467d-94d4-28387c8e3793,},Annotations:map[string]string{io.kubernetes.container.hash: 2e973eea,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75bd73864540d9b9be16bfd7f246735aa0b9a7fd4f7520535d4c44a252b6a7ea,PodSandboxId:aea2230fd7d55a900837034cca85ef7d637ce669967efa4dca3b1af7d8dcef68,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1705373844959473452,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gg8kv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32841b88-1b06-46ed-b4ce-f73301
ec0a85,},Annotations:map[string]string{io.kubernetes.container.hash: 3089e760,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1623b43e55e398342cefe9d39b85cda1dc7389375c03eb0e58ae3953c9e72bb1,PodSandboxId:9eb58dc548746ebb0861393e651105b5afa99af11d999d989a3c7bd67995ee55,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1705373823943534615,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-405494,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a9e67d0e87fe64d9531234ab850034d,},Annotations:map[string]string{io.kubernetes
.container.hash: 56e72d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eaae4ce9add621f9432f2a01b97af9995d74e665e458a6e5354bf9342946bcfe,PodSandboxId:8f2d0e776284cadd37e5258b1bee7e9ef05742bebc8c4b052d4a4261640cc056,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1705373823445509910,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-405494,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65069d20830c0b10a3d28746871e48c2,},Annotations:map[string]string{io.kubernetes.container.has
h: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e0595e6d73ec6792d98147ead97a7a9dfcca5c25fd8ea3c04cf026509492ee6,PodSandboxId:07f2eb997c1bc6439f3c9177fdf72f656d65147bbce10175cb471081b30cc0f1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1705373823368087770,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-405494,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9eb78063d6e219f3cc5940494bdab4b2,},Annotations:map[string]string{io.
kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:537fb6a84e23775f1b592ea5040f1648a20bc4b8a721687aae38b405643997cf,PodSandboxId:bc5b232759b82e2ffc0f87c3ffb73bb2b07c01ab385b9651f709212ea745bbcf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1705373823223541048,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-405494,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04bffd1a6d3ee0aae068c41e37830c9b,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: aaf37b8e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=4092f57d-e828-4b67-bfeb-e51c54c04057 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	1cedfc16d4a48       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   4 seconds ago        Running             busybox                   0                   710cb9e863bd3       busybox-5bc68d56bd-r9bv6
	9279f7c152a4b       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      49 seconds ago       Running             coredns                   0                   067fb5a119a7e       coredns-5dd5756b68-vwqvk
	10aa4b900ac16       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      49 seconds ago       Running             storage-provisioner       0                   2abf04dddaa5b       storage-provisioner
	6211ee80ab966       c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc                                      52 seconds ago       Running             kindnet-cni               0                   fd2e2fbee0fc1       kindnet-8t86n
	75bd73864540d       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      54 seconds ago       Running             kube-proxy                0                   aea2230fd7d55       kube-proxy-gg8kv
	1623b43e55e39       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      About a minute ago   Running             etcd                      0                   9eb58dc548746       etcd-multinode-405494
	eaae4ce9add62       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      About a minute ago   Running             kube-scheduler            0                   8f2d0e776284c       kube-scheduler-multinode-405494
	1e0595e6d73ec       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      About a minute ago   Running             kube-controller-manager   0                   07f2eb997c1bc       kube-controller-manager-multinode-405494
	537fb6a84e237       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      About a minute ago   Running             kube-apiserver            0                   bc5b232759b82       kube-apiserver-multinode-405494
	
	
	==> coredns [9279f7c152a4b92ff8e9f0d96821d176cbc7c177fc06789227dc233cae7b9707] <==
	[INFO] 10.244.1.2:32995 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000137375s
	[INFO] 10.244.0.3:43309 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000095447s
	[INFO] 10.244.0.3:56818 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002066211s
	[INFO] 10.244.0.3:60398 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000099146s
	[INFO] 10.244.0.3:34124 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000054387s
	[INFO] 10.244.0.3:35740 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001290585s
	[INFO] 10.244.0.3:35826 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000069827s
	[INFO] 10.244.0.3:50660 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000058662s
	[INFO] 10.244.0.3:36510 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000056456s
	[INFO] 10.244.1.2:42180 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000220029s
	[INFO] 10.244.1.2:51231 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000161089s
	[INFO] 10.244.1.2:59534 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000159358s
	[INFO] 10.244.1.2:54375 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000073336s
	[INFO] 10.244.0.3:43222 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000183877s
	[INFO] 10.244.0.3:43953 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00009116s
	[INFO] 10.244.0.3:49871 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000074753s
	[INFO] 10.244.0.3:52276 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000078258s
	[INFO] 10.244.1.2:43951 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000235238s
	[INFO] 10.244.1.2:44524 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000227188s
	[INFO] 10.244.1.2:59911 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000195172s
	[INFO] 10.244.1.2:54459 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000172443s
	[INFO] 10.244.0.3:46934 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000217795s
	[INFO] 10.244.0.3:45668 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000115247s
	[INFO] 10.244.0.3:57251 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000076018s
	[INFO] 10.244.0.3:34875 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000072473s
	
	
	==> describe nodes <==
	Name:               multinode-405494
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-405494
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6e8fa5f64d0e7272be43ff25ed3826261f0a2578
	                    minikube.k8s.io/name=multinode-405494
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_16T02_57_12_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Jan 2024 02:57:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-405494
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Jan 2024 02:58:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Jan 2024 02:57:29 +0000   Tue, 16 Jan 2024 02:57:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Jan 2024 02:57:29 +0000   Tue, 16 Jan 2024 02:57:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Jan 2024 02:57:29 +0000   Tue, 16 Jan 2024 02:57:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Jan 2024 02:57:29 +0000   Tue, 16 Jan 2024 02:57:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.70
	  Hostname:    multinode-405494
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 5f5f8b6b5e6a46f19cf1b016b5a8fabf
	  System UUID:                5f5f8b6b-5e6a-46f1-9cf1-b016b5a8fabf
	  Boot ID:                    d18df1f1-3174-4d31-925c-3b3ccf995cf1
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-r9bv6                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6s
	  kube-system                 coredns-5dd5756b68-vwqvk                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     56s
	  kube-system                 etcd-multinode-405494                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         67s
	  kube-system                 kindnet-8t86n                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      56s
	  kube-system                 kube-apiserver-multinode-405494             250m (12%)    0 (0%)      0 (0%)           0 (0%)         70s
	  kube-system                 kube-controller-manager-multinode-405494    200m (10%)    0 (0%)      0 (0%)           0 (0%)         67s
	  kube-system                 kube-proxy-gg8kv                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	  kube-system                 kube-scheduler-multinode-405494             100m (5%)     0 (0%)      0 (0%)           0 (0%)         68s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 54s                kube-proxy       
	  Normal  Starting                 77s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  77s (x8 over 77s)  kubelet          Node multinode-405494 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    77s (x8 over 77s)  kubelet          Node multinode-405494 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     77s (x7 over 77s)  kubelet          Node multinode-405494 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  77s                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 68s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  68s                kubelet          Node multinode-405494 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    68s                kubelet          Node multinode-405494 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     68s                kubelet          Node multinode-405494 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  68s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           57s                node-controller  Node multinode-405494 event: Registered Node multinode-405494 in Controller
	  Normal  NodeReady                50s                kubelet          Node multinode-405494 status is now: NodeReady
	
	
	Name:               multinode-405494-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-405494-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6e8fa5f64d0e7272be43ff25ed3826261f0a2578
	                    minikube.k8s.io/name=multinode-405494
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_01_16T02_58_03_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Jan 2024 02:58:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-405494-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Jan 2024 02:58:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Jan 2024 02:58:10 +0000   Tue, 16 Jan 2024 02:58:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Jan 2024 02:58:10 +0000   Tue, 16 Jan 2024 02:58:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Jan 2024 02:58:10 +0000   Tue, 16 Jan 2024 02:58:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Jan 2024 02:58:10 +0000   Tue, 16 Jan 2024 02:58:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.32
	  Hostname:    multinode-405494-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 c7373742aa884a86a0dc787cc32f209c
	  System UUID:                c7373742-aa88-4a86-a0dc-787cc32f209c
	  Boot ID:                    45dcb7fd-8b98-4f9a-94df-8cc9fd5728df
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-pkhcp    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6s
	  kube-system                 kindnet-ddd2h               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      17s
	  kube-system                 kube-proxy-m46rb            0 (0%)        0 (0%)      0 (0%)           0 (0%)         17s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 13s                kube-proxy       
	  Normal  RegisteredNode           17s                node-controller  Node multinode-405494-m02 event: Registered Node multinode-405494-m02 in Controller
	  Normal  NodeHasSufficientMemory  17s (x5 over 19s)  kubelet          Node multinode-405494-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    17s (x5 over 19s)  kubelet          Node multinode-405494-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     17s (x5 over 19s)  kubelet          Node multinode-405494-m02 status is now: NodeHasSufficientPID
	  Normal  NodeReady                9s                 kubelet          Node multinode-405494-m02 status is now: NodeReady
	
	
	==> dmesg <==
	[Jan16 02:56] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.068294] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.413740] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.499908] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.149701] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +5.026473] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000014] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.016004] systemd-fstab-generator[640]: Ignoring "noauto" for root device
	[  +0.102105] systemd-fstab-generator[651]: Ignoring "noauto" for root device
	[  +0.147172] systemd-fstab-generator[664]: Ignoring "noauto" for root device
	[  +0.118277] systemd-fstab-generator[675]: Ignoring "noauto" for root device
	[  +0.237933] systemd-fstab-generator[699]: Ignoring "noauto" for root device
	[Jan16 02:57] systemd-fstab-generator[923]: Ignoring "noauto" for root device
	[  +9.795098] systemd-fstab-generator[1254]: Ignoring "noauto" for root device
	[ +19.662292] kauditd_printk_skb: 18 callbacks suppressed
	
	
	==> etcd [1623b43e55e398342cefe9d39b85cda1dc7389375c03eb0e58ae3953c9e72bb1] <==
	{"level":"info","ts":"2024-01-16T02:57:05.708663Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d9e0442f914d2c09 switched to configuration voters=(15699623272105454601)"}
	{"level":"info","ts":"2024-01-16T02:57:05.708752Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"b9ca18127a3e3182","local-member-id":"d9e0442f914d2c09","added-peer-id":"d9e0442f914d2c09","added-peer-peer-urls":["https://192.168.39.70:2380"]}
	{"level":"info","ts":"2024-01-16T02:57:05.710128Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-01-16T02:57:05.710454Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.70:2380"}
	{"level":"info","ts":"2024-01-16T02:57:05.710611Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.70:2380"}
	{"level":"info","ts":"2024-01-16T02:57:05.71159Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-01-16T02:57:05.711519Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"d9e0442f914d2c09","initial-advertise-peer-urls":["https://192.168.39.70:2380"],"listen-peer-urls":["https://192.168.39.70:2380"],"advertise-client-urls":["https://192.168.39.70:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.70:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-01-16T02:57:06.281913Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d9e0442f914d2c09 is starting a new election at term 1"}
	{"level":"info","ts":"2024-01-16T02:57:06.281976Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d9e0442f914d2c09 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-01-16T02:57:06.28201Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d9e0442f914d2c09 received MsgPreVoteResp from d9e0442f914d2c09 at term 1"}
	{"level":"info","ts":"2024-01-16T02:57:06.282024Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d9e0442f914d2c09 became candidate at term 2"}
	{"level":"info","ts":"2024-01-16T02:57:06.282031Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d9e0442f914d2c09 received MsgVoteResp from d9e0442f914d2c09 at term 2"}
	{"level":"info","ts":"2024-01-16T02:57:06.282039Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d9e0442f914d2c09 became leader at term 2"}
	{"level":"info","ts":"2024-01-16T02:57:06.282046Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: d9e0442f914d2c09 elected leader d9e0442f914d2c09 at term 2"}
	{"level":"info","ts":"2024-01-16T02:57:06.283687Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-16T02:57:06.284644Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"d9e0442f914d2c09","local-member-attributes":"{Name:multinode-405494 ClientURLs:[https://192.168.39.70:2379]}","request-path":"/0/members/d9e0442f914d2c09/attributes","cluster-id":"b9ca18127a3e3182","publish-timeout":"7s"}
	{"level":"info","ts":"2024-01-16T02:57:06.284702Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-16T02:57:06.285323Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"b9ca18127a3e3182","local-member-id":"d9e0442f914d2c09","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-16T02:57:06.285418Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-16T02:57:06.285455Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-16T02:57:06.286049Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-01-16T02:57:06.291874Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-16T02:57:06.292816Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.70:2379"}
	{"level":"info","ts":"2024-01-16T02:57:06.294285Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-01-16T02:57:06.294329Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 02:58:20 up 1 min,  0 users,  load average: 1.33, 0.54, 0.20
	Linux multinode-405494 5.10.57 #1 SMP Thu Dec 28 22:04:21 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kindnet [6211ee80ab9669f7472975a29777f1fa9e315058a863684eaeb726853c0df800] <==
	I0116 02:57:28.010058       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0116 02:57:28.010371       1 main.go:107] hostIP = 192.168.39.70
	podIP = 192.168.39.70
	I0116 02:57:28.010700       1 main.go:116] setting mtu 1500 for CNI 
	I0116 02:57:28.010754       1 main.go:146] kindnetd IP family: "ipv4"
	I0116 02:57:28.010790       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0116 02:57:28.610542       1 main.go:223] Handling node with IPs: map[192.168.39.70:{}]
	I0116 02:57:28.610675       1 main.go:227] handling current node
	I0116 02:57:38.718356       1 main.go:223] Handling node with IPs: map[192.168.39.70:{}]
	I0116 02:57:38.718406       1 main.go:227] handling current node
	I0116 02:57:48.723469       1 main.go:223] Handling node with IPs: map[192.168.39.70:{}]
	I0116 02:57:48.723749       1 main.go:227] handling current node
	I0116 02:57:58.731337       1 main.go:223] Handling node with IPs: map[192.168.39.70:{}]
	I0116 02:57:58.731590       1 main.go:227] handling current node
	I0116 02:58:08.745413       1 main.go:223] Handling node with IPs: map[192.168.39.70:{}]
	I0116 02:58:08.745624       1 main.go:227] handling current node
	I0116 02:58:08.745655       1 main.go:223] Handling node with IPs: map[192.168.39.32:{}]
	I0116 02:58:08.745675       1 main.go:250] Node multinode-405494-m02 has CIDR [10.244.1.0/24] 
	I0116 02:58:08.745937       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.39.32 Flags: [] Table: 0} 
	I0116 02:58:18.752963       1 main.go:223] Handling node with IPs: map[192.168.39.70:{}]
	I0116 02:58:18.753081       1 main.go:227] handling current node
	I0116 02:58:18.753111       1 main.go:223] Handling node with IPs: map[192.168.39.32:{}]
	I0116 02:58:18.753129       1 main.go:250] Node multinode-405494-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [537fb6a84e23775f1b592ea5040f1648a20bc4b8a721687aae38b405643997cf] <==
	I0116 02:57:07.836358       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0116 02:57:07.836455       1 shared_informer.go:318] Caches are synced for configmaps
	I0116 02:57:07.840442       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0116 02:57:07.840496       1 aggregator.go:166] initial CRD sync complete...
	I0116 02:57:07.840520       1 autoregister_controller.go:141] Starting autoregister controller
	I0116 02:57:07.840541       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0116 02:57:07.840563       1 cache.go:39] Caches are synced for autoregister controller
	I0116 02:57:07.867989       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0116 02:57:07.896926       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0116 02:57:08.738594       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0116 02:57:08.752239       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0116 02:57:08.752307       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0116 02:57:09.529468       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0116 02:57:09.575491       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0116 02:57:09.663617       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0116 02:57:09.672340       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.39.70]
	I0116 02:57:09.673623       1 controller.go:624] quota admission added evaluator for: endpoints
	I0116 02:57:09.680496       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0116 02:57:09.820760       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0116 02:57:11.509665       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0116 02:57:11.527092       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0116 02:57:11.547368       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0116 02:57:23.328494       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0116 02:57:23.577616       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	E0116 02:58:17.284234       1 upgradeaware.go:425] Error proxying data from client to backend: write tcp 192.168.39.70:50434->192.168.39.70:10250: write: broken pipe
	
	
	==> kube-controller-manager [1e0595e6d73ec6792d98147ead97a7a9dfcca5c25fd8ea3c04cf026509492ee6] <==
	I0116 02:57:24.191112       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="67.931654ms"
	I0116 02:57:24.192404       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="105.088µs"
	I0116 02:57:29.110951       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="139.486µs"
	I0116 02:57:29.147746       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="181.101µs"
	I0116 02:57:30.992022       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="24.102361ms"
	I0116 02:57:30.992139       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="54.828µs"
	I0116 02:57:32.675241       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I0116 02:58:02.441058       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-405494-m02\" does not exist"
	I0116 02:58:02.458676       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-ddd2h"
	I0116 02:58:02.465291       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-m46rb"
	I0116 02:58:02.480593       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-405494-m02" podCIDRs=["10.244.1.0/24"]
	I0116 02:58:02.681602       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-405494-m02"
	I0116 02:58:02.681959       1 event.go:307] "Event occurred" object="multinode-405494-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-405494-m02 event: Registered Node multinode-405494-m02 in Controller"
	I0116 02:58:10.775573       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-405494-m02"
	I0116 02:58:13.363717       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-5bc68d56bd to 2"
	I0116 02:58:13.383110       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-pkhcp"
	I0116 02:58:13.394314       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-r9bv6"
	I0116 02:58:13.421047       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="56.485449ms"
	I0116 02:58:13.444372       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="22.849705ms"
	I0116 02:58:13.444495       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="58.507µs"
	I0116 02:58:13.444979       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="114.35µs"
	I0116 02:58:16.078311       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="10.704566ms"
	I0116 02:58:16.079090       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="38.474µs"
	I0116 02:58:16.128548       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="11.539109ms"
	I0116 02:58:16.128932       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="131.161µs"
	
	
	==> kube-proxy [75bd73864540d9b9be16bfd7f246735aa0b9a7fd4f7520535d4c44a252b6a7ea] <==
	I0116 02:57:25.192584       1 server_others.go:69] "Using iptables proxy"
	I0116 02:57:25.212075       1 node.go:141] Successfully retrieved node IP: 192.168.39.70
	I0116 02:57:25.256873       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0116 02:57:25.256945       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0116 02:57:25.259790       1 server_others.go:152] "Using iptables Proxier"
	I0116 02:57:25.259878       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0116 02:57:25.260571       1 server.go:846] "Version info" version="v1.28.4"
	I0116 02:57:25.260819       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0116 02:57:25.262447       1 config.go:188] "Starting service config controller"
	I0116 02:57:25.262501       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0116 02:57:25.262535       1 config.go:97] "Starting endpoint slice config controller"
	I0116 02:57:25.262551       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0116 02:57:25.264766       1 config.go:315] "Starting node config controller"
	I0116 02:57:25.264815       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0116 02:57:25.363600       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0116 02:57:25.363787       1 shared_informer.go:318] Caches are synced for service config
	I0116 02:57:25.365246       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [eaae4ce9add621f9432f2a01b97af9995d74e665e458a6e5354bf9342946bcfe] <==
	W0116 02:57:07.838242       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0116 02:57:07.838594       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0116 02:57:07.838373       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0116 02:57:07.838466       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0116 02:57:07.840362       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0116 02:57:07.840347       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0116 02:57:08.652023       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0116 02:57:08.652151       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0116 02:57:08.652273       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0116 02:57:08.652155       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0116 02:57:08.766403       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0116 02:57:08.766453       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0116 02:57:08.813981       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0116 02:57:08.814066       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0116 02:57:08.835656       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0116 02:57:08.835744       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0116 02:57:08.847031       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0116 02:57:08.847147       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0116 02:57:08.933619       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0116 02:57:08.933719       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0116 02:57:09.144546       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0116 02:57:09.144669       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0116 02:57:09.274843       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0116 02:57:09.274894       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0116 02:57:11.625729       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Tue 2024-01-16 02:56:37 UTC, ends at Tue 2024-01-16 02:58:20 UTC. --
	Jan 16 02:57:23 multinode-405494 kubelet[1261]: I0116 02:57:23.711103    1261 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/32841b88-1b06-46ed-b4ce-f73301ec0a85-xtables-lock\") pod \"kube-proxy-gg8kv\" (UID: \"32841b88-1b06-46ed-b4ce-f73301ec0a85\") " pod="kube-system/kube-proxy-gg8kv"
	Jan 16 02:57:23 multinode-405494 kubelet[1261]: I0116 02:57:23.711222    1261 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/32841b88-1b06-46ed-b4ce-f73301ec0a85-lib-modules\") pod \"kube-proxy-gg8kv\" (UID: \"32841b88-1b06-46ed-b4ce-f73301ec0a85\") " pod="kube-system/kube-proxy-gg8kv"
	Jan 16 02:57:23 multinode-405494 kubelet[1261]: I0116 02:57:23.711252    1261 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/4d421823-26dd-467d-94d4-28387c8e3793-cni-cfg\") pod \"kindnet-8t86n\" (UID: \"4d421823-26dd-467d-94d4-28387c8e3793\") " pod="kube-system/kindnet-8t86n"
	Jan 16 02:57:23 multinode-405494 kubelet[1261]: I0116 02:57:23.711278    1261 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4d421823-26dd-467d-94d4-28387c8e3793-lib-modules\") pod \"kindnet-8t86n\" (UID: \"4d421823-26dd-467d-94d4-28387c8e3793\") " pod="kube-system/kindnet-8t86n"
	Jan 16 02:57:23 multinode-405494 kubelet[1261]: I0116 02:57:23.711300    1261 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpzp4\" (UniqueName: \"kubernetes.io/projected/4d421823-26dd-467d-94d4-28387c8e3793-kube-api-access-mpzp4\") pod \"kindnet-8t86n\" (UID: \"4d421823-26dd-467d-94d4-28387c8e3793\") " pod="kube-system/kindnet-8t86n"
	Jan 16 02:57:23 multinode-405494 kubelet[1261]: I0116 02:57:23.711320    1261 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/32841b88-1b06-46ed-b4ce-f73301ec0a85-kube-proxy\") pod \"kube-proxy-gg8kv\" (UID: \"32841b88-1b06-46ed-b4ce-f73301ec0a85\") " pod="kube-system/kube-proxy-gg8kv"
	Jan 16 02:57:23 multinode-405494 kubelet[1261]: I0116 02:57:23.711343    1261 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4d421823-26dd-467d-94d4-28387c8e3793-xtables-lock\") pod \"kindnet-8t86n\" (UID: \"4d421823-26dd-467d-94d4-28387c8e3793\") " pod="kube-system/kindnet-8t86n"
	Jan 16 02:57:23 multinode-405494 kubelet[1261]: I0116 02:57:23.711362    1261 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hd7d4\" (UniqueName: \"kubernetes.io/projected/32841b88-1b06-46ed-b4ce-f73301ec0a85-kube-api-access-hd7d4\") pod \"kube-proxy-gg8kv\" (UID: \"32841b88-1b06-46ed-b4ce-f73301ec0a85\") " pod="kube-system/kube-proxy-gg8kv"
	Jan 16 02:57:27 multinode-405494 kubelet[1261]: I0116 02:57:27.919409    1261 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-gg8kv" podStartSLOduration=4.919355524 podCreationTimestamp="2024-01-16 02:57:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-01-16 02:57:25.910753805 +0000 UTC m=+14.422834472" watchObservedRunningTime="2024-01-16 02:57:27.919355524 +0000 UTC m=+16.431436195"
	Jan 16 02:57:29 multinode-405494 kubelet[1261]: I0116 02:57:29.068312    1261 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Jan 16 02:57:29 multinode-405494 kubelet[1261]: I0116 02:57:29.103493    1261 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-8t86n" podStartSLOduration=6.103449204 podCreationTimestamp="2024-01-16 02:57:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-01-16 02:57:27.925443072 +0000 UTC m=+16.437523739" watchObservedRunningTime="2024-01-16 02:57:29.103449204 +0000 UTC m=+17.615529868"
	Jan 16 02:57:29 multinode-405494 kubelet[1261]: I0116 02:57:29.103896    1261 topology_manager.go:215] "Topology Admit Handler" podUID="c6f12cfa-46b3-4840-a7e2-258c063a19c2" podNamespace="kube-system" podName="storage-provisioner"
	Jan 16 02:57:29 multinode-405494 kubelet[1261]: I0116 02:57:29.109853    1261 topology_manager.go:215] "Topology Admit Handler" podUID="096151e2-c59c-4dcf-bd29-2029901902c9" podNamespace="kube-system" podName="coredns-5dd5756b68-vwqvk"
	Jan 16 02:57:29 multinode-405494 kubelet[1261]: I0116 02:57:29.150065    1261 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cfrl\" (UniqueName: \"kubernetes.io/projected/c6f12cfa-46b3-4840-a7e2-258c063a19c2-kube-api-access-5cfrl\") pod \"storage-provisioner\" (UID: \"c6f12cfa-46b3-4840-a7e2-258c063a19c2\") " pod="kube-system/storage-provisioner"
	Jan 16 02:57:29 multinode-405494 kubelet[1261]: I0116 02:57:29.150115    1261 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/096151e2-c59c-4dcf-bd29-2029901902c9-config-volume\") pod \"coredns-5dd5756b68-vwqvk\" (UID: \"096151e2-c59c-4dcf-bd29-2029901902c9\") " pod="kube-system/coredns-5dd5756b68-vwqvk"
	Jan 16 02:57:29 multinode-405494 kubelet[1261]: I0116 02:57:29.150137    1261 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djtdf\" (UniqueName: \"kubernetes.io/projected/096151e2-c59c-4dcf-bd29-2029901902c9-kube-api-access-djtdf\") pod \"coredns-5dd5756b68-vwqvk\" (UID: \"096151e2-c59c-4dcf-bd29-2029901902c9\") " pod="kube-system/coredns-5dd5756b68-vwqvk"
	Jan 16 02:57:29 multinode-405494 kubelet[1261]: I0116 02:57:29.150161    1261 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/c6f12cfa-46b3-4840-a7e2-258c063a19c2-tmp\") pod \"storage-provisioner\" (UID: \"c6f12cfa-46b3-4840-a7e2-258c063a19c2\") " pod="kube-system/storage-provisioner"
	Jan 16 02:57:30 multinode-405494 kubelet[1261]: I0116 02:57:30.966745    1261 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=6.966707235 podCreationTimestamp="2024-01-16 02:57:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-01-16 02:57:30.949403066 +0000 UTC m=+19.461483738" watchObservedRunningTime="2024-01-16 02:57:30.966707235 +0000 UTC m=+19.478787966"
	Jan 16 02:57:30 multinode-405494 kubelet[1261]: I0116 02:57:30.966803    1261 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-vwqvk" podStartSLOduration=7.9667902569999995 podCreationTimestamp="2024-01-16 02:57:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-01-16 02:57:30.966775925 +0000 UTC m=+19.478856598" watchObservedRunningTime="2024-01-16 02:57:30.966790257 +0000 UTC m=+19.478870930"
	Jan 16 02:58:11 multinode-405494 kubelet[1261]: E0116 02:58:11.903114    1261 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 16 02:58:11 multinode-405494 kubelet[1261]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 16 02:58:11 multinode-405494 kubelet[1261]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 16 02:58:11 multinode-405494 kubelet[1261]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 16 02:58:13 multinode-405494 kubelet[1261]: I0116 02:58:13.406706    1261 topology_manager.go:215] "Topology Admit Handler" podUID="73a7a6a1-28ed-452e-8073-025f2e1289be" podNamespace="default" podName="busybox-5bc68d56bd-r9bv6"
	Jan 16 02:58:13 multinode-405494 kubelet[1261]: I0116 02:58:13.500682    1261 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8nvjc\" (UniqueName: \"kubernetes.io/projected/73a7a6a1-28ed-452e-8073-025f2e1289be-kube-api-access-8nvjc\") pod \"busybox-5bc68d56bd-r9bv6\" (UID: \"73a7a6a1-28ed-452e-8073-025f2e1289be\") " pod="default/busybox-5bc68d56bd-r9bv6"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-405494 -n multinode-405494
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-405494 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (3.39s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (690.76s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:311: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-405494
multinode_test.go:318: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-405494
E0116 02:59:45.868011  475478 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/ingress-addon-legacy-873808/client.crt: no such file or directory
multinode_test.go:318: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-405494: exit status 82 (2m1.029134975s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-405494"  ...
	* Stopping node "multinode-405494"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:320: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-405494" : exit status 82
multinode_test.go:323: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-405494 --wait=true -v=8 --alsologtostderr
E0116 03:01:49.160503  475478 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/functional-193417/client.crt: no such file or directory
E0116 03:02:19.246293  475478 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/addons-690916/client.crt: no such file or directory
E0116 03:03:42.294262  475478 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/addons-690916/client.crt: no such file or directory
E0116 03:04:18.182753  475478 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/ingress-addon-legacy-873808/client.crt: no such file or directory
E0116 03:06:49.160538  475478 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/functional-193417/client.crt: no such file or directory
E0116 03:07:19.246250  475478 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/addons-690916/client.crt: no such file or directory
E0116 03:08:12.209126  475478 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/functional-193417/client.crt: no such file or directory
E0116 03:09:18.183311  475478 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/ingress-addon-legacy-873808/client.crt: no such file or directory
E0116 03:10:41.228742  475478 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/ingress-addon-legacy-873808/client.crt: no such file or directory
multinode_test.go:323: (dbg) Done: out/minikube-linux-amd64 start -p multinode-405494 --wait=true -v=8 --alsologtostderr: (9m26.613595649s)
multinode_test.go:328: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-405494
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-405494 -n multinode-405494
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-405494 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-405494 logs -n 25: (1.649788974s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-405494 ssh -n                                                                 | multinode-405494 | jenkins | v1.32.0 | 16 Jan 24 02:59 UTC | 16 Jan 24 02:59 UTC |
	|         | multinode-405494-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-405494 cp multinode-405494-m02:/home/docker/cp-test.txt                       | multinode-405494 | jenkins | v1.32.0 | 16 Jan 24 02:59 UTC | 16 Jan 24 02:59 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile2786900052/001/cp-test_multinode-405494-m02.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-405494 ssh -n                                                                 | multinode-405494 | jenkins | v1.32.0 | 16 Jan 24 02:59 UTC | 16 Jan 24 02:59 UTC |
	|         | multinode-405494-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-405494 cp multinode-405494-m02:/home/docker/cp-test.txt                       | multinode-405494 | jenkins | v1.32.0 | 16 Jan 24 02:59 UTC | 16 Jan 24 02:59 UTC |
	|         | multinode-405494:/home/docker/cp-test_multinode-405494-m02_multinode-405494.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-405494 ssh -n                                                                 | multinode-405494 | jenkins | v1.32.0 | 16 Jan 24 02:59 UTC | 16 Jan 24 02:59 UTC |
	|         | multinode-405494-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-405494 ssh -n multinode-405494 sudo cat                                       | multinode-405494 | jenkins | v1.32.0 | 16 Jan 24 02:59 UTC | 16 Jan 24 02:59 UTC |
	|         | /home/docker/cp-test_multinode-405494-m02_multinode-405494.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-405494 cp multinode-405494-m02:/home/docker/cp-test.txt                       | multinode-405494 | jenkins | v1.32.0 | 16 Jan 24 02:59 UTC | 16 Jan 24 02:59 UTC |
	|         | multinode-405494-m03:/home/docker/cp-test_multinode-405494-m02_multinode-405494-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-405494 ssh -n                                                                 | multinode-405494 | jenkins | v1.32.0 | 16 Jan 24 02:59 UTC | 16 Jan 24 02:59 UTC |
	|         | multinode-405494-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-405494 ssh -n multinode-405494-m03 sudo cat                                   | multinode-405494 | jenkins | v1.32.0 | 16 Jan 24 02:59 UTC | 16 Jan 24 02:59 UTC |
	|         | /home/docker/cp-test_multinode-405494-m02_multinode-405494-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-405494 cp testdata/cp-test.txt                                                | multinode-405494 | jenkins | v1.32.0 | 16 Jan 24 02:59 UTC | 16 Jan 24 02:59 UTC |
	|         | multinode-405494-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-405494 ssh -n                                                                 | multinode-405494 | jenkins | v1.32.0 | 16 Jan 24 02:59 UTC | 16 Jan 24 02:59 UTC |
	|         | multinode-405494-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-405494 cp multinode-405494-m03:/home/docker/cp-test.txt                       | multinode-405494 | jenkins | v1.32.0 | 16 Jan 24 02:59 UTC | 16 Jan 24 02:59 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile2786900052/001/cp-test_multinode-405494-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-405494 ssh -n                                                                 | multinode-405494 | jenkins | v1.32.0 | 16 Jan 24 02:59 UTC | 16 Jan 24 02:59 UTC |
	|         | multinode-405494-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-405494 cp multinode-405494-m03:/home/docker/cp-test.txt                       | multinode-405494 | jenkins | v1.32.0 | 16 Jan 24 02:59 UTC | 16 Jan 24 02:59 UTC |
	|         | multinode-405494:/home/docker/cp-test_multinode-405494-m03_multinode-405494.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-405494 ssh -n                                                                 | multinode-405494 | jenkins | v1.32.0 | 16 Jan 24 02:59 UTC | 16 Jan 24 02:59 UTC |
	|         | multinode-405494-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-405494 ssh -n multinode-405494 sudo cat                                       | multinode-405494 | jenkins | v1.32.0 | 16 Jan 24 02:59 UTC | 16 Jan 24 02:59 UTC |
	|         | /home/docker/cp-test_multinode-405494-m03_multinode-405494.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-405494 cp multinode-405494-m03:/home/docker/cp-test.txt                       | multinode-405494 | jenkins | v1.32.0 | 16 Jan 24 02:59 UTC | 16 Jan 24 02:59 UTC |
	|         | multinode-405494-m02:/home/docker/cp-test_multinode-405494-m03_multinode-405494-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-405494 ssh -n                                                                 | multinode-405494 | jenkins | v1.32.0 | 16 Jan 24 02:59 UTC | 16 Jan 24 02:59 UTC |
	|         | multinode-405494-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-405494 ssh -n multinode-405494-m02 sudo cat                                   | multinode-405494 | jenkins | v1.32.0 | 16 Jan 24 02:59 UTC | 16 Jan 24 02:59 UTC |
	|         | /home/docker/cp-test_multinode-405494-m03_multinode-405494-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-405494 node stop m03                                                          | multinode-405494 | jenkins | v1.32.0 | 16 Jan 24 02:59 UTC | 16 Jan 24 02:59 UTC |
	| node    | multinode-405494 node start                                                             | multinode-405494 | jenkins | v1.32.0 | 16 Jan 24 02:59 UTC | 16 Jan 24 02:59 UTC |
	|         | m03 --alsologtostderr                                                                   |                  |         |         |                     |                     |
	| node    | list -p multinode-405494                                                                | multinode-405494 | jenkins | v1.32.0 | 16 Jan 24 02:59 UTC |                     |
	| stop    | -p multinode-405494                                                                     | multinode-405494 | jenkins | v1.32.0 | 16 Jan 24 02:59 UTC |                     |
	| start   | -p multinode-405494                                                                     | multinode-405494 | jenkins | v1.32.0 | 16 Jan 24 03:01 UTC | 16 Jan 24 03:11 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-405494                                                                | multinode-405494 | jenkins | v1.32.0 | 16 Jan 24 03:11 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/16 03:01:43
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0116 03:01:43.650991  491150 out.go:296] Setting OutFile to fd 1 ...
	I0116 03:01:43.651272  491150 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 03:01:43.651281  491150 out.go:309] Setting ErrFile to fd 2...
	I0116 03:01:43.651286  491150 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 03:01:43.651466  491150 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17965-468241/.minikube/bin
	I0116 03:01:43.652126  491150 out.go:303] Setting JSON to false
	I0116 03:01:43.653139  491150 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":13456,"bootTime":1705360648,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1048-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0116 03:01:43.653214  491150 start.go:138] virtualization: kvm guest
	I0116 03:01:43.655809  491150 out.go:177] * [multinode-405494] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0116 03:01:43.657526  491150 out.go:177]   - MINIKUBE_LOCATION=17965
	I0116 03:01:43.657592  491150 notify.go:220] Checking for updates...
	I0116 03:01:43.659214  491150 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0116 03:01:43.660919  491150 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17965-468241/kubeconfig
	I0116 03:01:43.662436  491150 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17965-468241/.minikube
	I0116 03:01:43.663963  491150 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0116 03:01:43.665495  491150 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0116 03:01:43.667470  491150 config.go:182] Loaded profile config "multinode-405494": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 03:01:43.667568  491150 driver.go:392] Setting default libvirt URI to qemu:///system
	I0116 03:01:43.668084  491150 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:01:43.668139  491150 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:01:43.683307  491150 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43113
	I0116 03:01:43.683805  491150 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:01:43.684393  491150 main.go:141] libmachine: Using API Version  1
	I0116 03:01:43.684417  491150 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:01:43.684784  491150 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:01:43.684987  491150 main.go:141] libmachine: (multinode-405494) Calling .DriverName
	I0116 03:01:43.723008  491150 out.go:177] * Using the kvm2 driver based on existing profile
	I0116 03:01:43.724595  491150 start.go:298] selected driver: kvm2
	I0116 03:01:43.724616  491150 start.go:902] validating driver "kvm2" against &{Name:multinode-405494 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:
{KubernetesVersion:v1.28.4 ClusterName:multinode-405494 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.70 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.32 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.182 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false
ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Bin
aryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 03:01:43.724787  491150 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0116 03:01:43.725175  491150 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0116 03:01:43.725268  491150 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17965-468241/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0116 03:01:43.741196  491150 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0116 03:01:43.742395  491150 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0116 03:01:43.742498  491150 cni.go:84] Creating CNI manager for ""
	I0116 03:01:43.742534  491150 cni.go:136] 3 nodes found, recommending kindnet
	I0116 03:01:43.742556  491150 start_flags.go:321] config:
	{Name:multinode-405494 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-405494 Namespace:default APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.70 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.32 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.182 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-prov
isioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwareP
ath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 03:01:43.742926  491150 iso.go:125] acquiring lock: {Name:mk2f3231f0eeeb23816ac363851489181398d1c6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0116 03:01:43.745734  491150 out.go:177] * Starting control plane node multinode-405494 in cluster multinode-405494
	I0116 03:01:43.747131  491150 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0116 03:01:43.747213  491150 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17965-468241/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0116 03:01:43.747230  491150 cache.go:56] Caching tarball of preloaded images
	I0116 03:01:43.747320  491150 preload.go:174] Found /home/jenkins/minikube-integration/17965-468241/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0116 03:01:43.747332  491150 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0116 03:01:43.747507  491150 profile.go:148] Saving config to /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/multinode-405494/config.json ...
	I0116 03:01:43.747738  491150 start.go:365] acquiring machines lock for multinode-405494: {Name:mk901e5fae8c1a578d989b520053c709bdbdcf06 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0116 03:01:43.747813  491150 start.go:369] acquired machines lock for "multinode-405494" in 41.29µs
	I0116 03:01:43.747837  491150 start.go:96] Skipping create...Using existing machine configuration
	I0116 03:01:43.747849  491150 fix.go:54] fixHost starting: 
	I0116 03:01:43.748202  491150 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:01:43.748255  491150 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:01:43.763086  491150 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45407
	I0116 03:01:43.763555  491150 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:01:43.764068  491150 main.go:141] libmachine: Using API Version  1
	I0116 03:01:43.764098  491150 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:01:43.764481  491150 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:01:43.764675  491150 main.go:141] libmachine: (multinode-405494) Calling .DriverName
	I0116 03:01:43.764813  491150 main.go:141] libmachine: (multinode-405494) Calling .GetState
	I0116 03:01:43.766399  491150 fix.go:102] recreateIfNeeded on multinode-405494: state=Running err=<nil>
	W0116 03:01:43.766416  491150 fix.go:128] unexpected machine state, will restart: <nil>
	I0116 03:01:43.769293  491150 out.go:177] * Updating the running kvm2 "multinode-405494" VM ...
	I0116 03:01:43.770705  491150 machine.go:88] provisioning docker machine ...
	I0116 03:01:43.770726  491150 main.go:141] libmachine: (multinode-405494) Calling .DriverName
	I0116 03:01:43.770971  491150 main.go:141] libmachine: (multinode-405494) Calling .GetMachineName
	I0116 03:01:43.771116  491150 buildroot.go:166] provisioning hostname "multinode-405494"
	I0116 03:01:43.771139  491150 main.go:141] libmachine: (multinode-405494) Calling .GetMachineName
	I0116 03:01:43.771236  491150 main.go:141] libmachine: (multinode-405494) Calling .GetSSHHostname
	I0116 03:01:43.773614  491150 main.go:141] libmachine: (multinode-405494) DBG | domain multinode-405494 has defined MAC address 52:54:00:b0:49:7b in network mk-multinode-405494
	I0116 03:01:43.774133  491150 main.go:141] libmachine: (multinode-405494) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:49:7b", ip: ""} in network mk-multinode-405494: {Iface:virbr1 ExpiryTime:2024-01-16 03:56:41 +0000 UTC Type:0 Mac:52:54:00:b0:49:7b Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:multinode-405494 Clientid:01:52:54:00:b0:49:7b}
	I0116 03:01:43.774168  491150 main.go:141] libmachine: (multinode-405494) DBG | domain multinode-405494 has defined IP address 192.168.39.70 and MAC address 52:54:00:b0:49:7b in network mk-multinode-405494
	I0116 03:01:43.774286  491150 main.go:141] libmachine: (multinode-405494) Calling .GetSSHPort
	I0116 03:01:43.774454  491150 main.go:141] libmachine: (multinode-405494) Calling .GetSSHKeyPath
	I0116 03:01:43.774589  491150 main.go:141] libmachine: (multinode-405494) Calling .GetSSHKeyPath
	I0116 03:01:43.774766  491150 main.go:141] libmachine: (multinode-405494) Calling .GetSSHUsername
	I0116 03:01:43.774912  491150 main.go:141] libmachine: Using SSH client type: native
	I0116 03:01:43.775262  491150 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.70 22 <nil> <nil>}
	I0116 03:01:43.775277  491150 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-405494 && echo "multinode-405494" | sudo tee /etc/hostname
	I0116 03:02:02.140333  491150 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I0116 03:02:08.220407  491150 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I0116 03:02:11.292395  491150 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I0116 03:02:17.372416  491150 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I0116 03:02:20.444397  491150 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I0116 03:02:26.524420  491150 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I0116 03:02:29.596358  491150 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I0116 03:02:35.676440  491150 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I0116 03:02:38.748425  491150 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I0116 03:02:44.828353  491150 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I0116 03:02:47.900366  491150 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I0116 03:02:53.980446  491150 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I0116 03:02:57.052346  491150 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I0116 03:03:03.132462  491150 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I0116 03:03:06.204396  491150 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I0116 03:03:12.284353  491150 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I0116 03:03:15.356311  491150 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I0116 03:03:21.436433  491150 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I0116 03:03:24.508306  491150 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I0116 03:03:30.588399  491150 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I0116 03:03:33.660460  491150 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I0116 03:03:39.740363  491150 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I0116 03:03:42.812398  491150 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I0116 03:03:48.892415  491150 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I0116 03:03:51.964435  491150 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I0116 03:03:58.044363  491150 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I0116 03:04:01.116409  491150 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I0116 03:04:07.196363  491150 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I0116 03:04:10.268440  491150 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I0116 03:04:16.348388  491150 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I0116 03:04:19.420318  491150 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I0116 03:04:25.500417  491150 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I0116 03:04:28.572315  491150 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I0116 03:04:34.652407  491150 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I0116 03:04:37.724434  491150 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I0116 03:04:43.804387  491150 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I0116 03:04:46.876328  491150 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I0116 03:04:52.956345  491150 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I0116 03:04:56.028373  491150 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I0116 03:05:02.108401  491150 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I0116 03:05:05.180424  491150 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I0116 03:05:11.260423  491150 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I0116 03:05:14.332315  491150 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I0116 03:05:20.412439  491150 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I0116 03:05:23.484339  491150 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I0116 03:05:29.564411  491150 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I0116 03:05:32.636350  491150 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I0116 03:05:38.716380  491150 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I0116 03:05:41.788448  491150 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I0116 03:05:47.868383  491150 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I0116 03:05:50.940379  491150 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I0116 03:05:57.020396  491150 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I0116 03:06:00.092298  491150 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I0116 03:06:06.172365  491150 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I0116 03:06:09.244340  491150 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I0116 03:06:15.324349  491150 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I0116 03:06:18.396354  491150 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I0116 03:06:24.476372  491150 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I0116 03:06:27.548460  491150 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I0116 03:06:33.628385  491150 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I0116 03:06:36.631192  491150 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0116 03:06:36.631263  491150 main.go:141] libmachine: (multinode-405494) Calling .GetSSHHostname
	I0116 03:06:36.633552  491150 machine.go:91] provisioned docker machine in 4m52.862827172s
	I0116 03:06:36.633601  491150 fix.go:56] fixHost completed within 4m52.885753591s
	I0116 03:06:36.633608  491150 start.go:83] releasing machines lock for "multinode-405494", held for 4m52.885780487s
	W0116 03:06:36.633626  491150 start.go:694] error starting host: provision: host is not running
	W0116 03:06:36.633791  491150 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0116 03:06:36.633801  491150 start.go:709] Will try again in 5 seconds ...
	I0116 03:06:41.636021  491150 start.go:365] acquiring machines lock for multinode-405494: {Name:mk901e5fae8c1a578d989b520053c709bdbdcf06 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0116 03:06:41.636174  491150 start.go:369] acquired machines lock for "multinode-405494" in 84.496µs
	I0116 03:06:41.636215  491150 start.go:96] Skipping create...Using existing machine configuration
	I0116 03:06:41.636227  491150 fix.go:54] fixHost starting: 
	I0116 03:06:41.636563  491150 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:06:41.636588  491150 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:06:41.652309  491150 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43573
	I0116 03:06:41.652869  491150 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:06:41.653428  491150 main.go:141] libmachine: Using API Version  1
	I0116 03:06:41.653460  491150 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:06:41.653819  491150 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:06:41.654048  491150 main.go:141] libmachine: (multinode-405494) Calling .DriverName
	I0116 03:06:41.654249  491150 main.go:141] libmachine: (multinode-405494) Calling .GetState
	I0116 03:06:41.656154  491150 fix.go:102] recreateIfNeeded on multinode-405494: state=Stopped err=<nil>
	I0116 03:06:41.656175  491150 main.go:141] libmachine: (multinode-405494) Calling .DriverName
	W0116 03:06:41.656425  491150 fix.go:128] unexpected machine state, will restart: <nil>
	I0116 03:06:41.658745  491150 out.go:177] * Restarting existing kvm2 VM for "multinode-405494" ...
	I0116 03:06:41.661063  491150 main.go:141] libmachine: (multinode-405494) Calling .Start
	I0116 03:06:41.661291  491150 main.go:141] libmachine: (multinode-405494) Ensuring networks are active...
	I0116 03:06:41.662363  491150 main.go:141] libmachine: (multinode-405494) Ensuring network default is active
	I0116 03:06:41.662783  491150 main.go:141] libmachine: (multinode-405494) Ensuring network mk-multinode-405494 is active
	I0116 03:06:41.663145  491150 main.go:141] libmachine: (multinode-405494) Getting domain xml...
	I0116 03:06:41.663942  491150 main.go:141] libmachine: (multinode-405494) Creating domain...
	I0116 03:06:42.008597  491150 main.go:141] libmachine: (multinode-405494) Waiting to get IP...
	I0116 03:06:42.009621  491150 main.go:141] libmachine: (multinode-405494) DBG | domain multinode-405494 has defined MAC address 52:54:00:b0:49:7b in network mk-multinode-405494
	I0116 03:06:42.010259  491150 main.go:141] libmachine: (multinode-405494) DBG | unable to find current IP address of domain multinode-405494 in network mk-multinode-405494
	I0116 03:06:42.010348  491150 main.go:141] libmachine: (multinode-405494) DBG | I0116 03:06:42.010218  491919 retry.go:31] will retry after 268.312894ms: waiting for machine to come up
	I0116 03:06:42.279874  491150 main.go:141] libmachine: (multinode-405494) DBG | domain multinode-405494 has defined MAC address 52:54:00:b0:49:7b in network mk-multinode-405494
	I0116 03:06:42.280357  491150 main.go:141] libmachine: (multinode-405494) DBG | unable to find current IP address of domain multinode-405494 in network mk-multinode-405494
	I0116 03:06:42.280385  491150 main.go:141] libmachine: (multinode-405494) DBG | I0116 03:06:42.280298  491919 retry.go:31] will retry after 265.830761ms: waiting for machine to come up
	I0116 03:06:42.547920  491150 main.go:141] libmachine: (multinode-405494) DBG | domain multinode-405494 has defined MAC address 52:54:00:b0:49:7b in network mk-multinode-405494
	I0116 03:06:42.548338  491150 main.go:141] libmachine: (multinode-405494) DBG | unable to find current IP address of domain multinode-405494 in network mk-multinode-405494
	I0116 03:06:42.548376  491150 main.go:141] libmachine: (multinode-405494) DBG | I0116 03:06:42.548286  491919 retry.go:31] will retry after 355.722664ms: waiting for machine to come up
	I0116 03:06:42.905861  491150 main.go:141] libmachine: (multinode-405494) DBG | domain multinode-405494 has defined MAC address 52:54:00:b0:49:7b in network mk-multinode-405494
	I0116 03:06:42.906344  491150 main.go:141] libmachine: (multinode-405494) DBG | unable to find current IP address of domain multinode-405494 in network mk-multinode-405494
	I0116 03:06:42.906381  491150 main.go:141] libmachine: (multinode-405494) DBG | I0116 03:06:42.906286  491919 retry.go:31] will retry after 604.891535ms: waiting for machine to come up
	I0116 03:06:43.513456  491150 main.go:141] libmachine: (multinode-405494) DBG | domain multinode-405494 has defined MAC address 52:54:00:b0:49:7b in network mk-multinode-405494
	I0116 03:06:43.513962  491150 main.go:141] libmachine: (multinode-405494) DBG | unable to find current IP address of domain multinode-405494 in network mk-multinode-405494
	I0116 03:06:43.513983  491150 main.go:141] libmachine: (multinode-405494) DBG | I0116 03:06:43.513925  491919 retry.go:31] will retry after 584.174102ms: waiting for machine to come up
	I0116 03:06:44.099762  491150 main.go:141] libmachine: (multinode-405494) DBG | domain multinode-405494 has defined MAC address 52:54:00:b0:49:7b in network mk-multinode-405494
	I0116 03:06:44.100410  491150 main.go:141] libmachine: (multinode-405494) DBG | unable to find current IP address of domain multinode-405494 in network mk-multinode-405494
	I0116 03:06:44.100460  491150 main.go:141] libmachine: (multinode-405494) DBG | I0116 03:06:44.100338  491919 retry.go:31] will retry after 593.733381ms: waiting for machine to come up
	I0116 03:06:44.696204  491150 main.go:141] libmachine: (multinode-405494) DBG | domain multinode-405494 has defined MAC address 52:54:00:b0:49:7b in network mk-multinode-405494
	I0116 03:06:44.696654  491150 main.go:141] libmachine: (multinode-405494) DBG | unable to find current IP address of domain multinode-405494 in network mk-multinode-405494
	I0116 03:06:44.696678  491150 main.go:141] libmachine: (multinode-405494) DBG | I0116 03:06:44.696590  491919 retry.go:31] will retry after 905.338774ms: waiting for machine to come up
	I0116 03:06:45.603932  491150 main.go:141] libmachine: (multinode-405494) DBG | domain multinode-405494 has defined MAC address 52:54:00:b0:49:7b in network mk-multinode-405494
	I0116 03:06:45.604437  491150 main.go:141] libmachine: (multinode-405494) DBG | unable to find current IP address of domain multinode-405494 in network mk-multinode-405494
	I0116 03:06:45.604463  491150 main.go:141] libmachine: (multinode-405494) DBG | I0116 03:06:45.604383  491919 retry.go:31] will retry after 1.005840013s: waiting for machine to come up
	I0116 03:06:46.611560  491150 main.go:141] libmachine: (multinode-405494) DBG | domain multinode-405494 has defined MAC address 52:54:00:b0:49:7b in network mk-multinode-405494
	I0116 03:06:46.612169  491150 main.go:141] libmachine: (multinode-405494) DBG | unable to find current IP address of domain multinode-405494 in network mk-multinode-405494
	I0116 03:06:46.612196  491150 main.go:141] libmachine: (multinode-405494) DBG | I0116 03:06:46.612089  491919 retry.go:31] will retry after 1.632915106s: waiting for machine to come up
	I0116 03:06:48.247647  491150 main.go:141] libmachine: (multinode-405494) DBG | domain multinode-405494 has defined MAC address 52:54:00:b0:49:7b in network mk-multinode-405494
	I0116 03:06:48.248226  491150 main.go:141] libmachine: (multinode-405494) DBG | unable to find current IP address of domain multinode-405494 in network mk-multinode-405494
	I0116 03:06:48.248265  491150 main.go:141] libmachine: (multinode-405494) DBG | I0116 03:06:48.248189  491919 retry.go:31] will retry after 1.897848219s: waiting for machine to come up
	I0116 03:06:50.148594  491150 main.go:141] libmachine: (multinode-405494) DBG | domain multinode-405494 has defined MAC address 52:54:00:b0:49:7b in network mk-multinode-405494
	I0116 03:06:50.149030  491150 main.go:141] libmachine: (multinode-405494) DBG | unable to find current IP address of domain multinode-405494 in network mk-multinode-405494
	I0116 03:06:50.149055  491150 main.go:141] libmachine: (multinode-405494) DBG | I0116 03:06:50.148998  491919 retry.go:31] will retry after 1.760073613s: waiting for machine to come up
	I0116 03:06:51.910803  491150 main.go:141] libmachine: (multinode-405494) DBG | domain multinode-405494 has defined MAC address 52:54:00:b0:49:7b in network mk-multinode-405494
	I0116 03:06:51.911327  491150 main.go:141] libmachine: (multinode-405494) DBG | unable to find current IP address of domain multinode-405494 in network mk-multinode-405494
	I0116 03:06:51.911376  491150 main.go:141] libmachine: (multinode-405494) DBG | I0116 03:06:51.911242  491919 retry.go:31] will retry after 2.92176548s: waiting for machine to come up
	I0116 03:06:54.834409  491150 main.go:141] libmachine: (multinode-405494) DBG | domain multinode-405494 has defined MAC address 52:54:00:b0:49:7b in network mk-multinode-405494
	I0116 03:06:54.834930  491150 main.go:141] libmachine: (multinode-405494) DBG | unable to find current IP address of domain multinode-405494 in network mk-multinode-405494
	I0116 03:06:54.834966  491150 main.go:141] libmachine: (multinode-405494) DBG | I0116 03:06:54.834860  491919 retry.go:31] will retry after 2.992980676s: waiting for machine to come up
	I0116 03:06:57.829102  491150 main.go:141] libmachine: (multinode-405494) DBG | domain multinode-405494 has defined MAC address 52:54:00:b0:49:7b in network mk-multinode-405494
	I0116 03:06:57.829570  491150 main.go:141] libmachine: (multinode-405494) DBG | unable to find current IP address of domain multinode-405494 in network mk-multinode-405494
	I0116 03:06:57.829594  491150 main.go:141] libmachine: (multinode-405494) DBG | I0116 03:06:57.829537  491919 retry.go:31] will retry after 5.172345661s: waiting for machine to come up
	I0116 03:07:03.003698  491150 main.go:141] libmachine: (multinode-405494) DBG | domain multinode-405494 has defined MAC address 52:54:00:b0:49:7b in network mk-multinode-405494
	I0116 03:07:03.004420  491150 main.go:141] libmachine: (multinode-405494) Found IP for machine: 192.168.39.70
	I0116 03:07:03.004456  491150 main.go:141] libmachine: (multinode-405494) DBG | domain multinode-405494 has current primary IP address 192.168.39.70 and MAC address 52:54:00:b0:49:7b in network mk-multinode-405494
	I0116 03:07:03.004471  491150 main.go:141] libmachine: (multinode-405494) Reserving static IP address...
	I0116 03:07:03.004971  491150 main.go:141] libmachine: (multinode-405494) Reserved static IP address: 192.168.39.70
	I0116 03:07:03.005005  491150 main.go:141] libmachine: (multinode-405494) DBG | found host DHCP lease matching {name: "multinode-405494", mac: "52:54:00:b0:49:7b", ip: "192.168.39.70"} in network mk-multinode-405494: {Iface:virbr1 ExpiryTime:2024-01-16 04:06:53 +0000 UTC Type:0 Mac:52:54:00:b0:49:7b Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:multinode-405494 Clientid:01:52:54:00:b0:49:7b}
	I0116 03:07:03.005020  491150 main.go:141] libmachine: (multinode-405494) Waiting for SSH to be available...
	I0116 03:07:03.005062  491150 main.go:141] libmachine: (multinode-405494) DBG | skip adding static IP to network mk-multinode-405494 - found existing host DHCP lease matching {name: "multinode-405494", mac: "52:54:00:b0:49:7b", ip: "192.168.39.70"}
	I0116 03:07:03.005088  491150 main.go:141] libmachine: (multinode-405494) DBG | Getting to WaitForSSH function...
	I0116 03:07:03.007130  491150 main.go:141] libmachine: (multinode-405494) DBG | domain multinode-405494 has defined MAC address 52:54:00:b0:49:7b in network mk-multinode-405494
	I0116 03:07:03.007487  491150 main.go:141] libmachine: (multinode-405494) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:49:7b", ip: ""} in network mk-multinode-405494: {Iface:virbr1 ExpiryTime:2024-01-16 04:06:53 +0000 UTC Type:0 Mac:52:54:00:b0:49:7b Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:multinode-405494 Clientid:01:52:54:00:b0:49:7b}
	I0116 03:07:03.007519  491150 main.go:141] libmachine: (multinode-405494) DBG | domain multinode-405494 has defined IP address 192.168.39.70 and MAC address 52:54:00:b0:49:7b in network mk-multinode-405494
	I0116 03:07:03.007637  491150 main.go:141] libmachine: (multinode-405494) DBG | Using SSH client type: external
	I0116 03:07:03.007683  491150 main.go:141] libmachine: (multinode-405494) DBG | Using SSH private key: /home/jenkins/minikube-integration/17965-468241/.minikube/machines/multinode-405494/id_rsa (-rw-------)
	I0116 03:07:03.007713  491150 main.go:141] libmachine: (multinode-405494) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.70 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17965-468241/.minikube/machines/multinode-405494/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0116 03:07:03.007728  491150 main.go:141] libmachine: (multinode-405494) DBG | About to run SSH command:
	I0116 03:07:03.007737  491150 main.go:141] libmachine: (multinode-405494) DBG | exit 0
	I0116 03:07:03.100287  491150 main.go:141] libmachine: (multinode-405494) DBG | SSH cmd err, output: <nil>: 
	I0116 03:07:03.100669  491150 main.go:141] libmachine: (multinode-405494) Calling .GetConfigRaw
	I0116 03:07:03.101382  491150 main.go:141] libmachine: (multinode-405494) Calling .GetIP
	I0116 03:07:03.104257  491150 main.go:141] libmachine: (multinode-405494) DBG | domain multinode-405494 has defined MAC address 52:54:00:b0:49:7b in network mk-multinode-405494
	I0116 03:07:03.104701  491150 main.go:141] libmachine: (multinode-405494) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:49:7b", ip: ""} in network mk-multinode-405494: {Iface:virbr1 ExpiryTime:2024-01-16 04:06:53 +0000 UTC Type:0 Mac:52:54:00:b0:49:7b Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:multinode-405494 Clientid:01:52:54:00:b0:49:7b}
	I0116 03:07:03.104732  491150 main.go:141] libmachine: (multinode-405494) DBG | domain multinode-405494 has defined IP address 192.168.39.70 and MAC address 52:54:00:b0:49:7b in network mk-multinode-405494
	I0116 03:07:03.105086  491150 profile.go:148] Saving config to /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/multinode-405494/config.json ...
	I0116 03:07:03.105311  491150 machine.go:88] provisioning docker machine ...
	I0116 03:07:03.105332  491150 main.go:141] libmachine: (multinode-405494) Calling .DriverName
	I0116 03:07:03.105593  491150 main.go:141] libmachine: (multinode-405494) Calling .GetMachineName
	I0116 03:07:03.105782  491150 buildroot.go:166] provisioning hostname "multinode-405494"
	I0116 03:07:03.105801  491150 main.go:141] libmachine: (multinode-405494) Calling .GetMachineName
	I0116 03:07:03.105963  491150 main.go:141] libmachine: (multinode-405494) Calling .GetSSHHostname
	I0116 03:07:03.108332  491150 main.go:141] libmachine: (multinode-405494) DBG | domain multinode-405494 has defined MAC address 52:54:00:b0:49:7b in network mk-multinode-405494
	I0116 03:07:03.108753  491150 main.go:141] libmachine: (multinode-405494) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:49:7b", ip: ""} in network mk-multinode-405494: {Iface:virbr1 ExpiryTime:2024-01-16 04:06:53 +0000 UTC Type:0 Mac:52:54:00:b0:49:7b Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:multinode-405494 Clientid:01:52:54:00:b0:49:7b}
	I0116 03:07:03.108788  491150 main.go:141] libmachine: (multinode-405494) DBG | domain multinode-405494 has defined IP address 192.168.39.70 and MAC address 52:54:00:b0:49:7b in network mk-multinode-405494
	I0116 03:07:03.108879  491150 main.go:141] libmachine: (multinode-405494) Calling .GetSSHPort
	I0116 03:07:03.109093  491150 main.go:141] libmachine: (multinode-405494) Calling .GetSSHKeyPath
	I0116 03:07:03.109267  491150 main.go:141] libmachine: (multinode-405494) Calling .GetSSHKeyPath
	I0116 03:07:03.109373  491150 main.go:141] libmachine: (multinode-405494) Calling .GetSSHUsername
	I0116 03:07:03.109534  491150 main.go:141] libmachine: Using SSH client type: native
	I0116 03:07:03.109924  491150 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.70 22 <nil> <nil>}
	I0116 03:07:03.109937  491150 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-405494 && echo "multinode-405494" | sudo tee /etc/hostname
	I0116 03:07:03.249597  491150 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-405494
	
	I0116 03:07:03.249638  491150 main.go:141] libmachine: (multinode-405494) Calling .GetSSHHostname
	I0116 03:07:03.252972  491150 main.go:141] libmachine: (multinode-405494) DBG | domain multinode-405494 has defined MAC address 52:54:00:b0:49:7b in network mk-multinode-405494
	I0116 03:07:03.253422  491150 main.go:141] libmachine: (multinode-405494) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:49:7b", ip: ""} in network mk-multinode-405494: {Iface:virbr1 ExpiryTime:2024-01-16 04:06:53 +0000 UTC Type:0 Mac:52:54:00:b0:49:7b Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:multinode-405494 Clientid:01:52:54:00:b0:49:7b}
	I0116 03:07:03.253463  491150 main.go:141] libmachine: (multinode-405494) DBG | domain multinode-405494 has defined IP address 192.168.39.70 and MAC address 52:54:00:b0:49:7b in network mk-multinode-405494
	I0116 03:07:03.253729  491150 main.go:141] libmachine: (multinode-405494) Calling .GetSSHPort
	I0116 03:07:03.254033  491150 main.go:141] libmachine: (multinode-405494) Calling .GetSSHKeyPath
	I0116 03:07:03.254249  491150 main.go:141] libmachine: (multinode-405494) Calling .GetSSHKeyPath
	I0116 03:07:03.254461  491150 main.go:141] libmachine: (multinode-405494) Calling .GetSSHUsername
	I0116 03:07:03.254716  491150 main.go:141] libmachine: Using SSH client type: native
	I0116 03:07:03.255036  491150 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.70 22 <nil> <nil>}
	I0116 03:07:03.255053  491150 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-405494' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-405494/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-405494' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0116 03:07:03.388811  491150 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0116 03:07:03.388849  491150 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17965-468241/.minikube CaCertPath:/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17965-468241/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17965-468241/.minikube}
	I0116 03:07:03.388875  491150 buildroot.go:174] setting up certificates
	I0116 03:07:03.388892  491150 provision.go:83] configureAuth start
	I0116 03:07:03.388910  491150 main.go:141] libmachine: (multinode-405494) Calling .GetMachineName
	I0116 03:07:03.389226  491150 main.go:141] libmachine: (multinode-405494) Calling .GetIP
	I0116 03:07:03.391848  491150 main.go:141] libmachine: (multinode-405494) DBG | domain multinode-405494 has defined MAC address 52:54:00:b0:49:7b in network mk-multinode-405494
	I0116 03:07:03.392268  491150 main.go:141] libmachine: (multinode-405494) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:49:7b", ip: ""} in network mk-multinode-405494: {Iface:virbr1 ExpiryTime:2024-01-16 04:06:53 +0000 UTC Type:0 Mac:52:54:00:b0:49:7b Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:multinode-405494 Clientid:01:52:54:00:b0:49:7b}
	I0116 03:07:03.392294  491150 main.go:141] libmachine: (multinode-405494) DBG | domain multinode-405494 has defined IP address 192.168.39.70 and MAC address 52:54:00:b0:49:7b in network mk-multinode-405494
	I0116 03:07:03.392458  491150 main.go:141] libmachine: (multinode-405494) Calling .GetSSHHostname
	I0116 03:07:03.394775  491150 main.go:141] libmachine: (multinode-405494) DBG | domain multinode-405494 has defined MAC address 52:54:00:b0:49:7b in network mk-multinode-405494
	I0116 03:07:03.395141  491150 main.go:141] libmachine: (multinode-405494) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:49:7b", ip: ""} in network mk-multinode-405494: {Iface:virbr1 ExpiryTime:2024-01-16 04:06:53 +0000 UTC Type:0 Mac:52:54:00:b0:49:7b Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:multinode-405494 Clientid:01:52:54:00:b0:49:7b}
	I0116 03:07:03.395190  491150 main.go:141] libmachine: (multinode-405494) DBG | domain multinode-405494 has defined IP address 192.168.39.70 and MAC address 52:54:00:b0:49:7b in network mk-multinode-405494
	I0116 03:07:03.395301  491150 provision.go:138] copyHostCerts
	I0116 03:07:03.395373  491150 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17965-468241/.minikube/ca.pem
	I0116 03:07:03.395424  491150 exec_runner.go:144] found /home/jenkins/minikube-integration/17965-468241/.minikube/ca.pem, removing ...
	I0116 03:07:03.395434  491150 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17965-468241/.minikube/ca.pem
	I0116 03:07:03.395498  491150 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17965-468241/.minikube/ca.pem (1078 bytes)
	I0116 03:07:03.395609  491150 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17965-468241/.minikube/cert.pem
	I0116 03:07:03.395630  491150 exec_runner.go:144] found /home/jenkins/minikube-integration/17965-468241/.minikube/cert.pem, removing ...
	I0116 03:07:03.395635  491150 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17965-468241/.minikube/cert.pem
	I0116 03:07:03.395661  491150 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17965-468241/.minikube/cert.pem (1123 bytes)
	I0116 03:07:03.395760  491150 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17965-468241/.minikube/key.pem
	I0116 03:07:03.395778  491150 exec_runner.go:144] found /home/jenkins/minikube-integration/17965-468241/.minikube/key.pem, removing ...
	I0116 03:07:03.395784  491150 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17965-468241/.minikube/key.pem
	I0116 03:07:03.395806  491150 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17965-468241/.minikube/key.pem (1679 bytes)
	I0116 03:07:03.395872  491150 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17965-468241/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca-key.pem org=jenkins.multinode-405494 san=[192.168.39.70 192.168.39.70 localhost 127.0.0.1 minikube multinode-405494]
	I0116 03:07:03.506689  491150 provision.go:172] copyRemoteCerts
	I0116 03:07:03.506759  491150 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0116 03:07:03.506787  491150 main.go:141] libmachine: (multinode-405494) Calling .GetSSHHostname
	I0116 03:07:03.509551  491150 main.go:141] libmachine: (multinode-405494) DBG | domain multinode-405494 has defined MAC address 52:54:00:b0:49:7b in network mk-multinode-405494
	I0116 03:07:03.509996  491150 main.go:141] libmachine: (multinode-405494) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:49:7b", ip: ""} in network mk-multinode-405494: {Iface:virbr1 ExpiryTime:2024-01-16 04:06:53 +0000 UTC Type:0 Mac:52:54:00:b0:49:7b Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:multinode-405494 Clientid:01:52:54:00:b0:49:7b}
	I0116 03:07:03.510033  491150 main.go:141] libmachine: (multinode-405494) DBG | domain multinode-405494 has defined IP address 192.168.39.70 and MAC address 52:54:00:b0:49:7b in network mk-multinode-405494
	I0116 03:07:03.510217  491150 main.go:141] libmachine: (multinode-405494) Calling .GetSSHPort
	I0116 03:07:03.510409  491150 main.go:141] libmachine: (multinode-405494) Calling .GetSSHKeyPath
	I0116 03:07:03.510563  491150 main.go:141] libmachine: (multinode-405494) Calling .GetSSHUsername
	I0116 03:07:03.510757  491150 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/multinode-405494/id_rsa Username:docker}
	I0116 03:07:03.602706  491150 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0116 03:07:03.602806  491150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0116 03:07:03.627081  491150 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-468241/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0116 03:07:03.627169  491150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0116 03:07:03.649855  491150 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-468241/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0116 03:07:03.649954  491150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0116 03:07:03.674918  491150 provision.go:86] duration metric: configureAuth took 286.004524ms
	I0116 03:07:03.674956  491150 buildroot.go:189] setting minikube options for container-runtime
	I0116 03:07:03.675262  491150 config.go:182] Loaded profile config "multinode-405494": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 03:07:03.675353  491150 main.go:141] libmachine: (multinode-405494) Calling .GetSSHHostname
	I0116 03:07:03.678655  491150 main.go:141] libmachine: (multinode-405494) DBG | domain multinode-405494 has defined MAC address 52:54:00:b0:49:7b in network mk-multinode-405494
	I0116 03:07:03.679093  491150 main.go:141] libmachine: (multinode-405494) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:49:7b", ip: ""} in network mk-multinode-405494: {Iface:virbr1 ExpiryTime:2024-01-16 04:06:53 +0000 UTC Type:0 Mac:52:54:00:b0:49:7b Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:multinode-405494 Clientid:01:52:54:00:b0:49:7b}
	I0116 03:07:03.679125  491150 main.go:141] libmachine: (multinode-405494) DBG | domain multinode-405494 has defined IP address 192.168.39.70 and MAC address 52:54:00:b0:49:7b in network mk-multinode-405494
	I0116 03:07:03.679320  491150 main.go:141] libmachine: (multinode-405494) Calling .GetSSHPort
	I0116 03:07:03.679574  491150 main.go:141] libmachine: (multinode-405494) Calling .GetSSHKeyPath
	I0116 03:07:03.679802  491150 main.go:141] libmachine: (multinode-405494) Calling .GetSSHKeyPath
	I0116 03:07:03.679949  491150 main.go:141] libmachine: (multinode-405494) Calling .GetSSHUsername
	I0116 03:07:03.680157  491150 main.go:141] libmachine: Using SSH client type: native
	I0116 03:07:03.680503  491150 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.70 22 <nil> <nil>}
	I0116 03:07:03.680536  491150 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0116 03:07:04.011181  491150 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0116 03:07:04.011226  491150 machine.go:91] provisioned docker machine in 905.893136ms
	I0116 03:07:04.011238  491150 start.go:300] post-start starting for "multinode-405494" (driver="kvm2")
	I0116 03:07:04.011255  491150 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0116 03:07:04.011282  491150 main.go:141] libmachine: (multinode-405494) Calling .DriverName
	I0116 03:07:04.011698  491150 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0116 03:07:04.011739  491150 main.go:141] libmachine: (multinode-405494) Calling .GetSSHHostname
	I0116 03:07:04.014934  491150 main.go:141] libmachine: (multinode-405494) DBG | domain multinode-405494 has defined MAC address 52:54:00:b0:49:7b in network mk-multinode-405494
	I0116 03:07:04.015438  491150 main.go:141] libmachine: (multinode-405494) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:49:7b", ip: ""} in network mk-multinode-405494: {Iface:virbr1 ExpiryTime:2024-01-16 04:06:53 +0000 UTC Type:0 Mac:52:54:00:b0:49:7b Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:multinode-405494 Clientid:01:52:54:00:b0:49:7b}
	I0116 03:07:04.015474  491150 main.go:141] libmachine: (multinode-405494) DBG | domain multinode-405494 has defined IP address 192.168.39.70 and MAC address 52:54:00:b0:49:7b in network mk-multinode-405494
	I0116 03:07:04.015616  491150 main.go:141] libmachine: (multinode-405494) Calling .GetSSHPort
	I0116 03:07:04.015858  491150 main.go:141] libmachine: (multinode-405494) Calling .GetSSHKeyPath
	I0116 03:07:04.016078  491150 main.go:141] libmachine: (multinode-405494) Calling .GetSSHUsername
	I0116 03:07:04.016234  491150 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/multinode-405494/id_rsa Username:docker}
	I0116 03:07:04.110135  491150 ssh_runner.go:195] Run: cat /etc/os-release
	I0116 03:07:04.114589  491150 command_runner.go:130] > NAME=Buildroot
	I0116 03:07:04.114611  491150 command_runner.go:130] > VERSION=2021.02.12-1-g19d536a-dirty
	I0116 03:07:04.114616  491150 command_runner.go:130] > ID=buildroot
	I0116 03:07:04.114622  491150 command_runner.go:130] > VERSION_ID=2021.02.12
	I0116 03:07:04.114627  491150 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0116 03:07:04.114872  491150 info.go:137] Remote host: Buildroot 2021.02.12
	I0116 03:07:04.114929  491150 filesync.go:126] Scanning /home/jenkins/minikube-integration/17965-468241/.minikube/addons for local assets ...
	I0116 03:07:04.115025  491150 filesync.go:126] Scanning /home/jenkins/minikube-integration/17965-468241/.minikube/files for local assets ...
	I0116 03:07:04.115113  491150 filesync.go:149] local asset: /home/jenkins/minikube-integration/17965-468241/.minikube/files/etc/ssl/certs/4754782.pem -> 4754782.pem in /etc/ssl/certs
	I0116 03:07:04.115124  491150 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-468241/.minikube/files/etc/ssl/certs/4754782.pem -> /etc/ssl/certs/4754782.pem
	I0116 03:07:04.115204  491150 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0116 03:07:04.123935  491150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/files/etc/ssl/certs/4754782.pem --> /etc/ssl/certs/4754782.pem (1708 bytes)
	I0116 03:07:04.148734  491150 start.go:303] post-start completed in 137.474605ms
	I0116 03:07:04.148772  491150 fix.go:56] fixHost completed within 22.512542884s
	I0116 03:07:04.148810  491150 main.go:141] libmachine: (multinode-405494) Calling .GetSSHHostname
	I0116 03:07:04.152008  491150 main.go:141] libmachine: (multinode-405494) DBG | domain multinode-405494 has defined MAC address 52:54:00:b0:49:7b in network mk-multinode-405494
	I0116 03:07:04.152584  491150 main.go:141] libmachine: (multinode-405494) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:49:7b", ip: ""} in network mk-multinode-405494: {Iface:virbr1 ExpiryTime:2024-01-16 04:06:53 +0000 UTC Type:0 Mac:52:54:00:b0:49:7b Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:multinode-405494 Clientid:01:52:54:00:b0:49:7b}
	I0116 03:07:04.152639  491150 main.go:141] libmachine: (multinode-405494) DBG | domain multinode-405494 has defined IP address 192.168.39.70 and MAC address 52:54:00:b0:49:7b in network mk-multinode-405494
	I0116 03:07:04.152932  491150 main.go:141] libmachine: (multinode-405494) Calling .GetSSHPort
	I0116 03:07:04.153191  491150 main.go:141] libmachine: (multinode-405494) Calling .GetSSHKeyPath
	I0116 03:07:04.153380  491150 main.go:141] libmachine: (multinode-405494) Calling .GetSSHKeyPath
	I0116 03:07:04.153561  491150 main.go:141] libmachine: (multinode-405494) Calling .GetSSHUsername
	I0116 03:07:04.153803  491150 main.go:141] libmachine: Using SSH client type: native
	I0116 03:07:04.154127  491150 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.70 22 <nil> <nil>}
	I0116 03:07:04.154139  491150 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0116 03:07:04.281001  491150 main.go:141] libmachine: SSH cmd err, output: <nil>: 1705374424.225507197
	
	I0116 03:07:04.281030  491150 fix.go:206] guest clock: 1705374424.225507197
	I0116 03:07:04.281043  491150 fix.go:219] Guest: 2024-01-16 03:07:04.225507197 +0000 UTC Remote: 2024-01-16 03:07:04.148784691 +0000 UTC m=+320.553935074 (delta=76.722506ms)
	I0116 03:07:04.281070  491150 fix.go:190] guest clock delta is within tolerance: 76.722506ms
	I0116 03:07:04.281076  491150 start.go:83] releasing machines lock for "multinode-405494", held for 22.644881721s
	I0116 03:07:04.281101  491150 main.go:141] libmachine: (multinode-405494) Calling .DriverName
	I0116 03:07:04.281426  491150 main.go:141] libmachine: (multinode-405494) Calling .GetIP
	I0116 03:07:04.284087  491150 main.go:141] libmachine: (multinode-405494) DBG | domain multinode-405494 has defined MAC address 52:54:00:b0:49:7b in network mk-multinode-405494
	I0116 03:07:04.284483  491150 main.go:141] libmachine: (multinode-405494) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:49:7b", ip: ""} in network mk-multinode-405494: {Iface:virbr1 ExpiryTime:2024-01-16 04:06:53 +0000 UTC Type:0 Mac:52:54:00:b0:49:7b Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:multinode-405494 Clientid:01:52:54:00:b0:49:7b}
	I0116 03:07:04.284517  491150 main.go:141] libmachine: (multinode-405494) DBG | domain multinode-405494 has defined IP address 192.168.39.70 and MAC address 52:54:00:b0:49:7b in network mk-multinode-405494
	I0116 03:07:04.284718  491150 main.go:141] libmachine: (multinode-405494) Calling .DriverName
	I0116 03:07:04.285306  491150 main.go:141] libmachine: (multinode-405494) Calling .DriverName
	I0116 03:07:04.285488  491150 main.go:141] libmachine: (multinode-405494) Calling .DriverName
	I0116 03:07:04.285583  491150 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0116 03:07:04.285646  491150 main.go:141] libmachine: (multinode-405494) Calling .GetSSHHostname
	I0116 03:07:04.285739  491150 ssh_runner.go:195] Run: cat /version.json
	I0116 03:07:04.285773  491150 main.go:141] libmachine: (multinode-405494) Calling .GetSSHHostname
	I0116 03:07:04.288313  491150 main.go:141] libmachine: (multinode-405494) DBG | domain multinode-405494 has defined MAC address 52:54:00:b0:49:7b in network mk-multinode-405494
	I0116 03:07:04.288522  491150 main.go:141] libmachine: (multinode-405494) DBG | domain multinode-405494 has defined MAC address 52:54:00:b0:49:7b in network mk-multinode-405494
	I0116 03:07:04.288731  491150 main.go:141] libmachine: (multinode-405494) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:49:7b", ip: ""} in network mk-multinode-405494: {Iface:virbr1 ExpiryTime:2024-01-16 04:06:53 +0000 UTC Type:0 Mac:52:54:00:b0:49:7b Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:multinode-405494 Clientid:01:52:54:00:b0:49:7b}
	I0116 03:07:04.288761  491150 main.go:141] libmachine: (multinode-405494) DBG | domain multinode-405494 has defined IP address 192.168.39.70 and MAC address 52:54:00:b0:49:7b in network mk-multinode-405494
	I0116 03:07:04.288931  491150 main.go:141] libmachine: (multinode-405494) Calling .GetSSHPort
	I0116 03:07:04.288964  491150 main.go:141] libmachine: (multinode-405494) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:49:7b", ip: ""} in network mk-multinode-405494: {Iface:virbr1 ExpiryTime:2024-01-16 04:06:53 +0000 UTC Type:0 Mac:52:54:00:b0:49:7b Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:multinode-405494 Clientid:01:52:54:00:b0:49:7b}
	I0116 03:07:04.289007  491150 main.go:141] libmachine: (multinode-405494) DBG | domain multinode-405494 has defined IP address 192.168.39.70 and MAC address 52:54:00:b0:49:7b in network mk-multinode-405494
	I0116 03:07:04.289159  491150 main.go:141] libmachine: (multinode-405494) Calling .GetSSHPort
	I0116 03:07:04.289202  491150 main.go:141] libmachine: (multinode-405494) Calling .GetSSHKeyPath
	I0116 03:07:04.289309  491150 main.go:141] libmachine: (multinode-405494) Calling .GetSSHKeyPath
	I0116 03:07:04.289393  491150 main.go:141] libmachine: (multinode-405494) Calling .GetSSHUsername
	I0116 03:07:04.289488  491150 main.go:141] libmachine: (multinode-405494) Calling .GetSSHUsername
	I0116 03:07:04.289563  491150 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/multinode-405494/id_rsa Username:docker}
	I0116 03:07:04.289642  491150 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/multinode-405494/id_rsa Username:docker}
	I0116 03:07:04.400955  491150 command_runner.go:130] > {"iso_version": "v1.32.1-1703784139-17866", "kicbase_version": "v0.0.42-1703723663-17866", "minikube_version": "v1.32.0", "commit": "eb69424d8f623d7cabea57d4395ce87adf1b5fc3"}
	I0116 03:07:04.401027  491150 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0116 03:07:04.401098  491150 ssh_runner.go:195] Run: systemctl --version
	I0116 03:07:04.406671  491150 command_runner.go:130] > systemd 247 (247)
	I0116 03:07:04.406715  491150 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I0116 03:07:04.406960  491150 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0116 03:07:04.553778  491150 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0116 03:07:04.560187  491150 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0116 03:07:04.560247  491150 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0116 03:07:04.560316  491150 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0116 03:07:04.576417  491150 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0116 03:07:04.576506  491150 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0116 03:07:04.576517  491150 start.go:475] detecting cgroup driver to use...
	I0116 03:07:04.576643  491150 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0116 03:07:04.593886  491150 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0116 03:07:04.606990  491150 docker.go:217] disabling cri-docker service (if available) ...
	I0116 03:07:04.607055  491150 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0116 03:07:04.620347  491150 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0116 03:07:04.633371  491150 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0116 03:07:04.647132  491150 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/cri-docker.socket.
	I0116 03:07:04.740689  491150 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0116 03:07:04.859566  491150 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I0116 03:07:04.859633  491150 docker.go:233] disabling docker service ...
	I0116 03:07:04.859705  491150 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0116 03:07:04.874425  491150 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0116 03:07:04.887252  491150 command_runner.go:130] ! Failed to stop docker.service: Unit docker.service not loaded.
	I0116 03:07:04.887348  491150 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0116 03:07:04.901979  491150 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I0116 03:07:05.016212  491150 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0116 03:07:05.029205  491150 command_runner.go:130] ! Unit docker.service does not exist, proceeding anyway.
	I0116 03:07:05.029522  491150 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I0116 03:07:05.140591  491150 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0116 03:07:05.154282  491150 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0116 03:07:05.172759  491150 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0116 03:07:05.173287  491150 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0116 03:07:05.173344  491150 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 03:07:05.184116  491150 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0116 03:07:05.184194  491150 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 03:07:05.194634  491150 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 03:07:05.205173  491150 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 03:07:05.215574  491150 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0116 03:07:05.226619  491150 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0116 03:07:05.236981  491150 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0116 03:07:05.237041  491150 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0116 03:07:05.237093  491150 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0116 03:07:05.251902  491150 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0116 03:07:05.262621  491150 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0116 03:07:05.388600  491150 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0116 03:07:05.550615  491150 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0116 03:07:05.550710  491150 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0116 03:07:05.555892  491150 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0116 03:07:05.555918  491150 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0116 03:07:05.555934  491150 command_runner.go:130] > Device: 16h/22d	Inode: 748         Links: 1
	I0116 03:07:05.555943  491150 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0116 03:07:05.555948  491150 command_runner.go:130] > Access: 2024-01-16 03:07:05.482563216 +0000
	I0116 03:07:05.555957  491150 command_runner.go:130] > Modify: 2024-01-16 03:07:05.482563216 +0000
	I0116 03:07:05.555965  491150 command_runner.go:130] > Change: 2024-01-16 03:07:05.482563216 +0000
	I0116 03:07:05.555971  491150 command_runner.go:130] >  Birth: -
	I0116 03:07:05.556273  491150 start.go:543] Will wait 60s for crictl version
	I0116 03:07:05.556344  491150 ssh_runner.go:195] Run: which crictl
	I0116 03:07:05.559865  491150 command_runner.go:130] > /usr/bin/crictl
	I0116 03:07:05.560209  491150 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0116 03:07:05.598580  491150 command_runner.go:130] > Version:  0.1.0
	I0116 03:07:05.598603  491150 command_runner.go:130] > RuntimeName:  cri-o
	I0116 03:07:05.598608  491150 command_runner.go:130] > RuntimeVersion:  1.24.1
	I0116 03:07:05.598615  491150 command_runner.go:130] > RuntimeApiVersion:  v1
	I0116 03:07:05.598686  491150 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0116 03:07:05.598764  491150 ssh_runner.go:195] Run: crio --version
	I0116 03:07:05.646653  491150 command_runner.go:130] > crio version 1.24.1
	I0116 03:07:05.646680  491150 command_runner.go:130] > Version:          1.24.1
	I0116 03:07:05.646687  491150 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0116 03:07:05.646698  491150 command_runner.go:130] > GitTreeState:     dirty
	I0116 03:07:05.646704  491150 command_runner.go:130] > BuildDate:        2023-12-28T22:46:29Z
	I0116 03:07:05.646709  491150 command_runner.go:130] > GoVersion:        go1.19.9
	I0116 03:07:05.646714  491150 command_runner.go:130] > Compiler:         gc
	I0116 03:07:05.646718  491150 command_runner.go:130] > Platform:         linux/amd64
	I0116 03:07:05.646723  491150 command_runner.go:130] > Linkmode:         dynamic
	I0116 03:07:05.646730  491150 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0116 03:07:05.646737  491150 command_runner.go:130] > SeccompEnabled:   true
	I0116 03:07:05.646745  491150 command_runner.go:130] > AppArmorEnabled:  false
	I0116 03:07:05.648007  491150 ssh_runner.go:195] Run: crio --version
	I0116 03:07:05.690610  491150 command_runner.go:130] > crio version 1.24.1
	I0116 03:07:05.690640  491150 command_runner.go:130] > Version:          1.24.1
	I0116 03:07:05.690659  491150 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0116 03:07:05.690667  491150 command_runner.go:130] > GitTreeState:     dirty
	I0116 03:07:05.690679  491150 command_runner.go:130] > BuildDate:        2023-12-28T22:46:29Z
	I0116 03:07:05.690687  491150 command_runner.go:130] > GoVersion:        go1.19.9
	I0116 03:07:05.690694  491150 command_runner.go:130] > Compiler:         gc
	I0116 03:07:05.690709  491150 command_runner.go:130] > Platform:         linux/amd64
	I0116 03:07:05.690718  491150 command_runner.go:130] > Linkmode:         dynamic
	I0116 03:07:05.690729  491150 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0116 03:07:05.690737  491150 command_runner.go:130] > SeccompEnabled:   true
	I0116 03:07:05.690745  491150 command_runner.go:130] > AppArmorEnabled:  false
	I0116 03:07:05.694225  491150 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0116 03:07:05.695656  491150 main.go:141] libmachine: (multinode-405494) Calling .GetIP
	I0116 03:07:05.698538  491150 main.go:141] libmachine: (multinode-405494) DBG | domain multinode-405494 has defined MAC address 52:54:00:b0:49:7b in network mk-multinode-405494
	I0116 03:07:05.698958  491150 main.go:141] libmachine: (multinode-405494) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:49:7b", ip: ""} in network mk-multinode-405494: {Iface:virbr1 ExpiryTime:2024-01-16 04:06:53 +0000 UTC Type:0 Mac:52:54:00:b0:49:7b Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:multinode-405494 Clientid:01:52:54:00:b0:49:7b}
	I0116 03:07:05.698996  491150 main.go:141] libmachine: (multinode-405494) DBG | domain multinode-405494 has defined IP address 192.168.39.70 and MAC address 52:54:00:b0:49:7b in network mk-multinode-405494
	I0116 03:07:05.699232  491150 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0116 03:07:05.703507  491150 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 03:07:05.715495  491150 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0116 03:07:05.715564  491150 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 03:07:05.754634  491150 command_runner.go:130] > {
	I0116 03:07:05.754664  491150 command_runner.go:130] >   "images": [
	I0116 03:07:05.754671  491150 command_runner.go:130] >     {
	I0116 03:07:05.754683  491150 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0116 03:07:05.754690  491150 command_runner.go:130] >       "repoTags": [
	I0116 03:07:05.754699  491150 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0116 03:07:05.754705  491150 command_runner.go:130] >       ],
	I0116 03:07:05.754712  491150 command_runner.go:130] >       "repoDigests": [
	I0116 03:07:05.754726  491150 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0116 03:07:05.754742  491150 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0116 03:07:05.754753  491150 command_runner.go:130] >       ],
	I0116 03:07:05.754763  491150 command_runner.go:130] >       "size": "750414",
	I0116 03:07:05.754773  491150 command_runner.go:130] >       "uid": {
	I0116 03:07:05.754787  491150 command_runner.go:130] >         "value": "65535"
	I0116 03:07:05.754797  491150 command_runner.go:130] >       },
	I0116 03:07:05.754813  491150 command_runner.go:130] >       "username": "",
	I0116 03:07:05.754825  491150 command_runner.go:130] >       "spec": null,
	I0116 03:07:05.754832  491150 command_runner.go:130] >       "pinned": false
	I0116 03:07:05.754836  491150 command_runner.go:130] >     }
	I0116 03:07:05.754842  491150 command_runner.go:130] >   ]
	I0116 03:07:05.754846  491150 command_runner.go:130] > }
	I0116 03:07:05.754976  491150 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0116 03:07:05.755036  491150 ssh_runner.go:195] Run: which lz4
	I0116 03:07:05.759105  491150 command_runner.go:130] > /usr/bin/lz4
	I0116 03:07:05.759141  491150 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-468241/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0116 03:07:05.759243  491150 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0116 03:07:05.763381  491150 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0116 03:07:05.763428  491150 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0116 03:07:05.763451  491150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0116 03:07:07.633341  491150 crio.go:444] Took 1.874134 seconds to copy over tarball
	I0116 03:07:07.633437  491150 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0116 03:07:10.666722  491150 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.033251096s)
	I0116 03:07:10.666756  491150 crio.go:451] Took 3.033381 seconds to extract the tarball
	I0116 03:07:10.666765  491150 ssh_runner.go:146] rm: /preloaded.tar.lz4
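	(The preload step above amounts to: copy the cached preloaded-images tarball onto the node, unpack it into /var with lz4, then delete it before re-checking the image list. A hedged Go sketch of that sequence, using a local os/exec helper in place of minikube's ssh_runner; the paths and tar flags are the ones from the log, the flow itself is an illustration, not minikube's implementation.)

	package main

	import (
		"log"
		"os/exec"
	)

	// run executes one command and aborts on failure; a local stand-in for minikube's ssh_runner.
	func run(name string, args ...string) {
		if out, err := exec.Command(name, args...).CombinedOutput(); err != nil {
			log.Fatalf("%s %v failed: %v\n%s", name, args, err, out)
		}
	}

	func main() {
		src := "/home/jenkins/minikube-integration/17965-468241/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4"
		dst := "/preloaded.tar.lz4"
		// 1. place the cached tarball on the node (scp in the real flow; cp keeps the sketch local)
		run("cp", src, dst)
		// 2. unpack the container images into /var, preserving xattrs and decompressing with lz4
		run("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability", "-I", "lz4", "-C", "/var", "-xf", dst)
		// 3. remove the tarball once extracted
		run("rm", dst)
	}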
	I0116 03:07:10.707661  491150 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 03:07:10.759029  491150 command_runner.go:130] > {
	I0116 03:07:10.759052  491150 command_runner.go:130] >   "images": [
	I0116 03:07:10.759057  491150 command_runner.go:130] >     {
	I0116 03:07:10.759064  491150 command_runner.go:130] >       "id": "c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc",
	I0116 03:07:10.759069  491150 command_runner.go:130] >       "repoTags": [
	I0116 03:07:10.759076  491150 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I0116 03:07:10.759079  491150 command_runner.go:130] >       ],
	I0116 03:07:10.759083  491150 command_runner.go:130] >       "repoDigests": [
	I0116 03:07:10.759092  491150 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I0116 03:07:10.759099  491150 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"
	I0116 03:07:10.759106  491150 command_runner.go:130] >       ],
	I0116 03:07:10.759111  491150 command_runner.go:130] >       "size": "65258016",
	I0116 03:07:10.759115  491150 command_runner.go:130] >       "uid": null,
	I0116 03:07:10.759119  491150 command_runner.go:130] >       "username": "",
	I0116 03:07:10.759130  491150 command_runner.go:130] >       "spec": null,
	I0116 03:07:10.759135  491150 command_runner.go:130] >       "pinned": false
	I0116 03:07:10.759142  491150 command_runner.go:130] >     },
	I0116 03:07:10.759164  491150 command_runner.go:130] >     {
	I0116 03:07:10.759176  491150 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0116 03:07:10.759180  491150 command_runner.go:130] >       "repoTags": [
	I0116 03:07:10.759185  491150 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0116 03:07:10.759190  491150 command_runner.go:130] >       ],
	I0116 03:07:10.759194  491150 command_runner.go:130] >       "repoDigests": [
	I0116 03:07:10.759202  491150 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0116 03:07:10.759210  491150 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0116 03:07:10.759214  491150 command_runner.go:130] >       ],
	I0116 03:07:10.759223  491150 command_runner.go:130] >       "size": "31470524",
	I0116 03:07:10.759230  491150 command_runner.go:130] >       "uid": null,
	I0116 03:07:10.759242  491150 command_runner.go:130] >       "username": "",
	I0116 03:07:10.759248  491150 command_runner.go:130] >       "spec": null,
	I0116 03:07:10.759252  491150 command_runner.go:130] >       "pinned": false
	I0116 03:07:10.759256  491150 command_runner.go:130] >     },
	I0116 03:07:10.759260  491150 command_runner.go:130] >     {
	I0116 03:07:10.759266  491150 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I0116 03:07:10.759273  491150 command_runner.go:130] >       "repoTags": [
	I0116 03:07:10.759279  491150 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I0116 03:07:10.759283  491150 command_runner.go:130] >       ],
	I0116 03:07:10.759288  491150 command_runner.go:130] >       "repoDigests": [
	I0116 03:07:10.759296  491150 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I0116 03:07:10.759303  491150 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I0116 03:07:10.759308  491150 command_runner.go:130] >       ],
	I0116 03:07:10.759312  491150 command_runner.go:130] >       "size": "53621675",
	I0116 03:07:10.759316  491150 command_runner.go:130] >       "uid": null,
	I0116 03:07:10.759320  491150 command_runner.go:130] >       "username": "",
	I0116 03:07:10.759326  491150 command_runner.go:130] >       "spec": null,
	I0116 03:07:10.759330  491150 command_runner.go:130] >       "pinned": false
	I0116 03:07:10.759338  491150 command_runner.go:130] >     },
	I0116 03:07:10.759342  491150 command_runner.go:130] >     {
	I0116 03:07:10.759347  491150 command_runner.go:130] >       "id": "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9",
	I0116 03:07:10.759352  491150 command_runner.go:130] >       "repoTags": [
	I0116 03:07:10.759357  491150 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I0116 03:07:10.759361  491150 command_runner.go:130] >       ],
	I0116 03:07:10.759365  491150 command_runner.go:130] >       "repoDigests": [
	I0116 03:07:10.759372  491150 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15",
	I0116 03:07:10.759380  491150 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"
	I0116 03:07:10.759392  491150 command_runner.go:130] >       ],
	I0116 03:07:10.759400  491150 command_runner.go:130] >       "size": "295456551",
	I0116 03:07:10.759404  491150 command_runner.go:130] >       "uid": {
	I0116 03:07:10.759410  491150 command_runner.go:130] >         "value": "0"
	I0116 03:07:10.759414  491150 command_runner.go:130] >       },
	I0116 03:07:10.759418  491150 command_runner.go:130] >       "username": "",
	I0116 03:07:10.759424  491150 command_runner.go:130] >       "spec": null,
	I0116 03:07:10.759428  491150 command_runner.go:130] >       "pinned": false
	I0116 03:07:10.759432  491150 command_runner.go:130] >     },
	I0116 03:07:10.759438  491150 command_runner.go:130] >     {
	I0116 03:07:10.759447  491150 command_runner.go:130] >       "id": "7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257",
	I0116 03:07:10.759451  491150 command_runner.go:130] >       "repoTags": [
	I0116 03:07:10.759459  491150 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.4"
	I0116 03:07:10.759462  491150 command_runner.go:130] >       ],
	I0116 03:07:10.759469  491150 command_runner.go:130] >       "repoDigests": [
	I0116 03:07:10.759476  491150 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499",
	I0116 03:07:10.759486  491150 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"
	I0116 03:07:10.759492  491150 command_runner.go:130] >       ],
	I0116 03:07:10.759497  491150 command_runner.go:130] >       "size": "127226832",
	I0116 03:07:10.759501  491150 command_runner.go:130] >       "uid": {
	I0116 03:07:10.759505  491150 command_runner.go:130] >         "value": "0"
	I0116 03:07:10.759512  491150 command_runner.go:130] >       },
	I0116 03:07:10.759516  491150 command_runner.go:130] >       "username": "",
	I0116 03:07:10.759520  491150 command_runner.go:130] >       "spec": null,
	I0116 03:07:10.759524  491150 command_runner.go:130] >       "pinned": false
	I0116 03:07:10.759533  491150 command_runner.go:130] >     },
	I0116 03:07:10.759537  491150 command_runner.go:130] >     {
	I0116 03:07:10.759547  491150 command_runner.go:130] >       "id": "d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591",
	I0116 03:07:10.759554  491150 command_runner.go:130] >       "repoTags": [
	I0116 03:07:10.759559  491150 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.4"
	I0116 03:07:10.759563  491150 command_runner.go:130] >       ],
	I0116 03:07:10.759568  491150 command_runner.go:130] >       "repoDigests": [
	I0116 03:07:10.759576  491150 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c",
	I0116 03:07:10.759586  491150 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232"
	I0116 03:07:10.759591  491150 command_runner.go:130] >       ],
	I0116 03:07:10.759595  491150 command_runner.go:130] >       "size": "123261750",
	I0116 03:07:10.759599  491150 command_runner.go:130] >       "uid": {
	I0116 03:07:10.759605  491150 command_runner.go:130] >         "value": "0"
	I0116 03:07:10.759625  491150 command_runner.go:130] >       },
	I0116 03:07:10.759631  491150 command_runner.go:130] >       "username": "",
	I0116 03:07:10.759635  491150 command_runner.go:130] >       "spec": null,
	I0116 03:07:10.759644  491150 command_runner.go:130] >       "pinned": false
	I0116 03:07:10.759649  491150 command_runner.go:130] >     },
	I0116 03:07:10.759653  491150 command_runner.go:130] >     {
	I0116 03:07:10.759659  491150 command_runner.go:130] >       "id": "83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e",
	I0116 03:07:10.759669  491150 command_runner.go:130] >       "repoTags": [
	I0116 03:07:10.759677  491150 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.4"
	I0116 03:07:10.759681  491150 command_runner.go:130] >       ],
	I0116 03:07:10.759688  491150 command_runner.go:130] >       "repoDigests": [
	I0116 03:07:10.759695  491150 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e",
	I0116 03:07:10.759702  491150 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"
	I0116 03:07:10.759708  491150 command_runner.go:130] >       ],
	I0116 03:07:10.759712  491150 command_runner.go:130] >       "size": "74749335",
	I0116 03:07:10.759716  491150 command_runner.go:130] >       "uid": null,
	I0116 03:07:10.759722  491150 command_runner.go:130] >       "username": "",
	I0116 03:07:10.759726  491150 command_runner.go:130] >       "spec": null,
	I0116 03:07:10.759733  491150 command_runner.go:130] >       "pinned": false
	I0116 03:07:10.759737  491150 command_runner.go:130] >     },
	I0116 03:07:10.759740  491150 command_runner.go:130] >     {
	I0116 03:07:10.759746  491150 command_runner.go:130] >       "id": "e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1",
	I0116 03:07:10.759753  491150 command_runner.go:130] >       "repoTags": [
	I0116 03:07:10.759758  491150 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.4"
	I0116 03:07:10.759762  491150 command_runner.go:130] >       ],
	I0116 03:07:10.759768  491150 command_runner.go:130] >       "repoDigests": [
	I0116 03:07:10.759790  491150 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba",
	I0116 03:07:10.759800  491150 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32"
	I0116 03:07:10.759804  491150 command_runner.go:130] >       ],
	I0116 03:07:10.759808  491150 command_runner.go:130] >       "size": "61551410",
	I0116 03:07:10.759812  491150 command_runner.go:130] >       "uid": {
	I0116 03:07:10.759816  491150 command_runner.go:130] >         "value": "0"
	I0116 03:07:10.759822  491150 command_runner.go:130] >       },
	I0116 03:07:10.759826  491150 command_runner.go:130] >       "username": "",
	I0116 03:07:10.759831  491150 command_runner.go:130] >       "spec": null,
	I0116 03:07:10.759835  491150 command_runner.go:130] >       "pinned": false
	I0116 03:07:10.759841  491150 command_runner.go:130] >     },
	I0116 03:07:10.759844  491150 command_runner.go:130] >     {
	I0116 03:07:10.759850  491150 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0116 03:07:10.759856  491150 command_runner.go:130] >       "repoTags": [
	I0116 03:07:10.759861  491150 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0116 03:07:10.759866  491150 command_runner.go:130] >       ],
	I0116 03:07:10.759870  491150 command_runner.go:130] >       "repoDigests": [
	I0116 03:07:10.759880  491150 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0116 03:07:10.759890  491150 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0116 03:07:10.759894  491150 command_runner.go:130] >       ],
	I0116 03:07:10.759898  491150 command_runner.go:130] >       "size": "750414",
	I0116 03:07:10.759902  491150 command_runner.go:130] >       "uid": {
	I0116 03:07:10.759909  491150 command_runner.go:130] >         "value": "65535"
	I0116 03:07:10.759913  491150 command_runner.go:130] >       },
	I0116 03:07:10.759919  491150 command_runner.go:130] >       "username": "",
	I0116 03:07:10.759924  491150 command_runner.go:130] >       "spec": null,
	I0116 03:07:10.759928  491150 command_runner.go:130] >       "pinned": false
	I0116 03:07:10.759933  491150 command_runner.go:130] >     }
	I0116 03:07:10.759936  491150 command_runner.go:130] >   ]
	I0116 03:07:10.759942  491150 command_runner.go:130] > }
	I0116 03:07:10.761116  491150 crio.go:496] all images are preloaded for cri-o runtime.
	I0116 03:07:10.761139  491150 cache_images.go:84] Images are preloaded, skipping loading
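	(The decision at crio.go:492/496 above hinges on whether the JSON from `sudo crictl images --output json`, echoed line by line in the log, contains the kube-apiserver tag for the target Kubernetes version. A small Go sketch of that check, assuming only the JSON shape visible above; it is an illustration of the idea, not minikube's code.)

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// imageList mirrors the fields of the `crictl images --output json` output shown above.
	type imageList struct {
		Images []struct {
			ID       string   `json:"id"`
			RepoTags []string `json:"repoTags"`
		} `json:"images"`
	}

	func main() {
		out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
		if err != nil {
			panic(err)
		}
		var list imageList
		if err := json.Unmarshal(out, &list); err != nil {
			panic(err)
		}
		const want = "registry.k8s.io/kube-apiserver:v1.28.4" // version checked in the log above
		preloaded := false
		for _, img := range list.Images {
			for _, tag := range img.RepoTags {
				if tag == want {
					preloaded = true
				}
			}
		}
		fmt.Println("kube images preloaded:", preloaded)
	}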
	I0116 03:07:10.761223  491150 ssh_runner.go:195] Run: crio config
	I0116 03:07:10.814710  491150 command_runner.go:130] ! time="2024-01-16 03:07:10.758809530Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I0116 03:07:10.814740  491150 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0116 03:07:10.826283  491150 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0116 03:07:10.826317  491150 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0116 03:07:10.826328  491150 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0116 03:07:10.826333  491150 command_runner.go:130] > #
	I0116 03:07:10.826342  491150 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0116 03:07:10.826351  491150 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0116 03:07:10.826359  491150 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0116 03:07:10.826376  491150 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0116 03:07:10.826387  491150 command_runner.go:130] > # reload'.
	I0116 03:07:10.826399  491150 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0116 03:07:10.826413  491150 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0116 03:07:10.826427  491150 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0116 03:07:10.826441  491150 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0116 03:07:10.826450  491150 command_runner.go:130] > [crio]
	I0116 03:07:10.826464  491150 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0116 03:07:10.826476  491150 command_runner.go:130] > # containers images, in this directory.
	I0116 03:07:10.826488  491150 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0116 03:07:10.826529  491150 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0116 03:07:10.826542  491150 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0116 03:07:10.826554  491150 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0116 03:07:10.826568  491150 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0116 03:07:10.826580  491150 command_runner.go:130] > storage_driver = "overlay"
	I0116 03:07:10.826594  491150 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0116 03:07:10.826613  491150 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0116 03:07:10.826628  491150 command_runner.go:130] > storage_option = [
	I0116 03:07:10.826640  491150 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0116 03:07:10.826649  491150 command_runner.go:130] > ]
	I0116 03:07:10.826661  491150 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0116 03:07:10.826674  491150 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0116 03:07:10.826683  491150 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0116 03:07:10.826697  491150 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0116 03:07:10.826711  491150 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0116 03:07:10.826731  491150 command_runner.go:130] > # always happen on a node reboot
	I0116 03:07:10.826743  491150 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0116 03:07:10.826753  491150 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0116 03:07:10.826767  491150 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0116 03:07:10.826793  491150 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0116 03:07:10.826808  491150 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0116 03:07:10.826825  491150 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0116 03:07:10.826842  491150 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0116 03:07:10.826852  491150 command_runner.go:130] > # internal_wipe = true
	I0116 03:07:10.826863  491150 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0116 03:07:10.826877  491150 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0116 03:07:10.826890  491150 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0116 03:07:10.826903  491150 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0116 03:07:10.826917  491150 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0116 03:07:10.826927  491150 command_runner.go:130] > [crio.api]
	I0116 03:07:10.826938  491150 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0116 03:07:10.826948  491150 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0116 03:07:10.826959  491150 command_runner.go:130] > # IP address on which the stream server will listen.
	I0116 03:07:10.826973  491150 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0116 03:07:10.826988  491150 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0116 03:07:10.826999  491150 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0116 03:07:10.827006  491150 command_runner.go:130] > # stream_port = "0"
	I0116 03:07:10.827016  491150 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0116 03:07:10.827027  491150 command_runner.go:130] > # stream_enable_tls = false
	I0116 03:07:10.827038  491150 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0116 03:07:10.827053  491150 command_runner.go:130] > # stream_idle_timeout = ""
	I0116 03:07:10.827068  491150 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0116 03:07:10.827082  491150 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0116 03:07:10.827092  491150 command_runner.go:130] > # minutes.
	I0116 03:07:10.827102  491150 command_runner.go:130] > # stream_tls_cert = ""
	I0116 03:07:10.827116  491150 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0116 03:07:10.827129  491150 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0116 03:07:10.827138  491150 command_runner.go:130] > # stream_tls_key = ""
	I0116 03:07:10.827152  491150 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0116 03:07:10.827167  491150 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0116 03:07:10.827179  491150 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0116 03:07:10.827194  491150 command_runner.go:130] > # stream_tls_ca = ""
	I0116 03:07:10.827211  491150 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0116 03:07:10.827221  491150 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0116 03:07:10.827235  491150 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0116 03:07:10.827247  491150 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0116 03:07:10.827279  491150 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0116 03:07:10.827291  491150 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0116 03:07:10.827298  491150 command_runner.go:130] > [crio.runtime]
	I0116 03:07:10.827308  491150 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0116 03:07:10.827321  491150 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0116 03:07:10.827331  491150 command_runner.go:130] > # "nofile=1024:2048"
	I0116 03:07:10.827342  491150 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0116 03:07:10.827352  491150 command_runner.go:130] > # default_ulimits = [
	I0116 03:07:10.827361  491150 command_runner.go:130] > # ]
	I0116 03:07:10.827373  491150 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0116 03:07:10.827383  491150 command_runner.go:130] > # no_pivot = false
	I0116 03:07:10.827397  491150 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0116 03:07:10.827411  491150 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0116 03:07:10.827425  491150 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0116 03:07:10.827439  491150 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0116 03:07:10.827451  491150 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0116 03:07:10.827466  491150 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0116 03:07:10.827477  491150 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0116 03:07:10.827486  491150 command_runner.go:130] > # Cgroup setting for conmon
	I0116 03:07:10.827508  491150 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0116 03:07:10.827519  491150 command_runner.go:130] > conmon_cgroup = "pod"
	I0116 03:07:10.827533  491150 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0116 03:07:10.827545  491150 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0116 03:07:10.827560  491150 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0116 03:07:10.827571  491150 command_runner.go:130] > conmon_env = [
	I0116 03:07:10.827584  491150 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0116 03:07:10.827591  491150 command_runner.go:130] > ]
	I0116 03:07:10.827604  491150 command_runner.go:130] > # Additional environment variables to set for all the
	I0116 03:07:10.827617  491150 command_runner.go:130] > # containers. These are overridden if set in the
	I0116 03:07:10.827630  491150 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0116 03:07:10.827641  491150 command_runner.go:130] > # default_env = [
	I0116 03:07:10.827652  491150 command_runner.go:130] > # ]
	I0116 03:07:10.827665  491150 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0116 03:07:10.827676  491150 command_runner.go:130] > # selinux = false
	I0116 03:07:10.827689  491150 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0116 03:07:10.827703  491150 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0116 03:07:10.827717  491150 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0116 03:07:10.827728  491150 command_runner.go:130] > # seccomp_profile = ""
	I0116 03:07:10.827744  491150 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0116 03:07:10.827757  491150 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0116 03:07:10.827769  491150 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0116 03:07:10.827780  491150 command_runner.go:130] > # which might increase security.
	I0116 03:07:10.827793  491150 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0116 03:07:10.827807  491150 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0116 03:07:10.827821  491150 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0116 03:07:10.827835  491150 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0116 03:07:10.827849  491150 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0116 03:07:10.827863  491150 command_runner.go:130] > # This option supports live configuration reload.
	I0116 03:07:10.827875  491150 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0116 03:07:10.827891  491150 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0116 03:07:10.827903  491150 command_runner.go:130] > # the cgroup blockio controller.
	I0116 03:07:10.827911  491150 command_runner.go:130] > # blockio_config_file = ""
	I0116 03:07:10.827935  491150 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0116 03:07:10.827945  491150 command_runner.go:130] > # irqbalance daemon.
	I0116 03:07:10.827955  491150 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0116 03:07:10.827969  491150 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0116 03:07:10.827982  491150 command_runner.go:130] > # This option supports live configuration reload.
	I0116 03:07:10.827992  491150 command_runner.go:130] > # rdt_config_file = ""
	I0116 03:07:10.828002  491150 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0116 03:07:10.828013  491150 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0116 03:07:10.828028  491150 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0116 03:07:10.828049  491150 command_runner.go:130] > # separate_pull_cgroup = ""
	I0116 03:07:10.828062  491150 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0116 03:07:10.828076  491150 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0116 03:07:10.828087  491150 command_runner.go:130] > # will be added.
	I0116 03:07:10.828099  491150 command_runner.go:130] > # default_capabilities = [
	I0116 03:07:10.828109  491150 command_runner.go:130] > # 	"CHOWN",
	I0116 03:07:10.828125  491150 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0116 03:07:10.828135  491150 command_runner.go:130] > # 	"FSETID",
	I0116 03:07:10.828142  491150 command_runner.go:130] > # 	"FOWNER",
	I0116 03:07:10.828149  491150 command_runner.go:130] > # 	"SETGID",
	I0116 03:07:10.828157  491150 command_runner.go:130] > # 	"SETUID",
	I0116 03:07:10.828167  491150 command_runner.go:130] > # 	"SETPCAP",
	I0116 03:07:10.828175  491150 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0116 03:07:10.828185  491150 command_runner.go:130] > # 	"KILL",
	I0116 03:07:10.828194  491150 command_runner.go:130] > # ]
	I0116 03:07:10.828206  491150 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0116 03:07:10.828220  491150 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0116 03:07:10.828230  491150 command_runner.go:130] > # default_sysctls = [
	I0116 03:07:10.828236  491150 command_runner.go:130] > # ]
	I0116 03:07:10.828248  491150 command_runner.go:130] > # List of devices on the host that a
	I0116 03:07:10.828262  491150 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0116 03:07:10.828273  491150 command_runner.go:130] > # allowed_devices = [
	I0116 03:07:10.828281  491150 command_runner.go:130] > # 	"/dev/fuse",
	I0116 03:07:10.828290  491150 command_runner.go:130] > # ]
	I0116 03:07:10.828305  491150 command_runner.go:130] > # List of additional devices, specified as
	I0116 03:07:10.828321  491150 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0116 03:07:10.828331  491150 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0116 03:07:10.828376  491150 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0116 03:07:10.828388  491150 command_runner.go:130] > # additional_devices = [
	I0116 03:07:10.828394  491150 command_runner.go:130] > # ]
	I0116 03:07:10.828403  491150 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0116 03:07:10.828413  491150 command_runner.go:130] > # cdi_spec_dirs = [
	I0116 03:07:10.828424  491150 command_runner.go:130] > # 	"/etc/cdi",
	I0116 03:07:10.828435  491150 command_runner.go:130] > # 	"/var/run/cdi",
	I0116 03:07:10.828443  491150 command_runner.go:130] > # ]
	I0116 03:07:10.828457  491150 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0116 03:07:10.828472  491150 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0116 03:07:10.828482  491150 command_runner.go:130] > # Defaults to false.
	I0116 03:07:10.828491  491150 command_runner.go:130] > # device_ownership_from_security_context = false
	I0116 03:07:10.828510  491150 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0116 03:07:10.828523  491150 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0116 03:07:10.828534  491150 command_runner.go:130] > # hooks_dir = [
	I0116 03:07:10.828551  491150 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0116 03:07:10.828561  491150 command_runner.go:130] > # ]
	I0116 03:07:10.828572  491150 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0116 03:07:10.828587  491150 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0116 03:07:10.828600  491150 command_runner.go:130] > # its default mounts from the following two files:
	I0116 03:07:10.828608  491150 command_runner.go:130] > #
	I0116 03:07:10.828620  491150 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0116 03:07:10.828634  491150 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0116 03:07:10.828646  491150 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0116 03:07:10.828657  491150 command_runner.go:130] > #
	I0116 03:07:10.828671  491150 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0116 03:07:10.828686  491150 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0116 03:07:10.828701  491150 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0116 03:07:10.828717  491150 command_runner.go:130] > #      only add mounts it finds in this file.
	I0116 03:07:10.828726  491150 command_runner.go:130] > #
	I0116 03:07:10.828734  491150 command_runner.go:130] > # default_mounts_file = ""
	I0116 03:07:10.828751  491150 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0116 03:07:10.828767  491150 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0116 03:07:10.828780  491150 command_runner.go:130] > pids_limit = 1024
	I0116 03:07:10.828795  491150 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0116 03:07:10.828809  491150 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0116 03:07:10.828823  491150 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0116 03:07:10.828846  491150 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0116 03:07:10.828856  491150 command_runner.go:130] > # log_size_max = -1
	I0116 03:07:10.828870  491150 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0116 03:07:10.828879  491150 command_runner.go:130] > # log_to_journald = false
	I0116 03:07:10.828889  491150 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0116 03:07:10.828900  491150 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0116 03:07:10.828911  491150 command_runner.go:130] > # Path to directory for container attach sockets.
	I0116 03:07:10.828919  491150 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0116 03:07:10.828929  491150 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0116 03:07:10.828939  491150 command_runner.go:130] > # bind_mount_prefix = ""
	I0116 03:07:10.828953  491150 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0116 03:07:10.828964  491150 command_runner.go:130] > # read_only = false
	I0116 03:07:10.828978  491150 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0116 03:07:10.828992  491150 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0116 03:07:10.829009  491150 command_runner.go:130] > # live configuration reload.
	I0116 03:07:10.829019  491150 command_runner.go:130] > # log_level = "info"
	I0116 03:07:10.829029  491150 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0116 03:07:10.829042  491150 command_runner.go:130] > # This option supports live configuration reload.
	I0116 03:07:10.829052  491150 command_runner.go:130] > # log_filter = ""
	I0116 03:07:10.829064  491150 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0116 03:07:10.829078  491150 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0116 03:07:10.829089  491150 command_runner.go:130] > # separated by comma.
	I0116 03:07:10.829098  491150 command_runner.go:130] > # uid_mappings = ""
	I0116 03:07:10.829112  491150 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0116 03:07:10.829125  491150 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0116 03:07:10.829133  491150 command_runner.go:130] > # separated by comma.
	I0116 03:07:10.829143  491150 command_runner.go:130] > # gid_mappings = ""
	I0116 03:07:10.829157  491150 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0116 03:07:10.829171  491150 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0116 03:07:10.829186  491150 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0116 03:07:10.829197  491150 command_runner.go:130] > # minimum_mappable_uid = -1
	I0116 03:07:10.829209  491150 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0116 03:07:10.829226  491150 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0116 03:07:10.829240  491150 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0116 03:07:10.829251  491150 command_runner.go:130] > # minimum_mappable_gid = -1
	I0116 03:07:10.829264  491150 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0116 03:07:10.829278  491150 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0116 03:07:10.829291  491150 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0116 03:07:10.829303  491150 command_runner.go:130] > # ctr_stop_timeout = 30
	I0116 03:07:10.829316  491150 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0116 03:07:10.829332  491150 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0116 03:07:10.829344  491150 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0116 03:07:10.829356  491150 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0116 03:07:10.829377  491150 command_runner.go:130] > drop_infra_ctr = false
	I0116 03:07:10.829391  491150 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0116 03:07:10.829402  491150 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0116 03:07:10.829418  491150 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0116 03:07:10.829429  491150 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0116 03:07:10.829443  491150 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0116 03:07:10.829456  491150 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0116 03:07:10.829471  491150 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0116 03:07:10.829487  491150 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0116 03:07:10.829500  491150 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0116 03:07:10.829515  491150 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0116 03:07:10.829530  491150 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0116 03:07:10.829544  491150 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0116 03:07:10.829555  491150 command_runner.go:130] > # default_runtime = "runc"
	I0116 03:07:10.829565  491150 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0116 03:07:10.829581  491150 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0116 03:07:10.829600  491150 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0116 03:07:10.829613  491150 command_runner.go:130] > # creation as a file is not desired either.
	I0116 03:07:10.829630  491150 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0116 03:07:10.829642  491150 command_runner.go:130] > # the hostname is being managed dynamically.
	I0116 03:07:10.829654  491150 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0116 03:07:10.829660  491150 command_runner.go:130] > # ]
	I0116 03:07:10.829672  491150 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0116 03:07:10.829686  491150 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0116 03:07:10.829701  491150 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0116 03:07:10.829719  491150 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0116 03:07:10.829727  491150 command_runner.go:130] > #
	I0116 03:07:10.829737  491150 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0116 03:07:10.829749  491150 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0116 03:07:10.829760  491150 command_runner.go:130] > #  runtime_type = "oci"
	I0116 03:07:10.829772  491150 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0116 03:07:10.829782  491150 command_runner.go:130] > #  privileged_without_host_devices = false
	I0116 03:07:10.829791  491150 command_runner.go:130] > #  allowed_annotations = []
	I0116 03:07:10.829800  491150 command_runner.go:130] > # Where:
	I0116 03:07:10.829812  491150 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0116 03:07:10.829827  491150 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0116 03:07:10.829841  491150 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0116 03:07:10.829855  491150 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0116 03:07:10.829865  491150 command_runner.go:130] > #   in $PATH.
	I0116 03:07:10.829879  491150 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0116 03:07:10.829891  491150 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0116 03:07:10.829902  491150 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0116 03:07:10.829912  491150 command_runner.go:130] > #   state.
	I0116 03:07:10.829932  491150 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0116 03:07:10.829946  491150 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0116 03:07:10.829960  491150 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0116 03:07:10.829973  491150 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0116 03:07:10.829987  491150 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0116 03:07:10.830001  491150 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0116 03:07:10.830013  491150 command_runner.go:130] > #   The currently recognized values are:
	I0116 03:07:10.830028  491150 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0116 03:07:10.830043  491150 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0116 03:07:10.830057  491150 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0116 03:07:10.830070  491150 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0116 03:07:10.830084  491150 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0116 03:07:10.830099  491150 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0116 03:07:10.830114  491150 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0116 03:07:10.830128  491150 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0116 03:07:10.830140  491150 command_runner.go:130] > #   should be moved to the container's cgroup
	I0116 03:07:10.830152  491150 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0116 03:07:10.830162  491150 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0116 03:07:10.830173  491150 command_runner.go:130] > runtime_type = "oci"
	I0116 03:07:10.830190  491150 command_runner.go:130] > runtime_root = "/run/runc"
	I0116 03:07:10.830202  491150 command_runner.go:130] > runtime_config_path = ""
	I0116 03:07:10.830213  491150 command_runner.go:130] > monitor_path = ""
	I0116 03:07:10.830221  491150 command_runner.go:130] > monitor_cgroup = ""
	I0116 03:07:10.830232  491150 command_runner.go:130] > monitor_exec_cgroup = ""
	I0116 03:07:10.830244  491150 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0116 03:07:10.830255  491150 command_runner.go:130] > # running containers
	I0116 03:07:10.830265  491150 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0116 03:07:10.830276  491150 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0116 03:07:10.830356  491150 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0116 03:07:10.830377  491150 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I0116 03:07:10.830385  491150 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0116 03:07:10.830396  491150 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0116 03:07:10.830406  491150 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0116 03:07:10.830414  491150 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0116 03:07:10.830425  491150 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0116 03:07:10.830436  491150 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I0116 03:07:10.830454  491150 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0116 03:07:10.830467  491150 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0116 03:07:10.830482  491150 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0116 03:07:10.830502  491150 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0116 03:07:10.830517  491150 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0116 03:07:10.830523  491150 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0116 03:07:10.830538  491150 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0116 03:07:10.830548  491150 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0116 03:07:10.830557  491150 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0116 03:07:10.830566  491150 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0116 03:07:10.830570  491150 command_runner.go:130] > # Example:
	I0116 03:07:10.830577  491150 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0116 03:07:10.830582  491150 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0116 03:07:10.830589  491150 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0116 03:07:10.830595  491150 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0116 03:07:10.830601  491150 command_runner.go:130] > # cpuset = 0
	I0116 03:07:10.830606  491150 command_runner.go:130] > # cpushares = "0-1"
	I0116 03:07:10.830612  491150 command_runner.go:130] > # Where:
	I0116 03:07:10.830619  491150 command_runner.go:130] > # The workload name is workload-type.
	I0116 03:07:10.830628  491150 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0116 03:07:10.830634  491150 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0116 03:07:10.830642  491150 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0116 03:07:10.830650  491150 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0116 03:07:10.830658  491150 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0116 03:07:10.830661  491150 command_runner.go:130] > # 
	I0116 03:07:10.830671  491150 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0116 03:07:10.830676  491150 command_runner.go:130] > #
	I0116 03:07:10.830682  491150 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0116 03:07:10.830691  491150 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0116 03:07:10.830699  491150 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0116 03:07:10.830708  491150 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0116 03:07:10.830717  491150 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0116 03:07:10.830723  491150 command_runner.go:130] > [crio.image]
	I0116 03:07:10.830729  491150 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0116 03:07:10.830736  491150 command_runner.go:130] > # default_transport = "docker://"
	I0116 03:07:10.830742  491150 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0116 03:07:10.830754  491150 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0116 03:07:10.830761  491150 command_runner.go:130] > # global_auth_file = ""
	I0116 03:07:10.830766  491150 command_runner.go:130] > # The image used to instantiate infra containers.
	I0116 03:07:10.830774  491150 command_runner.go:130] > # This option supports live configuration reload.
	I0116 03:07:10.830781  491150 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0116 03:07:10.830788  491150 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0116 03:07:10.830796  491150 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0116 03:07:10.830802  491150 command_runner.go:130] > # This option supports live configuration reload.
	I0116 03:07:10.830808  491150 command_runner.go:130] > # pause_image_auth_file = ""
	I0116 03:07:10.830814  491150 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0116 03:07:10.830822  491150 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0116 03:07:10.830829  491150 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0116 03:07:10.830834  491150 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0116 03:07:10.830838  491150 command_runner.go:130] > # pause_command = "/pause"
	I0116 03:07:10.830844  491150 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0116 03:07:10.830850  491150 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0116 03:07:10.830856  491150 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0116 03:07:10.830862  491150 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0116 03:07:10.830869  491150 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0116 03:07:10.830873  491150 command_runner.go:130] > # signature_policy = ""
	I0116 03:07:10.830879  491150 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0116 03:07:10.830884  491150 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0116 03:07:10.830889  491150 command_runner.go:130] > # changing them here.
	I0116 03:07:10.830893  491150 command_runner.go:130] > # insecure_registries = [
	I0116 03:07:10.830896  491150 command_runner.go:130] > # ]
	I0116 03:07:10.830904  491150 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0116 03:07:10.830909  491150 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0116 03:07:10.830913  491150 command_runner.go:130] > # image_volumes = "mkdir"
	I0116 03:07:10.830918  491150 command_runner.go:130] > # Temporary directory to use for storing big files
	I0116 03:07:10.830922  491150 command_runner.go:130] > # big_files_temporary_dir = ""
	I0116 03:07:10.830927  491150 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0116 03:07:10.830931  491150 command_runner.go:130] > # CNI plugins.
	I0116 03:07:10.830935  491150 command_runner.go:130] > [crio.network]
	I0116 03:07:10.830940  491150 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0116 03:07:10.830945  491150 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0116 03:07:10.830949  491150 command_runner.go:130] > # cni_default_network = ""
	I0116 03:07:10.830957  491150 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0116 03:07:10.830962  491150 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0116 03:07:10.830967  491150 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0116 03:07:10.830971  491150 command_runner.go:130] > # plugin_dirs = [
	I0116 03:07:10.830975  491150 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0116 03:07:10.830978  491150 command_runner.go:130] > # ]
	I0116 03:07:10.830983  491150 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0116 03:07:10.830988  491150 command_runner.go:130] > [crio.metrics]
	I0116 03:07:10.830992  491150 command_runner.go:130] > # Globally enable or disable metrics support.
	I0116 03:07:10.830996  491150 command_runner.go:130] > enable_metrics = true
	I0116 03:07:10.831001  491150 command_runner.go:130] > # Specify enabled metrics collectors.
	I0116 03:07:10.831006  491150 command_runner.go:130] > # Per default all metrics are enabled.
	I0116 03:07:10.831012  491150 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0116 03:07:10.831018  491150 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0116 03:07:10.831027  491150 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0116 03:07:10.831031  491150 command_runner.go:130] > # metrics_collectors = [
	I0116 03:07:10.831034  491150 command_runner.go:130] > # 	"operations",
	I0116 03:07:10.831039  491150 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0116 03:07:10.831048  491150 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0116 03:07:10.831052  491150 command_runner.go:130] > # 	"operations_errors",
	I0116 03:07:10.831056  491150 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0116 03:07:10.831060  491150 command_runner.go:130] > # 	"image_pulls_by_name",
	I0116 03:07:10.831064  491150 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0116 03:07:10.831070  491150 command_runner.go:130] > # 	"image_pulls_failures",
	I0116 03:07:10.831075  491150 command_runner.go:130] > # 	"image_pulls_successes",
	I0116 03:07:10.831082  491150 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0116 03:07:10.831086  491150 command_runner.go:130] > # 	"image_layer_reuse",
	I0116 03:07:10.831090  491150 command_runner.go:130] > # 	"containers_oom_total",
	I0116 03:07:10.831096  491150 command_runner.go:130] > # 	"containers_oom",
	I0116 03:07:10.831101  491150 command_runner.go:130] > # 	"processes_defunct",
	I0116 03:07:10.831107  491150 command_runner.go:130] > # 	"operations_total",
	I0116 03:07:10.831111  491150 command_runner.go:130] > # 	"operations_latency_seconds",
	I0116 03:07:10.831118  491150 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0116 03:07:10.831122  491150 command_runner.go:130] > # 	"operations_errors_total",
	I0116 03:07:10.831127  491150 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0116 03:07:10.831131  491150 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0116 03:07:10.831142  491150 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0116 03:07:10.831150  491150 command_runner.go:130] > # 	"image_pulls_success_total",
	I0116 03:07:10.831154  491150 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0116 03:07:10.831161  491150 command_runner.go:130] > # 	"containers_oom_count_total",
	I0116 03:07:10.831165  491150 command_runner.go:130] > # ]
	I0116 03:07:10.831172  491150 command_runner.go:130] > # The port on which the metrics server will listen.
	I0116 03:07:10.831176  491150 command_runner.go:130] > # metrics_port = 9090
	I0116 03:07:10.831184  491150 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0116 03:07:10.831188  491150 command_runner.go:130] > # metrics_socket = ""
	I0116 03:07:10.831195  491150 command_runner.go:130] > # The certificate for the secure metrics server.
	I0116 03:07:10.831201  491150 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0116 03:07:10.831209  491150 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0116 03:07:10.831214  491150 command_runner.go:130] > # certificate on any modification event.
	I0116 03:07:10.831223  491150 command_runner.go:130] > # metrics_cert = ""
	I0116 03:07:10.831230  491150 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0116 03:07:10.831238  491150 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0116 03:07:10.831242  491150 command_runner.go:130] > # metrics_key = ""
	I0116 03:07:10.831250  491150 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0116 03:07:10.831257  491150 command_runner.go:130] > [crio.tracing]
	I0116 03:07:10.831264  491150 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0116 03:07:10.831271  491150 command_runner.go:130] > # enable_tracing = false
	I0116 03:07:10.831276  491150 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0116 03:07:10.831283  491150 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0116 03:07:10.831288  491150 command_runner.go:130] > # Number of samples to collect per million spans.
	I0116 03:07:10.831295  491150 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0116 03:07:10.831301  491150 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0116 03:07:10.831310  491150 command_runner.go:130] > [crio.stats]
	I0116 03:07:10.831318  491150 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0116 03:07:10.831326  491150 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0116 03:07:10.831333  491150 command_runner.go:130] > # stats_collection_period = 0
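The block above is minikube's dump of the CRI-O configuration on the node. For cross-checking individual values outside the test harness, a minimal sketch (Go, using the github.com/BurntSushi/toml decoder; the path /etc/crio/crio.conf and the narrow struct are illustrative assumptions, not minikube code) that reads back pause_image and enable_metrics:

	package main

	import (
		"fmt"
		"log"

		"github.com/BurntSushi/toml"
	)

	// crioConfig mirrors only the keys we want to inspect; any other
	// keys in the file are simply ignored by the decoder.
	type crioConfig struct {
		Crio struct {
			Image struct {
				PauseImage string `toml:"pause_image"`
			} `toml:"image"`
			Metrics struct {
				EnableMetrics bool `toml:"enable_metrics"`
			} `toml:"metrics"`
		} `toml:"crio"`
	}

	func main() {
		var cfg crioConfig
		// The path is an assumption; point it at the CRI-O config you want to check.
		if _, err := toml.DecodeFile("/etc/crio/crio.conf", &cfg); err != nil {
			log.Fatalf("decoding crio.conf: %v", err)
		}
		fmt.Println("pause_image:   ", cfg.Crio.Image.PauseImage)
		fmt.Println("enable_metrics:", cfg.Crio.Metrics.EnableMetrics)
	}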
	I0116 03:07:10.831435  491150 cni.go:84] Creating CNI manager for ""
	I0116 03:07:10.831447  491150 cni.go:136] 3 nodes found, recommending kindnet
	I0116 03:07:10.831468  491150 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0116 03:07:10.831488  491150 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.70 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-405494 NodeName:multinode-405494 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.70"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.70 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/et
c/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0116 03:07:10.831650  491150 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.70
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-405494"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.70
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.70"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
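The kubeadm config above is a multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that is written to /var/tmp/minikube/kubeadm.yaml.new a few lines further down. A small sketch (Go, gopkg.in/yaml.v3; the file path is an assumption) that walks each document in such a stream and prints its kind and apiVersion as a quick well-formedness check:

	package main

	import (
		"errors"
		"fmt"
		"io"
		"log"
		"os"

		"gopkg.in/yaml.v3"
	)

	func main() {
		// Path is an assumption; point it at whichever generated config you want to check.
		f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
		if err != nil {
			log.Fatal(err)
		}
		defer f.Close()

		dec := yaml.NewDecoder(f)
		for {
			var doc struct {
				APIVersion string `yaml:"apiVersion"`
				Kind       string `yaml:"kind"`
			}
			if err := dec.Decode(&doc); err != nil {
				if errors.Is(err, io.EOF) {
					break // no more documents in the stream
				}
				log.Fatalf("malformed YAML document: %v", err)
			}
			fmt.Printf("%s / %s\n", doc.Kind, doc.APIVersion)
		}
	}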
	
	I0116 03:07:10.831724  491150 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-405494 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.70
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-405494 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0116 03:07:10.831781  491150 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0116 03:07:10.843037  491150 command_runner.go:130] > kubeadm
	I0116 03:07:10.843064  491150 command_runner.go:130] > kubectl
	I0116 03:07:10.843070  491150 command_runner.go:130] > kubelet
	I0116 03:07:10.843103  491150 binaries.go:44] Found k8s binaries, skipping transfer
	I0116 03:07:10.843165  491150 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0116 03:07:10.853688  491150 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I0116 03:07:10.873822  491150 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0116 03:07:10.892149  491150 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2100 bytes)
	I0116 03:07:10.911363  491150 ssh_runner.go:195] Run: grep 192.168.39.70	control-plane.minikube.internal$ /etc/hosts
	I0116 03:07:10.915830  491150 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.70	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 03:07:10.931178  491150 certs.go:56] Setting up /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/multinode-405494 for IP: 192.168.39.70
	I0116 03:07:10.931267  491150 certs.go:190] acquiring lock for shared ca certs: {Name:mkab5aeea3e0ed69f3592dc8f418e07c6075b130 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:07:10.931483  491150 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17965-468241/.minikube/ca.key
	I0116 03:07:10.931554  491150 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17965-468241/.minikube/proxy-client-ca.key
	I0116 03:07:10.931654  491150 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/multinode-405494/client.key
	I0116 03:07:10.931730  491150 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/multinode-405494/apiserver.key.5467de6f
	I0116 03:07:10.931769  491150 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/multinode-405494/proxy-client.key
	I0116 03:07:10.931782  491150 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/multinode-405494/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0116 03:07:10.931799  491150 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/multinode-405494/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0116 03:07:10.931814  491150 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/multinode-405494/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0116 03:07:10.931827  491150 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/multinode-405494/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0116 03:07:10.931841  491150 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-468241/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0116 03:07:10.931858  491150 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-468241/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0116 03:07:10.931870  491150 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-468241/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0116 03:07:10.931886  491150 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-468241/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0116 03:07:10.931944  491150 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/475478.pem (1338 bytes)
	W0116 03:07:10.931970  491150 certs.go:433] ignoring /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/475478_empty.pem, impossibly tiny 0 bytes
	I0116 03:07:10.931978  491150 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca-key.pem (1679 bytes)
	I0116 03:07:10.931999  491150 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem (1078 bytes)
	I0116 03:07:10.932024  491150 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/cert.pem (1123 bytes)
	I0116 03:07:10.932089  491150 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/key.pem (1679 bytes)
	I0116 03:07:10.932140  491150 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17965-468241/.minikube/files/etc/ssl/certs/4754782.pem (1708 bytes)
	I0116 03:07:10.932175  491150 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-468241/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0116 03:07:10.932194  491150 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/475478.pem -> /usr/share/ca-certificates/475478.pem
	I0116 03:07:10.932211  491150 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-468241/.minikube/files/etc/ssl/certs/4754782.pem -> /usr/share/ca-certificates/4754782.pem
	I0116 03:07:10.932978  491150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/multinode-405494/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0116 03:07:10.958552  491150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/multinode-405494/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0116 03:07:10.984134  491150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/multinode-405494/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0116 03:07:11.009134  491150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/multinode-405494/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0116 03:07:11.037453  491150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0116 03:07:11.062844  491150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0116 03:07:11.088477  491150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0116 03:07:11.115099  491150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0116 03:07:11.141025  491150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0116 03:07:11.166862  491150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/certs/475478.pem --> /usr/share/ca-certificates/475478.pem (1338 bytes)
	I0116 03:07:11.191774  491150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/files/etc/ssl/certs/4754782.pem --> /usr/share/ca-certificates/4754782.pem (1708 bytes)
	I0116 03:07:11.216935  491150 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0116 03:07:11.234755  491150 ssh_runner.go:195] Run: openssl version
	I0116 03:07:11.240560  491150 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0116 03:07:11.240642  491150 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4754782.pem && ln -fs /usr/share/ca-certificates/4754782.pem /etc/ssl/certs/4754782.pem"
	I0116 03:07:11.251763  491150 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4754782.pem
	I0116 03:07:11.257249  491150 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jan 16 02:44 /usr/share/ca-certificates/4754782.pem
	I0116 03:07:11.257286  491150 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 16 02:44 /usr/share/ca-certificates/4754782.pem
	I0116 03:07:11.257332  491150 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4754782.pem
	I0116 03:07:11.263780  491150 command_runner.go:130] > 3ec20f2e
	I0116 03:07:11.263884  491150 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4754782.pem /etc/ssl/certs/3ec20f2e.0"
	I0116 03:07:11.275375  491150 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0116 03:07:11.286855  491150 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0116 03:07:11.292606  491150 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jan 16 02:35 /usr/share/ca-certificates/minikubeCA.pem
	I0116 03:07:11.292645  491150 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 16 02:35 /usr/share/ca-certificates/minikubeCA.pem
	I0116 03:07:11.292696  491150 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0116 03:07:11.298516  491150 command_runner.go:130] > b5213941
	I0116 03:07:11.298596  491150 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0116 03:07:11.310001  491150 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/475478.pem && ln -fs /usr/share/ca-certificates/475478.pem /etc/ssl/certs/475478.pem"
	I0116 03:07:11.320818  491150 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/475478.pem
	I0116 03:07:11.326041  491150 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jan 16 02:44 /usr/share/ca-certificates/475478.pem
	I0116 03:07:11.326076  491150 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 16 02:44 /usr/share/ca-certificates/475478.pem
	I0116 03:07:11.326124  491150 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/475478.pem
	I0116 03:07:11.332643  491150 command_runner.go:130] > 51391683
	I0116 03:07:11.332745  491150 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/475478.pem /etc/ssl/certs/51391683.0"
	I0116 03:07:11.343907  491150 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0116 03:07:11.348935  491150 command_runner.go:130] > ca.crt
	I0116 03:07:11.348958  491150 command_runner.go:130] > ca.key
	I0116 03:07:11.348964  491150 command_runner.go:130] > healthcheck-client.crt
	I0116 03:07:11.348972  491150 command_runner.go:130] > healthcheck-client.key
	I0116 03:07:11.348976  491150 command_runner.go:130] > peer.crt
	I0116 03:07:11.348980  491150 command_runner.go:130] > peer.key
	I0116 03:07:11.348984  491150 command_runner.go:130] > server.crt
	I0116 03:07:11.348990  491150 command_runner.go:130] > server.key
	I0116 03:07:11.349074  491150 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0116 03:07:11.355322  491150 command_runner.go:130] > Certificate will not expire
	I0116 03:07:11.355415  491150 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0116 03:07:11.361415  491150 command_runner.go:130] > Certificate will not expire
	I0116 03:07:11.361679  491150 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0116 03:07:11.367670  491150 command_runner.go:130] > Certificate will not expire
	I0116 03:07:11.368055  491150 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0116 03:07:11.373966  491150 command_runner.go:130] > Certificate will not expire
	I0116 03:07:11.374182  491150 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0116 03:07:11.380331  491150 command_runner.go:130] > Certificate will not expire
	I0116 03:07:11.380415  491150 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0116 03:07:11.386413  491150 command_runner.go:130] > Certificate will not expire
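Each `openssl x509 -noout -in <cert> -checkend 86400` run above asks whether the certificate stays valid for at least another 24 hours. A rough standard-library Go equivalent for a single certificate (the path is an assumption):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"log"
		"os"
		"time"
	)

	func main() {
		// Path is an assumption; any PEM-encoded certificate works.
		data, err := os.ReadFile("/var/lib/minikube/certs/apiserver.crt")
		if err != nil {
			log.Fatal(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			log.Fatal("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			log.Fatal(err)
		}
		// Same question as `openssl x509 -checkend 86400`: does the cert expire within 24h?
		if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
			fmt.Println("Certificate will expire")
		} else {
			fmt.Println("Certificate will not expire")
		}
	}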
	I0116 03:07:11.386689  491150 kubeadm.go:404] StartCluster: {Name:multinode-405494 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion
:v1.28.4 ClusterName:multinode-405494 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.70 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.32 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.182 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false
ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Dis
ableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 03:07:11.386876  491150 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0116 03:07:11.386949  491150 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0116 03:07:11.426396  491150 cri.go:89] found id: ""
	I0116 03:07:11.426496  491150 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0116 03:07:11.436924  491150 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0116 03:07:11.436959  491150 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0116 03:07:11.436967  491150 command_runner.go:130] > /var/lib/minikube/etcd:
	I0116 03:07:11.436973  491150 command_runner.go:130] > member
	I0116 03:07:11.437006  491150 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0116 03:07:11.437016  491150 kubeadm.go:636] restartCluster start
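The restart decision above hinges purely on whether the kubelet config files and the etcd data directory already exist on the node. A tiny sketch of the same existence probe (Go standard library; the path list is copied from the `sudo ls` invocation above):

	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		// Paths taken from the `sudo ls ...` probe in the log above.
		paths := []string{
			"/var/lib/kubelet/kubeadm-flags.env",
			"/var/lib/kubelet/config.yaml",
			"/var/lib/minikube/etcd",
		}
		existing := 0
		for _, p := range paths {
			if _, err := os.Stat(p); err == nil {
				existing++
				fmt.Println("found:", p)
			} else {
				fmt.Println("missing:", p)
			}
		}
		if existing > 0 {
			fmt.Println("existing configuration detected; a cluster restart would be attempted")
		}
	}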
	I0116 03:07:11.437070  491150 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0116 03:07:11.447630  491150 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:07:11.448276  491150 kubeconfig.go:92] found "multinode-405494" server: "https://192.168.39.70:8443"
	I0116 03:07:11.448789  491150 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17965-468241/kubeconfig
	I0116 03:07:11.449081  491150 kapi.go:59] client config for multinode-405494: &rest.Config{Host:"https://192.168.39.70:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17965-468241/.minikube/profiles/multinode-405494/client.crt", KeyFile:"/home/jenkins/minikube-integration/17965-468241/.minikube/profiles/multinode-405494/client.key", CAFile:"/home/jenkins/minikube-integration/17965-468241/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), N
extProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c19be0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0116 03:07:11.449781  491150 cert_rotation.go:137] Starting client certificate rotation controller
	I0116 03:07:11.450017  491150 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0116 03:07:11.459508  491150 api_server.go:166] Checking apiserver status ...
	I0116 03:07:11.459574  491150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:07:11.470975  491150 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:07:11.960243  491150 api_server.go:166] Checking apiserver status ...
	I0116 03:07:11.960409  491150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:07:11.971895  491150 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:07:12.460501  491150 api_server.go:166] Checking apiserver status ...
	I0116 03:07:12.460608  491150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:07:12.473624  491150 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:07:12.960139  491150 api_server.go:166] Checking apiserver status ...
	I0116 03:07:12.960223  491150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:07:12.972631  491150 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:07:13.460265  491150 api_server.go:166] Checking apiserver status ...
	I0116 03:07:13.460390  491150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:07:13.472569  491150 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:07:13.959589  491150 api_server.go:166] Checking apiserver status ...
	I0116 03:07:13.959692  491150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:07:13.971541  491150 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:07:14.460128  491150 api_server.go:166] Checking apiserver status ...
	I0116 03:07:14.460245  491150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:07:14.471617  491150 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:07:14.960228  491150 api_server.go:166] Checking apiserver status ...
	I0116 03:07:14.960373  491150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:07:14.971343  491150 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:07:15.459869  491150 api_server.go:166] Checking apiserver status ...
	I0116 03:07:15.459957  491150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:07:15.470899  491150 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:07:15.960610  491150 api_server.go:166] Checking apiserver status ...
	I0116 03:07:15.960719  491150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:07:15.972372  491150 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:07:16.459848  491150 api_server.go:166] Checking apiserver status ...
	I0116 03:07:16.459960  491150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:07:16.470679  491150 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:07:16.959628  491150 api_server.go:166] Checking apiserver status ...
	I0116 03:07:16.959735  491150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:07:16.971015  491150 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:07:17.459599  491150 api_server.go:166] Checking apiserver status ...
	I0116 03:07:17.459692  491150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:07:17.471414  491150 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:07:17.959958  491150 api_server.go:166] Checking apiserver status ...
	I0116 03:07:17.960088  491150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:07:17.971646  491150 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:07:18.460280  491150 api_server.go:166] Checking apiserver status ...
	I0116 03:07:18.460370  491150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:07:18.472129  491150 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:07:18.959711  491150 api_server.go:166] Checking apiserver status ...
	I0116 03:07:18.959837  491150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:07:18.972462  491150 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:07:19.459987  491150 api_server.go:166] Checking apiserver status ...
	I0116 03:07:19.460109  491150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:07:19.471433  491150 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:07:19.960021  491150 api_server.go:166] Checking apiserver status ...
	I0116 03:07:19.960175  491150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:07:19.971406  491150 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:07:20.459917  491150 api_server.go:166] Checking apiserver status ...
	I0116 03:07:20.460022  491150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:07:20.471581  491150 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:07:20.960266  491150 api_server.go:166] Checking apiserver status ...
	I0116 03:07:20.960376  491150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:07:20.971672  491150 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:07:21.460564  491150 api_server.go:166] Checking apiserver status ...
	I0116 03:07:21.460733  491150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:07:21.472455  491150 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:07:21.472528  491150 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0116 03:07:21.472560  491150 kubeadm.go:1135] stopping kube-system containers ...
	I0116 03:07:21.472591  491150 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0116 03:07:21.472670  491150 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0116 03:07:21.512815  491150 cri.go:89] found id: ""
	I0116 03:07:21.512898  491150 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0116 03:07:21.528351  491150 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0116 03:07:21.537389  491150 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0116 03:07:21.537421  491150 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0116 03:07:21.537429  491150 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0116 03:07:21.537444  491150 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0116 03:07:21.537484  491150 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0116 03:07:21.537545  491150 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0116 03:07:21.547825  491150 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0116 03:07:21.547852  491150 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:07:21.680603  491150 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0116 03:07:21.681203  491150 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0116 03:07:21.681736  491150 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0116 03:07:21.682265  491150 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0116 03:07:21.683085  491150 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0116 03:07:21.683567  491150 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0116 03:07:21.684526  491150 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0116 03:07:21.685063  491150 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0116 03:07:21.685558  491150 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0116 03:07:21.686011  491150 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0116 03:07:21.686407  491150 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0116 03:07:21.687154  491150 command_runner.go:130] > [certs] Using the existing "sa" key
	I0116 03:07:21.688697  491150 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:07:21.743568  491150 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0116 03:07:21.854448  491150 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0116 03:07:21.959753  491150 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0116 03:07:22.095679  491150 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0116 03:07:22.387325  491150 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0116 03:07:22.390211  491150 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:07:22.465703  491150 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0116 03:07:22.467541  491150 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0116 03:07:22.467562  491150 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0116 03:07:22.599453  491150 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:07:22.664648  491150 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0116 03:07:22.664672  491150 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0116 03:07:22.674555  491150 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0116 03:07:22.677948  491150 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0116 03:07:22.682423  491150 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:07:22.755459  491150 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0116 03:07:22.763827  491150 api_server.go:52] waiting for apiserver process to appear ...
	I0116 03:07:22.763921  491150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:07:23.264179  491150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:07:23.764649  491150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:07:24.264043  491150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:07:24.764672  491150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:07:25.264438  491150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:07:25.298096  491150 command_runner.go:130] > 1072
	I0116 03:07:25.298147  491150 api_server.go:72] duration metric: took 2.534325256s to wait for apiserver process to appear ...
	I0116 03:07:25.298157  491150 api_server.go:88] waiting for apiserver healthz status ...
	I0116 03:07:25.298178  491150 api_server.go:253] Checking apiserver healthz at https://192.168.39.70:8443/healthz ...
	I0116 03:07:25.298681  491150 api_server.go:269] stopped: https://192.168.39.70:8443/healthz: Get "https://192.168.39.70:8443/healthz": dial tcp 192.168.39.70:8443: connect: connection refused
	I0116 03:07:25.798261  491150 api_server.go:253] Checking apiserver healthz at https://192.168.39.70:8443/healthz ...
	I0116 03:07:28.827865  491150 api_server.go:279] https://192.168.39.70:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0116 03:07:28.827906  491150 api_server.go:103] status: https://192.168.39.70:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0116 03:07:28.827925  491150 api_server.go:253] Checking apiserver healthz at https://192.168.39.70:8443/healthz ...
	I0116 03:07:28.886280  491150 api_server.go:279] https://192.168.39.70:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0116 03:07:28.886313  491150 api_server.go:103] status: https://192.168.39.70:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0116 03:07:29.298847  491150 api_server.go:253] Checking apiserver healthz at https://192.168.39.70:8443/healthz ...
	I0116 03:07:29.304728  491150 api_server.go:279] https://192.168.39.70:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0116 03:07:29.304761  491150 api_server.go:103] status: https://192.168.39.70:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0116 03:07:29.798273  491150 api_server.go:253] Checking apiserver healthz at https://192.168.39.70:8443/healthz ...
	I0116 03:07:29.804003  491150 api_server.go:279] https://192.168.39.70:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0116 03:07:29.804049  491150 api_server.go:103] status: https://192.168.39.70:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0116 03:07:30.298588  491150 api_server.go:253] Checking apiserver healthz at https://192.168.39.70:8443/healthz ...
	I0116 03:07:30.304544  491150 api_server.go:279] https://192.168.39.70:8443/healthz returned 200:
	ok
	I0116 03:07:30.304683  491150 round_trippers.go:463] GET https://192.168.39.70:8443/version
	I0116 03:07:30.304692  491150 round_trippers.go:469] Request Headers:
	I0116 03:07:30.304701  491150 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:07:30.304707  491150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 03:07:30.316240  491150 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0116 03:07:30.316266  491150 round_trippers.go:577] Response Headers:
	I0116 03:07:30.316276  491150 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:07:30 GMT
	I0116 03:07:30.316284  491150 round_trippers.go:580]     Audit-Id: 9b6bc209-24e8-441b-8c88-5615ecaca456
	I0116 03:07:30.316290  491150 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:07:30.316297  491150 round_trippers.go:580]     Content-Type: application/json
	I0116 03:07:30.316304  491150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 03:07:30.316312  491150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 03:07:30.316320  491150 round_trippers.go:580]     Content-Length: 264
	I0116 03:07:30.316374  491150 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0116 03:07:30.316500  491150 api_server.go:141] control plane version: v1.28.4
	I0116 03:07:30.316528  491150 api_server.go:131] duration metric: took 5.018363592s to wait for apiserver health ...
	I0116 03:07:30.316542  491150 cni.go:84] Creating CNI manager for ""
	I0116 03:07:30.316552  491150 cni.go:136] 3 nodes found, recommending kindnet
	I0116 03:07:30.319007  491150 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0116 03:07:30.320661  491150 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0116 03:07:30.326313  491150 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0116 03:07:30.326356  491150 command_runner.go:130] >   Size: 2685752   	Blocks: 5248       IO Block: 4096   regular file
	I0116 03:07:30.326368  491150 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I0116 03:07:30.326378  491150 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0116 03:07:30.326386  491150 command_runner.go:130] > Access: 2024-01-16 03:06:53.963563216 +0000
	I0116 03:07:30.326393  491150 command_runner.go:130] > Modify: 2023-12-28 22:53:36.000000000 +0000
	I0116 03:07:30.326401  491150 command_runner.go:130] > Change: 2024-01-16 03:06:52.021563216 +0000
	I0116 03:07:30.326408  491150 command_runner.go:130] >  Birth: -
	I0116 03:07:30.326514  491150 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0116 03:07:30.326543  491150 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0116 03:07:30.351395  491150 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0116 03:07:31.487051  491150 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0116 03:07:31.491889  491150 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0116 03:07:31.504556  491150 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0116 03:07:31.560894  491150 command_runner.go:130] > daemonset.apps/kindnet configured
	I0116 03:07:31.567720  491150 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.216273063s)
	I0116 03:07:31.567775  491150 system_pods.go:43] waiting for kube-system pods to appear ...
	I0116 03:07:31.567914  491150 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods
	I0116 03:07:31.567929  491150 round_trippers.go:469] Request Headers:
	I0116 03:07:31.567942  491150 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:07:31.567952  491150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 03:07:31.578603  491150 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0116 03:07:31.578636  491150 round_trippers.go:577] Response Headers:
	I0116 03:07:31.578646  491150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 03:07:31.578652  491150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 03:07:31.578657  491150 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:07:31 GMT
	I0116 03:07:31.578662  491150 round_trippers.go:580]     Audit-Id: cff6d802-7765-4db7-836e-7128339573b1
	I0116 03:07:31.578667  491150 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:07:31.578673  491150 round_trippers.go:580]     Content-Type: application/json
	I0116 03:07:31.580465  491150 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"815"},"items":[{"metadata":{"name":"coredns-5dd5756b68-vwqvk","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"096151e2-c59c-4dcf-bd29-2029901902c9","resourceVersion":"798","creationTimestamp":"2024-01-16T02:57:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c3967b3a-de7c-4ccd-af3a-dd2a9e8b71f8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:57:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c3967b3a-de7c-4ccd-af3a-dd2a9e8b71f8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 83099 chars]
	I0116 03:07:31.584674  491150 system_pods.go:59] 12 kube-system pods found
	I0116 03:07:31.584713  491150 system_pods.go:61] "coredns-5dd5756b68-vwqvk" [096151e2-c59c-4dcf-bd29-2029901902c9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0116 03:07:31.584726  491150 system_pods.go:61] "etcd-multinode-405494" [3f839da7-c0c0-4546-8848-1557cbf50722] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0116 03:07:31.584737  491150 system_pods.go:61] "kindnet-6zhtt" [cb3b1d86-ad5f-404c-84f7-f51f255843fc] Running
	I0116 03:07:31.584752  491150 system_pods.go:61] "kindnet-8t86n" [4d421823-26dd-467d-94d4-28387c8e3793] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0116 03:07:31.584776  491150 system_pods.go:61] "kindnet-ddd2h" [9a8dfd54-cf69-402a-9233-af3a696abaa0] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0116 03:07:31.584784  491150 system_pods.go:61] "kube-apiserver-multinode-405494" [e242d3cf-6cf7-4b47-8d3e-a83e484554a1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0116 03:07:31.584793  491150 system_pods.go:61] "kube-controller-manager-multinode-405494" [0833b412-8909-4660-8e16-19701683358e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0116 03:07:31.584808  491150 system_pods.go:61] "kube-proxy-gg8kv" [32841b88-1b06-46ed-b4ce-f73301ec0a85] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0116 03:07:31.584814  491150 system_pods.go:61] "kube-proxy-ghscp" [62b6191a-df8d-444d-9176-3f265fd2084d] Running
	I0116 03:07:31.584819  491150 system_pods.go:61] "kube-proxy-m46rb" [960fb4d4-836f-42c5-9d56-03daae9f5a12] Running
	I0116 03:07:31.584827  491150 system_pods.go:61] "kube-scheduler-multinode-405494" [70c980cb-4ff9-45f5-960f-d8afa355229c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0116 03:07:31.584838  491150 system_pods.go:61] "storage-provisioner" [c6f12cfa-46b3-4840-a7e2-258c063a19c2] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0116 03:07:31.584859  491150 system_pods.go:74] duration metric: took 17.074254ms to wait for pod list to return data ...
	I0116 03:07:31.584872  491150 node_conditions.go:102] verifying NodePressure condition ...
	I0116 03:07:31.584962  491150 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes
	I0116 03:07:31.584976  491150 round_trippers.go:469] Request Headers:
	I0116 03:07:31.584987  491150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 03:07:31.585007  491150 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:07:31.590041  491150 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0116 03:07:31.590065  491150 round_trippers.go:577] Response Headers:
	I0116 03:07:31.590135  491150 round_trippers.go:580]     Audit-Id: b4b2957b-5ec8-4d76-b0e0-8a17a8cf0fbe
	I0116 03:07:31.590149  491150 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:07:31.590154  491150 round_trippers.go:580]     Content-Type: application/json
	I0116 03:07:31.590161  491150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 03:07:31.590170  491150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 03:07:31.590179  491150 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:07:31 GMT
	I0116 03:07:31.590497  491150 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"817"},"items":[{"metadata":{"name":"multinode-405494","uid":"396eb6bf-1dc3-46c5-8016-dbf8af754fa2","resourceVersion":"732","creationTimestamp":"2024-01-16T02:57:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-405494","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-405494","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_57_12_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 15905 chars]
	I0116 03:07:31.591702  491150 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 03:07:31.591735  491150 node_conditions.go:123] node cpu capacity is 2
	I0116 03:07:31.591748  491150 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 03:07:31.591752  491150 node_conditions.go:123] node cpu capacity is 2
	I0116 03:07:31.591758  491150 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 03:07:31.591764  491150 node_conditions.go:123] node cpu capacity is 2
	I0116 03:07:31.591770  491150 node_conditions.go:105] duration metric: took 6.888993ms to run NodePressure ...
	I0116 03:07:31.591793  491150 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:07:32.028100  491150 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0116 03:07:32.028130  491150 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0116 03:07:32.028169  491150 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0116 03:07:32.028283  491150 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%!D(MISSING)control-plane
	I0116 03:07:32.028297  491150 round_trippers.go:469] Request Headers:
	I0116 03:07:32.028308  491150 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:07:32.028314  491150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 03:07:32.033379  491150 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0116 03:07:32.033408  491150 round_trippers.go:577] Response Headers:
	I0116 03:07:32.033419  491150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 03:07:32.033428  491150 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:07:32 GMT
	I0116 03:07:32.033437  491150 round_trippers.go:580]     Audit-Id: 59907844-173a-4229-b2d8-3fedb268eeff
	I0116 03:07:32.033446  491150 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:07:32.033455  491150 round_trippers.go:580]     Content-Type: application/json
	I0116 03:07:32.033463  491150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 03:07:32.034254  491150 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"838"},"items":[{"metadata":{"name":"etcd-multinode-405494","namespace":"kube-system","uid":"3f839da7-c0c0-4546-8848-1557cbf50722","resourceVersion":"794","creationTimestamp":"2024-01-16T02:57:12Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.70:2379","kubernetes.io/config.hash":"3a9e67d0e87fe64d9531234ab850034d","kubernetes.io/config.mirror":"3a9e67d0e87fe64d9531234ab850034d","kubernetes.io/config.seen":"2024-01-16T02:57:11.711592151Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-405494","uid":"396eb6bf-1dc3-46c5-8016-dbf8af754fa2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:57:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations
":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:ku [truncated 28859 chars]
	I0116 03:07:32.035356  491150 kubeadm.go:787] kubelet initialised
	I0116 03:07:32.035380  491150 kubeadm.go:788] duration metric: took 7.200831ms waiting for restarted kubelet to initialise ...
	I0116 03:07:32.035387  491150 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 03:07:32.035481  491150 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods
	I0116 03:07:32.035497  491150 round_trippers.go:469] Request Headers:
	I0116 03:07:32.035510  491150 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:07:32.035522  491150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 03:07:32.039290  491150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 03:07:32.039311  491150 round_trippers.go:577] Response Headers:
	I0116 03:07:32.039320  491150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 03:07:32.039328  491150 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:07:32 GMT
	I0116 03:07:32.039336  491150 round_trippers.go:580]     Audit-Id: d6070e34-4159-4a5a-a627-36215b201d8d
	I0116 03:07:32.039344  491150 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:07:32.039352  491150 round_trippers.go:580]     Content-Type: application/json
	I0116 03:07:32.039359  491150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 03:07:32.041355  491150 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"838"},"items":[{"metadata":{"name":"coredns-5dd5756b68-vwqvk","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"096151e2-c59c-4dcf-bd29-2029901902c9","resourceVersion":"798","creationTimestamp":"2024-01-16T02:57:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c3967b3a-de7c-4ccd-af3a-dd2a9e8b71f8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:57:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c3967b3a-de7c-4ccd-af3a-dd2a9e8b71f8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 83579 chars]
	I0116 03:07:32.044073  491150 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-vwqvk" in "kube-system" namespace to be "Ready" ...
	I0116 03:07:32.044190  491150 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-vwqvk
	I0116 03:07:32.044202  491150 round_trippers.go:469] Request Headers:
	I0116 03:07:32.044210  491150 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:07:32.044216  491150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 03:07:32.053476  491150 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0116 03:07:32.053510  491150 round_trippers.go:577] Response Headers:
	I0116 03:07:32.053521  491150 round_trippers.go:580]     Content-Type: application/json
	I0116 03:07:32.053527  491150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 03:07:32.053532  491150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 03:07:32.053538  491150 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:07:32 GMT
	I0116 03:07:32.053543  491150 round_trippers.go:580]     Audit-Id: b39e1dea-b709-430e-9941-c49812373fda
	I0116 03:07:32.053548  491150 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:07:32.054297  491150 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-vwqvk","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"096151e2-c59c-4dcf-bd29-2029901902c9","resourceVersion":"798","creationTimestamp":"2024-01-16T02:57:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c3967b3a-de7c-4ccd-af3a-dd2a9e8b71f8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:57:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c3967b3a-de7c-4ccd-af3a-dd2a9e8b71f8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6369 chars]
	I0116 03:07:32.054872  491150 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/multinode-405494
	I0116 03:07:32.054894  491150 round_trippers.go:469] Request Headers:
	I0116 03:07:32.054902  491150 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:07:32.054908  491150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 03:07:32.058326  491150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 03:07:32.058355  491150 round_trippers.go:577] Response Headers:
	I0116 03:07:32.058364  491150 round_trippers.go:580]     Audit-Id: a405dc5b-c13a-4a70-b37d-88d17e62e04d
	I0116 03:07:32.058370  491150 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:07:32.058414  491150 round_trippers.go:580]     Content-Type: application/json
	I0116 03:07:32.058425  491150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 03:07:32.058435  491150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 03:07:32.058444  491150 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:07:32 GMT
	I0116 03:07:32.058668  491150 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-405494","uid":"396eb6bf-1dc3-46c5-8016-dbf8af754fa2","resourceVersion":"732","creationTimestamp":"2024-01-16T02:57:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-405494","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-405494","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_57_12_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:57:07Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I0116 03:07:32.059007  491150 pod_ready.go:97] node "multinode-405494" hosting pod "coredns-5dd5756b68-vwqvk" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-405494" has status "Ready":"False"
	I0116 03:07:32.059031  491150 pod_ready.go:81] duration metric: took 14.930109ms waiting for pod "coredns-5dd5756b68-vwqvk" in "kube-system" namespace to be "Ready" ...
	E0116 03:07:32.059044  491150 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-405494" hosting pod "coredns-5dd5756b68-vwqvk" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-405494" has status "Ready":"False"
	I0116 03:07:32.059058  491150 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-405494" in "kube-system" namespace to be "Ready" ...
	I0116 03:07:32.059146  491150 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-405494
	I0116 03:07:32.059159  491150 round_trippers.go:469] Request Headers:
	I0116 03:07:32.059169  491150 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:07:32.059175  491150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 03:07:32.061417  491150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:07:32.061436  491150 round_trippers.go:577] Response Headers:
	I0116 03:07:32.061443  491150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 03:07:32.061448  491150 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:07:32 GMT
	I0116 03:07:32.061454  491150 round_trippers.go:580]     Audit-Id: 93227eb6-0009-400c-97cd-faeec54954ff
	I0116 03:07:32.061462  491150 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:07:32.061470  491150 round_trippers.go:580]     Content-Type: application/json
	I0116 03:07:32.061489  491150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 03:07:32.061603  491150 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-405494","namespace":"kube-system","uid":"3f839da7-c0c0-4546-8848-1557cbf50722","resourceVersion":"794","creationTimestamp":"2024-01-16T02:57:12Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.70:2379","kubernetes.io/config.hash":"3a9e67d0e87fe64d9531234ab850034d","kubernetes.io/config.mirror":"3a9e67d0e87fe64d9531234ab850034d","kubernetes.io/config.seen":"2024-01-16T02:57:11.711592151Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-405494","uid":"396eb6bf-1dc3-46c5-8016-dbf8af754fa2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:57:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6067 chars]
	I0116 03:07:32.061990  491150 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/multinode-405494
	I0116 03:07:32.062002  491150 round_trippers.go:469] Request Headers:
	I0116 03:07:32.062013  491150 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:07:32.062019  491150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 03:07:32.067411  491150 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0116 03:07:32.067435  491150 round_trippers.go:577] Response Headers:
	I0116 03:07:32.067442  491150 round_trippers.go:580]     Audit-Id: 9a0af7d0-aa00-4da0-831f-baea1837f5ae
	I0116 03:07:32.067448  491150 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:07:32.067453  491150 round_trippers.go:580]     Content-Type: application/json
	I0116 03:07:32.067458  491150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 03:07:32.067463  491150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 03:07:32.067468  491150 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:07:32 GMT
	I0116 03:07:32.069105  491150 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-405494","uid":"396eb6bf-1dc3-46c5-8016-dbf8af754fa2","resourceVersion":"732","creationTimestamp":"2024-01-16T02:57:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-405494","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-405494","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_57_12_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:57:07Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I0116 03:07:32.069459  491150 pod_ready.go:97] node "multinode-405494" hosting pod "etcd-multinode-405494" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-405494" has status "Ready":"False"
	I0116 03:07:32.069483  491150 pod_ready.go:81] duration metric: took 10.415484ms waiting for pod "etcd-multinode-405494" in "kube-system" namespace to be "Ready" ...
	E0116 03:07:32.069492  491150 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-405494" hosting pod "etcd-multinode-405494" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-405494" has status "Ready":"False"
	I0116 03:07:32.069507  491150 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-405494" in "kube-system" namespace to be "Ready" ...
	I0116 03:07:32.069568  491150 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-405494
	I0116 03:07:32.069576  491150 round_trippers.go:469] Request Headers:
	I0116 03:07:32.069582  491150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 03:07:32.069590  491150 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:07:32.074281  491150 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0116 03:07:32.074307  491150 round_trippers.go:577] Response Headers:
	I0116 03:07:32.074315  491150 round_trippers.go:580]     Audit-Id: 798823aa-5d1a-4742-b115-d1b774299bc6
	I0116 03:07:32.074320  491150 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:07:32.074325  491150 round_trippers.go:580]     Content-Type: application/json
	I0116 03:07:32.074330  491150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 03:07:32.074335  491150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 03:07:32.074343  491150 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:07:32 GMT
	I0116 03:07:32.075116  491150 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-405494","namespace":"kube-system","uid":"e242d3cf-6cf7-4b47-8d3e-a83e484554a1","resourceVersion":"795","creationTimestamp":"2024-01-16T02:57:09Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.70:8443","kubernetes.io/config.hash":"04bffd1a6d3ee0aae068c41e37830c9b","kubernetes.io/config.mirror":"04bffd1a6d3ee0aae068c41e37830c9b","kubernetes.io/config.seen":"2024-01-16T02:57:02.078602539Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-405494","uid":"396eb6bf-1dc3-46c5-8016-dbf8af754fa2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:57:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7624 chars]
	I0116 03:07:32.075559  491150 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/multinode-405494
	I0116 03:07:32.075571  491150 round_trippers.go:469] Request Headers:
	I0116 03:07:32.075578  491150 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:07:32.075584  491150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 03:07:32.078192  491150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:07:32.078213  491150 round_trippers.go:577] Response Headers:
	I0116 03:07:32.078223  491150 round_trippers.go:580]     Audit-Id: 23d817c3-41df-447d-8908-a51d8e27f9ad
	I0116 03:07:32.078232  491150 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:07:32.078241  491150 round_trippers.go:580]     Content-Type: application/json
	I0116 03:07:32.078250  491150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 03:07:32.078257  491150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 03:07:32.078264  491150 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:07:32 GMT
	I0116 03:07:32.078457  491150 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-405494","uid":"396eb6bf-1dc3-46c5-8016-dbf8af754fa2","resourceVersion":"732","creationTimestamp":"2024-01-16T02:57:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-405494","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-405494","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_57_12_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:57:07Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I0116 03:07:32.078842  491150 pod_ready.go:97] node "multinode-405494" hosting pod "kube-apiserver-multinode-405494" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-405494" has status "Ready":"False"
	I0116 03:07:32.078867  491150 pod_ready.go:81] duration metric: took 9.350648ms waiting for pod "kube-apiserver-multinode-405494" in "kube-system" namespace to be "Ready" ...
	E0116 03:07:32.078881  491150 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-405494" hosting pod "kube-apiserver-multinode-405494" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-405494" has status "Ready":"False"
	I0116 03:07:32.078898  491150 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-405494" in "kube-system" namespace to be "Ready" ...
	I0116 03:07:32.078971  491150 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-405494
	I0116 03:07:32.078986  491150 round_trippers.go:469] Request Headers:
	I0116 03:07:32.078997  491150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 03:07:32.079005  491150 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:07:32.092065  491150 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0116 03:07:32.092092  491150 round_trippers.go:577] Response Headers:
	I0116 03:07:32.092100  491150 round_trippers.go:580]     Audit-Id: f4427eca-e2ad-49f0-81a0-524d063bb3bb
	I0116 03:07:32.092106  491150 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:07:32.092112  491150 round_trippers.go:580]     Content-Type: application/json
	I0116 03:07:32.092117  491150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 03:07:32.092122  491150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 03:07:32.092127  491150 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:07:32 GMT
	I0116 03:07:32.093025  491150 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-405494","namespace":"kube-system","uid":"0833b412-8909-4660-8e16-19701683358e","resourceVersion":"796","creationTimestamp":"2024-01-16T02:57:12Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"9eb78063d6e219f3cc5940494bdab4b2","kubernetes.io/config.mirror":"9eb78063d6e219f3cc5940494bdab4b2","kubernetes.io/config.seen":"2024-01-16T02:57:11.711589408Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-405494","uid":"396eb6bf-1dc3-46c5-8016-dbf8af754fa2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:57:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7212 chars]
	I0116 03:07:32.093504  491150 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/multinode-405494
	I0116 03:07:32.093522  491150 round_trippers.go:469] Request Headers:
	I0116 03:07:32.093530  491150 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:07:32.093536  491150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 03:07:32.096725  491150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 03:07:32.096747  491150 round_trippers.go:577] Response Headers:
	I0116 03:07:32.096755  491150 round_trippers.go:580]     Audit-Id: 4b68f408-ddde-48be-9baf-c9c0fdd3cb08
	I0116 03:07:32.096760  491150 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:07:32.096765  491150 round_trippers.go:580]     Content-Type: application/json
	I0116 03:07:32.096770  491150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 03:07:32.096775  491150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 03:07:32.096780  491150 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:07:32 GMT
	I0116 03:07:32.096975  491150 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-405494","uid":"396eb6bf-1dc3-46c5-8016-dbf8af754fa2","resourceVersion":"732","creationTimestamp":"2024-01-16T02:57:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-405494","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-405494","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_57_12_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:57:07Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I0116 03:07:32.097334  491150 pod_ready.go:97] node "multinode-405494" hosting pod "kube-controller-manager-multinode-405494" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-405494" has status "Ready":"False"
	I0116 03:07:32.097356  491150 pod_ready.go:81] duration metric: took 18.448451ms waiting for pod "kube-controller-manager-multinode-405494" in "kube-system" namespace to be "Ready" ...
	E0116 03:07:32.097366  491150 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-405494" hosting pod "kube-controller-manager-multinode-405494" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-405494" has status "Ready":"False"
	I0116 03:07:32.097372  491150 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-gg8kv" in "kube-system" namespace to be "Ready" ...
	I0116 03:07:32.228774  491150 request.go:629] Waited for 131.322806ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gg8kv
	I0116 03:07:32.228874  491150 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gg8kv
	I0116 03:07:32.228881  491150 round_trippers.go:469] Request Headers:
	I0116 03:07:32.228889  491150 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:07:32.228899  491150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 03:07:32.233292  491150 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0116 03:07:32.233316  491150 round_trippers.go:577] Response Headers:
	I0116 03:07:32.233324  491150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 03:07:32.233329  491150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 03:07:32.233334  491150 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:07:32 GMT
	I0116 03:07:32.233347  491150 round_trippers.go:580]     Audit-Id: e2518a64-ff27-47db-8737-29870a0d9cba
	I0116 03:07:32.233357  491150 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:07:32.233368  491150 round_trippers.go:580]     Content-Type: application/json
	I0116 03:07:32.233533  491150 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-gg8kv","generateName":"kube-proxy-","namespace":"kube-system","uid":"32841b88-1b06-46ed-b4ce-f73301ec0a85","resourceVersion":"838","creationTimestamp":"2024-01-16T02:57:23Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"fdad84fd-2af6-4360-a899-0f70f257935c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:57:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fdad84fd-2af6-4360-a899-0f70f257935c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5514 chars]
	I0116 03:07:32.429353  491150 request.go:629] Waited for 195.368163ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.70:8443/api/v1/nodes/multinode-405494
	I0116 03:07:32.429431  491150 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/multinode-405494
	I0116 03:07:32.429438  491150 round_trippers.go:469] Request Headers:
	I0116 03:07:32.429446  491150 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:07:32.429455  491150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 03:07:32.433541  491150 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0116 03:07:32.433572  491150 round_trippers.go:577] Response Headers:
	I0116 03:07:32.433584  491150 round_trippers.go:580]     Audit-Id: 6599d5f3-4989-4746-acae-5e154186307f
	I0116 03:07:32.433592  491150 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:07:32.433600  491150 round_trippers.go:580]     Content-Type: application/json
	I0116 03:07:32.433608  491150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 03:07:32.433616  491150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 03:07:32.433624  491150 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:07:32 GMT
	I0116 03:07:32.433813  491150 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-405494","uid":"396eb6bf-1dc3-46c5-8016-dbf8af754fa2","resourceVersion":"732","creationTimestamp":"2024-01-16T02:57:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-405494","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-405494","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_57_12_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:57:07Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I0116 03:07:32.434261  491150 pod_ready.go:97] node "multinode-405494" hosting pod "kube-proxy-gg8kv" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-405494" has status "Ready":"False"
	I0116 03:07:32.434292  491150 pod_ready.go:81] duration metric: took 336.911159ms waiting for pod "kube-proxy-gg8kv" in "kube-system" namespace to be "Ready" ...
	E0116 03:07:32.434309  491150 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-405494" hosting pod "kube-proxy-gg8kv" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-405494" has status "Ready":"False"
	I0116 03:07:32.434319  491150 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-ghscp" in "kube-system" namespace to be "Ready" ...
	I0116 03:07:32.629355  491150 request.go:629] Waited for 194.918172ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ghscp
	I0116 03:07:32.629427  491150 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ghscp
	I0116 03:07:32.629433  491150 round_trippers.go:469] Request Headers:
	I0116 03:07:32.629441  491150 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:07:32.629451  491150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 03:07:32.632601  491150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 03:07:32.632631  491150 round_trippers.go:577] Response Headers:
	I0116 03:07:32.632639  491150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 03:07:32.632645  491150 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:07:32 GMT
	I0116 03:07:32.632651  491150 round_trippers.go:580]     Audit-Id: 1bae2f17-9e63-4a1a-ab09-b8391db7f92e
	I0116 03:07:32.632664  491150 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:07:32.632669  491150 round_trippers.go:580]     Content-Type: application/json
	I0116 03:07:32.632675  491150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 03:07:32.632844  491150 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-ghscp","generateName":"kube-proxy-","namespace":"kube-system","uid":"62b6191a-df8d-444d-9176-3f265fd2084d","resourceVersion":"708","creationTimestamp":"2024-01-16T02:58:49Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"fdad84fd-2af6-4360-a899-0f70f257935c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:58:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fdad84fd-2af6-4360-a899-0f70f257935c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5526 chars]
	I0116 03:07:32.828757  491150 request.go:629] Waited for 195.401681ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.70:8443/api/v1/nodes/multinode-405494-m03
	I0116 03:07:32.828879  491150 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/multinode-405494-m03
	I0116 03:07:32.828900  491150 round_trippers.go:469] Request Headers:
	I0116 03:07:32.828913  491150 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:07:32.828934  491150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 03:07:32.831783  491150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:07:32.831809  491150 round_trippers.go:577] Response Headers:
	I0116 03:07:32.831825  491150 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:07:32 GMT
	I0116 03:07:32.831836  491150 round_trippers.go:580]     Audit-Id: ef169b51-8bad-4176-9acf-04731c69a8b6
	I0116 03:07:32.831844  491150 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:07:32.831853  491150 round_trippers.go:580]     Content-Type: application/json
	I0116 03:07:32.831860  491150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 03:07:32.831868  491150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 03:07:32.832064  491150 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-405494-m03","uid":"f017bb37-2198-45f8-8920-a0a10585c3e0","resourceVersion":"720","creationTimestamp":"2024-01-16T02:59:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-405494-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-405494","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T02_59_33_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:59:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 3516 chars]
	I0116 03:07:32.832399  491150 pod_ready.go:92] pod "kube-proxy-ghscp" in "kube-system" namespace has status "Ready":"True"
	I0116 03:07:32.832422  491150 pod_ready.go:81] duration metric: took 398.092287ms waiting for pod "kube-proxy-ghscp" in "kube-system" namespace to be "Ready" ...
	I0116 03:07:32.832436  491150 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-m46rb" in "kube-system" namespace to be "Ready" ...
	I0116 03:07:33.028454  491150 request.go:629] Waited for 195.929144ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/kube-proxy-m46rb
	I0116 03:07:33.028523  491150 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/kube-proxy-m46rb
	I0116 03:07:33.028528  491150 round_trippers.go:469] Request Headers:
	I0116 03:07:33.028536  491150 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:07:33.028542  491150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 03:07:33.031391  491150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:07:33.031419  491150 round_trippers.go:577] Response Headers:
	I0116 03:07:33.031427  491150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 03:07:33.031432  491150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 03:07:33.031437  491150 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:07:33 GMT
	I0116 03:07:33.031442  491150 round_trippers.go:580]     Audit-Id: 7923620d-3d1c-41e0-a03c-0b0228aab2dc
	I0116 03:07:33.031447  491150 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:07:33.031452  491150 round_trippers.go:580]     Content-Type: application/json
	I0116 03:07:33.031722  491150 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-m46rb","generateName":"kube-proxy-","namespace":"kube-system","uid":"960fb4d4-836f-42c5-9d56-03daae9f5a12","resourceVersion":"501","creationTimestamp":"2024-01-16T02:58:02Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"fdad84fd-2af6-4360-a899-0f70f257935c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:58:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fdad84fd-2af6-4360-a899-0f70f257935c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5522 chars]
	I0116 03:07:33.228634  491150 request.go:629] Waited for 196.387526ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.70:8443/api/v1/nodes/multinode-405494-m02
	I0116 03:07:33.228733  491150 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/multinode-405494-m02
	I0116 03:07:33.228738  491150 round_trippers.go:469] Request Headers:
	I0116 03:07:33.228746  491150 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:07:33.228755  491150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 03:07:33.231590  491150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:07:33.231624  491150 round_trippers.go:577] Response Headers:
	I0116 03:07:33.231636  491150 round_trippers.go:580]     Audit-Id: ac4404b8-8922-487b-a48a-2848f2b89737
	I0116 03:07:33.231644  491150 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:07:33.231656  491150 round_trippers.go:580]     Content-Type: application/json
	I0116 03:07:33.231664  491150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 03:07:33.231673  491150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 03:07:33.231681  491150 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:07:33 GMT
	I0116 03:07:33.231826  491150 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-405494-m02","uid":"db05f602-4c14-49d7-93c1-517732722bbd","resourceVersion":"699","creationTimestamp":"2024-01-16T02:58:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-405494-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-405494","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T02_59_33_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:58:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 4235 chars]
	I0116 03:07:33.232233  491150 pod_ready.go:92] pod "kube-proxy-m46rb" in "kube-system" namespace has status "Ready":"True"
	I0116 03:07:33.232267  491150 pod_ready.go:81] duration metric: took 399.822215ms waiting for pod "kube-proxy-m46rb" in "kube-system" namespace to be "Ready" ...
	I0116 03:07:33.232281  491150 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-405494" in "kube-system" namespace to be "Ready" ...
	I0116 03:07:33.429201  491150 request.go:629] Waited for 196.817088ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-405494
	I0116 03:07:33.429298  491150 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-405494
	I0116 03:07:33.429306  491150 round_trippers.go:469] Request Headers:
	I0116 03:07:33.429323  491150 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:07:33.429335  491150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 03:07:33.432649  491150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 03:07:33.432679  491150 round_trippers.go:577] Response Headers:
	I0116 03:07:33.432690  491150 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:07:33.432699  491150 round_trippers.go:580]     Content-Type: application/json
	I0116 03:07:33.432707  491150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 03:07:33.432721  491150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 03:07:33.432730  491150 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:07:33 GMT
	I0116 03:07:33.432738  491150 round_trippers.go:580]     Audit-Id: f85d9cdf-b78d-4da3-b226-9aad07c98a7e
	I0116 03:07:33.432903  491150 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-405494","namespace":"kube-system","uid":"70c980cb-4ff9-45f5-960f-d8afa355229c","resourceVersion":"793","creationTimestamp":"2024-01-16T02:57:11Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"65069d20830c0b10a3d28746871e48c2","kubernetes.io/config.mirror":"65069d20830c0b10a3d28746871e48c2","kubernetes.io/config.seen":"2024-01-16T02:57:02.078604553Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-405494","uid":"396eb6bf-1dc3-46c5-8016-dbf8af754fa2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:57:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4924 chars]
	I0116 03:07:33.628345  491150 request.go:629] Waited for 194.959521ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.70:8443/api/v1/nodes/multinode-405494
	I0116 03:07:33.628421  491150 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/multinode-405494
	I0116 03:07:33.628428  491150 round_trippers.go:469] Request Headers:
	I0116 03:07:33.628439  491150 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:07:33.628447  491150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 03:07:33.631041  491150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:07:33.631074  491150 round_trippers.go:577] Response Headers:
	I0116 03:07:33.631087  491150 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:07:33 GMT
	I0116 03:07:33.631096  491150 round_trippers.go:580]     Audit-Id: 649a9328-2f5a-4313-ba9e-b8d0c07dfbe8
	I0116 03:07:33.631108  491150 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:07:33.631118  491150 round_trippers.go:580]     Content-Type: application/json
	I0116 03:07:33.631130  491150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 03:07:33.631137  491150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 03:07:33.631676  491150 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-405494","uid":"396eb6bf-1dc3-46c5-8016-dbf8af754fa2","resourceVersion":"732","creationTimestamp":"2024-01-16T02:57:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-405494","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-405494","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_57_12_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:57:07Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I0116 03:07:33.632187  491150 pod_ready.go:97] node "multinode-405494" hosting pod "kube-scheduler-multinode-405494" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-405494" has status "Ready":"False"
	I0116 03:07:33.632221  491150 pod_ready.go:81] duration metric: took 399.922666ms waiting for pod "kube-scheduler-multinode-405494" in "kube-system" namespace to be "Ready" ...
	E0116 03:07:33.632235  491150 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-405494" hosting pod "kube-scheduler-multinode-405494" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-405494" has status "Ready":"False"
	I0116 03:07:33.632256  491150 pod_ready.go:38] duration metric: took 1.596846233s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 03:07:33.632278  491150 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0116 03:07:33.653681  491150 command_runner.go:130] > -16
	I0116 03:07:33.654259  491150 ops.go:34] apiserver oom_adj: -16
	I0116 03:07:33.654270  491150 kubeadm.go:640] restartCluster took 22.217247723s
	I0116 03:07:33.654278  491150 kubeadm.go:406] StartCluster complete in 22.26760039s
	I0116 03:07:33.654306  491150 settings.go:142] acquiring lock: {Name:mk3ded99c06d285b174e76622bb0741c8dd2d8fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:07:33.654380  491150 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17965-468241/kubeconfig
	I0116 03:07:33.655077  491150 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17965-468241/kubeconfig: {Name:mkac3c63bbcba9b4fa1bea3480797757d703fb9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:07:33.655277  491150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0116 03:07:33.655428  491150 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0116 03:07:33.657361  491150 out.go:177] * Enabled addons: 
	I0116 03:07:33.655583  491150 config.go:182] Loaded profile config "multinode-405494": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 03:07:33.655624  491150 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17965-468241/kubeconfig
	I0116 03:07:33.658907  491150 addons.go:505] enable addons completed in 3.476445ms: enabled=[]
	I0116 03:07:33.657668  491150 kapi.go:59] client config for multinode-405494: &rest.Config{Host:"https://192.168.39.70:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17965-468241/.minikube/profiles/multinode-405494/client.crt", KeyFile:"/home/jenkins/minikube-integration/17965-468241/.minikube/profiles/multinode-405494/client.key", CAFile:"/home/jenkins/minikube-integration/17965-468241/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), N
extProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c19be0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0116 03:07:33.659290  491150 round_trippers.go:463] GET https://192.168.39.70:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0116 03:07:33.659304  491150 round_trippers.go:469] Request Headers:
	I0116 03:07:33.659315  491150 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:07:33.659323  491150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 03:07:33.663119  491150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 03:07:33.663142  491150 round_trippers.go:577] Response Headers:
	I0116 03:07:33.663151  491150 round_trippers.go:580]     Content-Length: 291
	I0116 03:07:33.663160  491150 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:07:33 GMT
	I0116 03:07:33.663168  491150 round_trippers.go:580]     Audit-Id: 612425ee-5e4d-414b-9da3-8974e4e68385
	I0116 03:07:33.663179  491150 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:07:33.663188  491150 round_trippers.go:580]     Content-Type: application/json
	I0116 03:07:33.663214  491150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 03:07:33.663236  491150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 03:07:33.663316  491150 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"dd77c785-c90f-4789-97cb-f593b7a7a7e2","resourceVersion":"832","creationTimestamp":"2024-01-16T02:57:11Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0116 03:07:33.663513  491150 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-405494" context rescaled to 1 replicas
	I0116 03:07:33.663563  491150 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.70 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0116 03:07:33.665498  491150 out.go:177] * Verifying Kubernetes components...
	I0116 03:07:33.668292  491150 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 03:07:33.769481  491150 command_runner.go:130] > apiVersion: v1
	I0116 03:07:33.769505  491150 command_runner.go:130] > data:
	I0116 03:07:33.769509  491150 command_runner.go:130] >   Corefile: |
	I0116 03:07:33.769513  491150 command_runner.go:130] >     .:53 {
	I0116 03:07:33.769517  491150 command_runner.go:130] >         log
	I0116 03:07:33.769523  491150 command_runner.go:130] >         errors
	I0116 03:07:33.769527  491150 command_runner.go:130] >         health {
	I0116 03:07:33.769534  491150 command_runner.go:130] >            lameduck 5s
	I0116 03:07:33.769537  491150 command_runner.go:130] >         }
	I0116 03:07:33.769542  491150 command_runner.go:130] >         ready
	I0116 03:07:33.769548  491150 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0116 03:07:33.769552  491150 command_runner.go:130] >            pods insecure
	I0116 03:07:33.769558  491150 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0116 03:07:33.769562  491150 command_runner.go:130] >            ttl 30
	I0116 03:07:33.769567  491150 command_runner.go:130] >         }
	I0116 03:07:33.769578  491150 command_runner.go:130] >         prometheus :9153
	I0116 03:07:33.769587  491150 command_runner.go:130] >         hosts {
	I0116 03:07:33.769595  491150 command_runner.go:130] >            192.168.39.1 host.minikube.internal
	I0116 03:07:33.769600  491150 command_runner.go:130] >            fallthrough
	I0116 03:07:33.769605  491150 command_runner.go:130] >         }
	I0116 03:07:33.769610  491150 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0116 03:07:33.769617  491150 command_runner.go:130] >            max_concurrent 1000
	I0116 03:07:33.769620  491150 command_runner.go:130] >         }
	I0116 03:07:33.769625  491150 command_runner.go:130] >         cache 30
	I0116 03:07:33.769643  491150 command_runner.go:130] >         loop
	I0116 03:07:33.769655  491150 command_runner.go:130] >         reload
	I0116 03:07:33.769660  491150 command_runner.go:130] >         loadbalance
	I0116 03:07:33.769663  491150 command_runner.go:130] >     }
	I0116 03:07:33.769667  491150 command_runner.go:130] > kind: ConfigMap
	I0116 03:07:33.769671  491150 command_runner.go:130] > metadata:
	I0116 03:07:33.769676  491150 command_runner.go:130] >   creationTimestamp: "2024-01-16T02:57:11Z"
	I0116 03:07:33.769679  491150 command_runner.go:130] >   name: coredns
	I0116 03:07:33.769686  491150 command_runner.go:130] >   namespace: kube-system
	I0116 03:07:33.769695  491150 command_runner.go:130] >   resourceVersion: "394"
	I0116 03:07:33.769706  491150 command_runner.go:130] >   uid: 10412523-6dfe-4aad-b001-dd354ac18003
	I0116 03:07:33.773594  491150 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0116 03:07:33.773654  491150 node_ready.go:35] waiting up to 6m0s for node "multinode-405494" to be "Ready" ...
	I0116 03:07:33.828995  491150 request.go:629] Waited for 55.19405ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.70:8443/api/v1/nodes/multinode-405494
	I0116 03:07:33.829104  491150 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/multinode-405494
	I0116 03:07:33.829118  491150 round_trippers.go:469] Request Headers:
	I0116 03:07:33.829130  491150 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:07:33.829141  491150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 03:07:33.831753  491150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:07:33.831771  491150 round_trippers.go:577] Response Headers:
	I0116 03:07:33.831779  491150 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:07:33 GMT
	I0116 03:07:33.831787  491150 round_trippers.go:580]     Audit-Id: 7780e7af-d6e6-44cc-bd04-04efe0b0ec6c
	I0116 03:07:33.831795  491150 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:07:33.831801  491150 round_trippers.go:580]     Content-Type: application/json
	I0116 03:07:33.831809  491150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 03:07:33.831817  491150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 03:07:33.832219  491150 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-405494","uid":"396eb6bf-1dc3-46c5-8016-dbf8af754fa2","resourceVersion":"732","creationTimestamp":"2024-01-16T02:57:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-405494","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-405494","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_57_12_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:57:07Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I0116 03:07:34.274924  491150 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/multinode-405494
	I0116 03:07:34.274954  491150 round_trippers.go:469] Request Headers:
	I0116 03:07:34.274967  491150 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:07:34.274974  491150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 03:07:34.277776  491150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:07:34.277810  491150 round_trippers.go:577] Response Headers:
	I0116 03:07:34.277819  491150 round_trippers.go:580]     Audit-Id: 5ea45dc3-48bf-4839-9afc-ec41c43d9c6c
	I0116 03:07:34.277824  491150 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:07:34.277829  491150 round_trippers.go:580]     Content-Type: application/json
	I0116 03:07:34.277835  491150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 03:07:34.277840  491150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 03:07:34.277845  491150 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:07:34 GMT
	I0116 03:07:34.278092  491150 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-405494","uid":"396eb6bf-1dc3-46c5-8016-dbf8af754fa2","resourceVersion":"732","creationTimestamp":"2024-01-16T02:57:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-405494","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-405494","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_57_12_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:57:07Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I0116 03:07:34.774836  491150 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/multinode-405494
	I0116 03:07:34.774873  491150 round_trippers.go:469] Request Headers:
	I0116 03:07:34.774885  491150 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:07:34.774894  491150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 03:07:34.777717  491150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:07:34.777742  491150 round_trippers.go:577] Response Headers:
	I0116 03:07:34.777764  491150 round_trippers.go:580]     Audit-Id: 09733d7d-2cea-4870-854f-0fb9ac0d251c
	I0116 03:07:34.777772  491150 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:07:34.777783  491150 round_trippers.go:580]     Content-Type: application/json
	I0116 03:07:34.777793  491150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 03:07:34.777802  491150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 03:07:34.777814  491150 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:07:34 GMT
	I0116 03:07:34.778170  491150 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-405494","uid":"396eb6bf-1dc3-46c5-8016-dbf8af754fa2","resourceVersion":"732","creationTimestamp":"2024-01-16T02:57:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-405494","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-405494","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_57_12_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:57:07Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I0116 03:07:35.274933  491150 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/multinode-405494
	I0116 03:07:35.274970  491150 round_trippers.go:469] Request Headers:
	I0116 03:07:35.274983  491150 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:07:35.274992  491150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 03:07:35.277957  491150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:07:35.277989  491150 round_trippers.go:577] Response Headers:
	I0116 03:07:35.278001  491150 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:07:35.278026  491150 round_trippers.go:580]     Content-Type: application/json
	I0116 03:07:35.278035  491150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 03:07:35.278043  491150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 03:07:35.278055  491150 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:07:35 GMT
	I0116 03:07:35.278067  491150 round_trippers.go:580]     Audit-Id: 3a9131fe-2497-40c2-811c-e61ccf5656b0
	I0116 03:07:35.278315  491150 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-405494","uid":"396eb6bf-1dc3-46c5-8016-dbf8af754fa2","resourceVersion":"732","creationTimestamp":"2024-01-16T02:57:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-405494","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-405494","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_57_12_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:57:07Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I0116 03:07:35.774743  491150 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/multinode-405494
	I0116 03:07:35.774778  491150 round_trippers.go:469] Request Headers:
	I0116 03:07:35.774789  491150 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:07:35.774798  491150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 03:07:35.778118  491150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 03:07:35.778146  491150 round_trippers.go:577] Response Headers:
	I0116 03:07:35.778156  491150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 03:07:35.778166  491150 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:07:35 GMT
	I0116 03:07:35.778174  491150 round_trippers.go:580]     Audit-Id: 73cd1639-b591-45f3-82c3-15ec96f6ff98
	I0116 03:07:35.778183  491150 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:07:35.778191  491150 round_trippers.go:580]     Content-Type: application/json
	I0116 03:07:35.778198  491150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 03:07:35.778480  491150 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-405494","uid":"396eb6bf-1dc3-46c5-8016-dbf8af754fa2","resourceVersion":"732","creationTimestamp":"2024-01-16T02:57:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-405494","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-405494","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_57_12_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:57:07Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I0116 03:07:35.778927  491150 node_ready.go:58] node "multinode-405494" has status "Ready":"False"
	I0116 03:07:36.274151  491150 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/multinode-405494
	I0116 03:07:36.274177  491150 round_trippers.go:469] Request Headers:
	I0116 03:07:36.274186  491150 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:07:36.274192  491150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 03:07:36.277149  491150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:07:36.277173  491150 round_trippers.go:577] Response Headers:
	I0116 03:07:36.277181  491150 round_trippers.go:580]     Audit-Id: 878dd6d8-644d-4e6c-bfba-13f96bf6645b
	I0116 03:07:36.277187  491150 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:07:36.277192  491150 round_trippers.go:580]     Content-Type: application/json
	I0116 03:07:36.277197  491150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 03:07:36.277202  491150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 03:07:36.277207  491150 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:07:36 GMT
	I0116 03:07:36.277539  491150 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-405494","uid":"396eb6bf-1dc3-46c5-8016-dbf8af754fa2","resourceVersion":"732","creationTimestamp":"2024-01-16T02:57:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-405494","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-405494","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_57_12_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:57:07Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I0116 03:07:36.774769  491150 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/multinode-405494
	I0116 03:07:36.774801  491150 round_trippers.go:469] Request Headers:
	I0116 03:07:36.774810  491150 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:07:36.774817  491150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 03:07:36.777860  491150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 03:07:36.777885  491150 round_trippers.go:577] Response Headers:
	I0116 03:07:36.777895  491150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 03:07:36.777904  491150 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:07:36 GMT
	I0116 03:07:36.777913  491150 round_trippers.go:580]     Audit-Id: 71f74fc0-07c6-498d-8f5c-444a457bccd2
	I0116 03:07:36.777922  491150 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:07:36.777930  491150 round_trippers.go:580]     Content-Type: application/json
	I0116 03:07:36.777939  491150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 03:07:36.778349  491150 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-405494","uid":"396eb6bf-1dc3-46c5-8016-dbf8af754fa2","resourceVersion":"732","creationTimestamp":"2024-01-16T02:57:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-405494","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-405494","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_57_12_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:57:07Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I0116 03:07:37.274008  491150 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/multinode-405494
	I0116 03:07:37.274051  491150 round_trippers.go:469] Request Headers:
	I0116 03:07:37.274062  491150 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:07:37.274068  491150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 03:07:37.277309  491150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 03:07:37.277338  491150 round_trippers.go:577] Response Headers:
	I0116 03:07:37.277350  491150 round_trippers.go:580]     Audit-Id: 7c87f910-3173-4dbf-a45f-50df3a55babe
	I0116 03:07:37.277357  491150 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:07:37.277365  491150 round_trippers.go:580]     Content-Type: application/json
	I0116 03:07:37.277373  491150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 03:07:37.277382  491150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 03:07:37.277390  491150 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:07:37 GMT
	I0116 03:07:37.277619  491150 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-405494","uid":"396eb6bf-1dc3-46c5-8016-dbf8af754fa2","resourceVersion":"732","creationTimestamp":"2024-01-16T02:57:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-405494","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-405494","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_57_12_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:57:07Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I0116 03:07:37.774235  491150 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/multinode-405494
	I0116 03:07:37.774265  491150 round_trippers.go:469] Request Headers:
	I0116 03:07:37.774275  491150 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:07:37.774281  491150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 03:07:37.777386  491150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 03:07:37.777416  491150 round_trippers.go:577] Response Headers:
	I0116 03:07:37.777428  491150 round_trippers.go:580]     Audit-Id: c5d903cc-94b8-4af6-a659-ad2c90f084e6
	I0116 03:07:37.777437  491150 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:07:37.777445  491150 round_trippers.go:580]     Content-Type: application/json
	I0116 03:07:37.777453  491150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 03:07:37.777461  491150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 03:07:37.777470  491150 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:07:37 GMT
	I0116 03:07:37.777621  491150 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-405494","uid":"396eb6bf-1dc3-46c5-8016-dbf8af754fa2","resourceVersion":"732","creationTimestamp":"2024-01-16T02:57:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-405494","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-405494","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_57_12_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:57:07Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I0116 03:07:38.274838  491150 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/multinode-405494
	I0116 03:07:38.274867  491150 round_trippers.go:469] Request Headers:
	I0116 03:07:38.274879  491150 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:07:38.274888  491150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 03:07:38.278590  491150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 03:07:38.278618  491150 round_trippers.go:577] Response Headers:
	I0116 03:07:38.278629  491150 round_trippers.go:580]     Audit-Id: 49f09ff7-b136-454c-9ef0-0a0fb038cb6e
	I0116 03:07:38.278644  491150 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:07:38.278651  491150 round_trippers.go:580]     Content-Type: application/json
	I0116 03:07:38.278658  491150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 03:07:38.278667  491150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 03:07:38.278674  491150 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:07:38 GMT
	I0116 03:07:38.279363  491150 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-405494","uid":"396eb6bf-1dc3-46c5-8016-dbf8af754fa2","resourceVersion":"732","creationTimestamp":"2024-01-16T02:57:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-405494","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-405494","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_57_12_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:57:07Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I0116 03:07:38.279812  491150 node_ready.go:58] node "multinode-405494" has status "Ready":"False"
	I0116 03:07:38.773860  491150 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/multinode-405494
	I0116 03:07:38.773886  491150 round_trippers.go:469] Request Headers:
	I0116 03:07:38.773895  491150 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:07:38.773901  491150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 03:07:38.777132  491150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 03:07:38.777161  491150 round_trippers.go:577] Response Headers:
	I0116 03:07:38.777172  491150 round_trippers.go:580]     Audit-Id: 9a448e98-3b24-4930-82bc-bb7d7ea76a4b
	I0116 03:07:38.777181  491150 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:07:38.777189  491150 round_trippers.go:580]     Content-Type: application/json
	I0116 03:07:38.777198  491150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 03:07:38.777206  491150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 03:07:38.777218  491150 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:07:38 GMT
	I0116 03:07:38.777434  491150 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-405494","uid":"396eb6bf-1dc3-46c5-8016-dbf8af754fa2","resourceVersion":"732","creationTimestamp":"2024-01-16T02:57:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-405494","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-405494","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_57_12_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:57:07Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I0116 03:07:39.274518  491150 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/multinode-405494
	I0116 03:07:39.274545  491150 round_trippers.go:469] Request Headers:
	I0116 03:07:39.274553  491150 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:07:39.274559  491150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 03:07:39.277562  491150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:07:39.277583  491150 round_trippers.go:577] Response Headers:
	I0116 03:07:39.277593  491150 round_trippers.go:580]     Audit-Id: 73cc7e27-ae25-4958-935e-d193759d91ce
	I0116 03:07:39.277602  491150 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:07:39.277612  491150 round_trippers.go:580]     Content-Type: application/json
	I0116 03:07:39.277621  491150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 03:07:39.277629  491150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 03:07:39.277640  491150 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:07:39 GMT
	I0116 03:07:39.277856  491150 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-405494","uid":"396eb6bf-1dc3-46c5-8016-dbf8af754fa2","resourceVersion":"861","creationTimestamp":"2024-01-16T02:57:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-405494","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-405494","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_57_12_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:57:07Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0116 03:07:39.278342  491150 node_ready.go:49] node "multinode-405494" has status "Ready":"True"
	I0116 03:07:39.278367  491150 node_ready.go:38] duration metric: took 5.504679824s waiting for node "multinode-405494" to be "Ready" ...
	I0116 03:07:39.278380  491150 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 03:07:39.278476  491150 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods
	I0116 03:07:39.278490  491150 round_trippers.go:469] Request Headers:
	I0116 03:07:39.278501  491150 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:07:39.278510  491150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 03:07:39.282106  491150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 03:07:39.282129  491150 round_trippers.go:577] Response Headers:
	I0116 03:07:39.282140  491150 round_trippers.go:580]     Audit-Id: 93423008-d631-4f43-8845-217fe42943c9
	I0116 03:07:39.282153  491150 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:07:39.282170  491150 round_trippers.go:580]     Content-Type: application/json
	I0116 03:07:39.282192  491150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 03:07:39.282203  491150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 03:07:39.282211  491150 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:07:39 GMT
	I0116 03:07:39.283625  491150 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"861"},"items":[{"metadata":{"name":"coredns-5dd5756b68-vwqvk","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"096151e2-c59c-4dcf-bd29-2029901902c9","resourceVersion":"798","creationTimestamp":"2024-01-16T02:57:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c3967b3a-de7c-4ccd-af3a-dd2a9e8b71f8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:57:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c3967b3a-de7c-4ccd-af3a-dd2a9e8b71f8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 83533 chars]
	I0116 03:07:39.286219  491150 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-vwqvk" in "kube-system" namespace to be "Ready" ...
	I0116 03:07:39.286325  491150 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-vwqvk
	I0116 03:07:39.286335  491150 round_trippers.go:469] Request Headers:
	I0116 03:07:39.286342  491150 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:07:39.286350  491150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 03:07:39.289535  491150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 03:07:39.289553  491150 round_trippers.go:577] Response Headers:
	I0116 03:07:39.289560  491150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 03:07:39.289568  491150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 03:07:39.289580  491150 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:07:39 GMT
	I0116 03:07:39.289592  491150 round_trippers.go:580]     Audit-Id: f22d7a78-8191-478e-9c3e-1abce0834a40
	I0116 03:07:39.289603  491150 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:07:39.289609  491150 round_trippers.go:580]     Content-Type: application/json
	I0116 03:07:39.290114  491150 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-vwqvk","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"096151e2-c59c-4dcf-bd29-2029901902c9","resourceVersion":"798","creationTimestamp":"2024-01-16T02:57:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c3967b3a-de7c-4ccd-af3a-dd2a9e8b71f8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:57:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c3967b3a-de7c-4ccd-af3a-dd2a9e8b71f8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6369 chars]
	I0116 03:07:39.290565  491150 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/multinode-405494
	I0116 03:07:39.290579  491150 round_trippers.go:469] Request Headers:
	I0116 03:07:39.290586  491150 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:07:39.290592  491150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 03:07:39.293696  491150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 03:07:39.293718  491150 round_trippers.go:577] Response Headers:
	I0116 03:07:39.293726  491150 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:07:39 GMT
	I0116 03:07:39.293733  491150 round_trippers.go:580]     Audit-Id: 60de6382-3fac-4300-9c8d-3002a3f86b7e
	I0116 03:07:39.293742  491150 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:07:39.293748  491150 round_trippers.go:580]     Content-Type: application/json
	I0116 03:07:39.293760  491150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 03:07:39.293768  491150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 03:07:39.294341  491150 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-405494","uid":"396eb6bf-1dc3-46c5-8016-dbf8af754fa2","resourceVersion":"861","creationTimestamp":"2024-01-16T02:57:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-405494","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-405494","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_57_12_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:57:07Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0116 03:07:39.787076  491150 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-vwqvk
	I0116 03:07:39.787108  491150 round_trippers.go:469] Request Headers:
	I0116 03:07:39.787117  491150 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:07:39.787123  491150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 03:07:39.790029  491150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:07:39.790066  491150 round_trippers.go:577] Response Headers:
	I0116 03:07:39.790076  491150 round_trippers.go:580]     Content-Type: application/json
	I0116 03:07:39.790084  491150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 03:07:39.790096  491150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 03:07:39.790119  491150 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:07:39 GMT
	I0116 03:07:39.790126  491150 round_trippers.go:580]     Audit-Id: ed945552-dd7e-4586-976f-62e0dcc664d8
	I0116 03:07:39.790131  491150 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:07:39.790434  491150 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-vwqvk","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"096151e2-c59c-4dcf-bd29-2029901902c9","resourceVersion":"798","creationTimestamp":"2024-01-16T02:57:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c3967b3a-de7c-4ccd-af3a-dd2a9e8b71f8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:57:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c3967b3a-de7c-4ccd-af3a-dd2a9e8b71f8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6369 chars]
	I0116 03:07:39.790910  491150 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/multinode-405494
	I0116 03:07:39.790926  491150 round_trippers.go:469] Request Headers:
	I0116 03:07:39.790933  491150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 03:07:39.790942  491150 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:07:39.793441  491150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:07:39.793459  491150 round_trippers.go:577] Response Headers:
	I0116 03:07:39.793468  491150 round_trippers.go:580]     Content-Type: application/json
	I0116 03:07:39.793477  491150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 03:07:39.793499  491150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 03:07:39.793513  491150 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:07:39 GMT
	I0116 03:07:39.793518  491150 round_trippers.go:580]     Audit-Id: 55090f06-561b-44e8-bc59-706f551ae25b
	I0116 03:07:39.793523  491150 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:07:39.793652  491150 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-405494","uid":"396eb6bf-1dc3-46c5-8016-dbf8af754fa2","resourceVersion":"861","creationTimestamp":"2024-01-16T02:57:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-405494","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-405494","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_57_12_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:57:07Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0116 03:07:40.287373  491150 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-vwqvk
	I0116 03:07:40.287404  491150 round_trippers.go:469] Request Headers:
	I0116 03:07:40.287412  491150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 03:07:40.287419  491150 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:07:40.290832  491150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 03:07:40.290856  491150 round_trippers.go:577] Response Headers:
	I0116 03:07:40.290864  491150 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:07:40.290871  491150 round_trippers.go:580]     Content-Type: application/json
	I0116 03:07:40.290879  491150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 03:07:40.290889  491150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 03:07:40.290902  491150 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:07:40 GMT
	I0116 03:07:40.290911  491150 round_trippers.go:580]     Audit-Id: 1ccabf15-8e6f-4e4d-baee-ca54a492cb24
	I0116 03:07:40.291733  491150 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-vwqvk","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"096151e2-c59c-4dcf-bd29-2029901902c9","resourceVersion":"798","creationTimestamp":"2024-01-16T02:57:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c3967b3a-de7c-4ccd-af3a-dd2a9e8b71f8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:57:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c3967b3a-de7c-4ccd-af3a-dd2a9e8b71f8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6369 chars]
	I0116 03:07:40.292229  491150 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/multinode-405494
	I0116 03:07:40.292246  491150 round_trippers.go:469] Request Headers:
	I0116 03:07:40.292253  491150 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:07:40.292259  491150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 03:07:40.294988  491150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:07:40.295017  491150 round_trippers.go:577] Response Headers:
	I0116 03:07:40.295028  491150 round_trippers.go:580]     Audit-Id: 65fc4960-69df-4343-8050-7a3ba2798e96
	I0116 03:07:40.295033  491150 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:07:40.295038  491150 round_trippers.go:580]     Content-Type: application/json
	I0116 03:07:40.295043  491150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 03:07:40.295048  491150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 03:07:40.295055  491150 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:07:40 GMT
	I0116 03:07:40.295176  491150 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-405494","uid":"396eb6bf-1dc3-46c5-8016-dbf8af754fa2","resourceVersion":"861","creationTimestamp":"2024-01-16T02:57:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-405494","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-405494","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_57_12_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:57:07Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0116 03:07:40.786597  491150 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-vwqvk
	I0116 03:07:40.786650  491150 round_trippers.go:469] Request Headers:
	I0116 03:07:40.786660  491150 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:07:40.786667  491150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 03:07:40.789650  491150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:07:40.789688  491150 round_trippers.go:577] Response Headers:
	I0116 03:07:40.789699  491150 round_trippers.go:580]     Content-Type: application/json
	I0116 03:07:40.789707  491150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 03:07:40.789714  491150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 03:07:40.789727  491150 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:07:40 GMT
	I0116 03:07:40.789738  491150 round_trippers.go:580]     Audit-Id: 3a232f32-de20-4855-b649-210513fc2a28
	I0116 03:07:40.789746  491150 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:07:40.789891  491150 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-vwqvk","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"096151e2-c59c-4dcf-bd29-2029901902c9","resourceVersion":"798","creationTimestamp":"2024-01-16T02:57:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c3967b3a-de7c-4ccd-af3a-dd2a9e8b71f8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:57:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c3967b3a-de7c-4ccd-af3a-dd2a9e8b71f8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6369 chars]
	I0116 03:07:40.790433  491150 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/multinode-405494
	I0116 03:07:40.790452  491150 round_trippers.go:469] Request Headers:
	I0116 03:07:40.790464  491150 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:07:40.790473  491150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 03:07:40.793126  491150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:07:40.793148  491150 round_trippers.go:577] Response Headers:
	I0116 03:07:40.793158  491150 round_trippers.go:580]     Audit-Id: ebea3006-6465-4a8c-a626-83cdaa874d29
	I0116 03:07:40.793166  491150 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:07:40.793172  491150 round_trippers.go:580]     Content-Type: application/json
	I0116 03:07:40.793179  491150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 03:07:40.793187  491150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 03:07:40.793197  491150 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:07:40 GMT
	I0116 03:07:40.793532  491150 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-405494","uid":"396eb6bf-1dc3-46c5-8016-dbf8af754fa2","resourceVersion":"861","creationTimestamp":"2024-01-16T02:57:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-405494","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-405494","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_57_12_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:57:07Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0116 03:07:41.287292  491150 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-vwqvk
	I0116 03:07:41.287327  491150 round_trippers.go:469] Request Headers:
	I0116 03:07:41.287340  491150 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:07:41.287351  491150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 03:07:41.290953  491150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 03:07:41.290985  491150 round_trippers.go:577] Response Headers:
	I0116 03:07:41.290997  491150 round_trippers.go:580]     Audit-Id: f1bd3796-d4d5-4021-8d95-d9e27cc82128
	I0116 03:07:41.291006  491150 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:07:41.291015  491150 round_trippers.go:580]     Content-Type: application/json
	I0116 03:07:41.291028  491150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 03:07:41.291036  491150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 03:07:41.291046  491150 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:07:41 GMT
	I0116 03:07:41.291292  491150 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-vwqvk","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"096151e2-c59c-4dcf-bd29-2029901902c9","resourceVersion":"798","creationTimestamp":"2024-01-16T02:57:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c3967b3a-de7c-4ccd-af3a-dd2a9e8b71f8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:57:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c3967b3a-de7c-4ccd-af3a-dd2a9e8b71f8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6369 chars]
	I0116 03:07:41.291932  491150 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/multinode-405494
	I0116 03:07:41.291958  491150 round_trippers.go:469] Request Headers:
	I0116 03:07:41.291972  491150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 03:07:41.291985  491150 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:07:41.299363  491150 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0116 03:07:41.299381  491150 round_trippers.go:577] Response Headers:
	I0116 03:07:41.299389  491150 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:07:41.299394  491150 round_trippers.go:580]     Content-Type: application/json
	I0116 03:07:41.299399  491150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 03:07:41.299404  491150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 03:07:41.299411  491150 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:07:41 GMT
	I0116 03:07:41.299420  491150 round_trippers.go:580]     Audit-Id: 263c14d2-15e2-4192-b380-24f1c22b80a2
	I0116 03:07:41.299952  491150 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-405494","uid":"396eb6bf-1dc3-46c5-8016-dbf8af754fa2","resourceVersion":"861","creationTimestamp":"2024-01-16T02:57:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-405494","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-405494","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_57_12_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:57:07Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0116 03:07:41.300390  491150 pod_ready.go:102] pod "coredns-5dd5756b68-vwqvk" in "kube-system" namespace has status "Ready":"False"
	I0116 03:07:41.787540  491150 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-vwqvk
	I0116 03:07:41.787575  491150 round_trippers.go:469] Request Headers:
	I0116 03:07:41.787587  491150 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:07:41.787597  491150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 03:07:41.790109  491150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:07:41.790139  491150 round_trippers.go:577] Response Headers:
	I0116 03:07:41.790150  491150 round_trippers.go:580]     Audit-Id: b1828473-33e5-453f-b22d-ac0d64259357
	I0116 03:07:41.790157  491150 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:07:41.790165  491150 round_trippers.go:580]     Content-Type: application/json
	I0116 03:07:41.790172  491150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 03:07:41.790179  491150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 03:07:41.790188  491150 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:07:41 GMT
	I0116 03:07:41.790339  491150 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-vwqvk","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"096151e2-c59c-4dcf-bd29-2029901902c9","resourceVersion":"798","creationTimestamp":"2024-01-16T02:57:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c3967b3a-de7c-4ccd-af3a-dd2a9e8b71f8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:57:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c3967b3a-de7c-4ccd-af3a-dd2a9e8b71f8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6369 chars]
	I0116 03:07:41.790941  491150 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/multinode-405494
	I0116 03:07:41.790959  491150 round_trippers.go:469] Request Headers:
	I0116 03:07:41.790970  491150 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:07:41.790979  491150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 03:07:41.793205  491150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:07:41.793226  491150 round_trippers.go:577] Response Headers:
	I0116 03:07:41.793233  491150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 03:07:41.793242  491150 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:07:41 GMT
	I0116 03:07:41.793255  491150 round_trippers.go:580]     Audit-Id: ef352970-f37e-4882-9115-fea7e19841db
	I0116 03:07:41.793264  491150 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:07:41.793272  491150 round_trippers.go:580]     Content-Type: application/json
	I0116 03:07:41.793283  491150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 03:07:41.793459  491150 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-405494","uid":"396eb6bf-1dc3-46c5-8016-dbf8af754fa2","resourceVersion":"861","creationTimestamp":"2024-01-16T02:57:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-405494","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-405494","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_57_12_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:57:07Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0116 03:07:42.287191  491150 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-vwqvk
	I0116 03:07:42.287218  491150 round_trippers.go:469] Request Headers:
	I0116 03:07:42.287227  491150 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:07:42.287233  491150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 03:07:42.290221  491150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:07:42.290246  491150 round_trippers.go:577] Response Headers:
	I0116 03:07:42.290256  491150 round_trippers.go:580]     Audit-Id: ae66fc96-5fb6-469d-b884-5a207725e39c
	I0116 03:07:42.290265  491150 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:07:42.290275  491150 round_trippers.go:580]     Content-Type: application/json
	I0116 03:07:42.290282  491150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 03:07:42.290297  491150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 03:07:42.290305  491150 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:07:42 GMT
	I0116 03:07:42.290874  491150 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-vwqvk","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"096151e2-c59c-4dcf-bd29-2029901902c9","resourceVersion":"798","creationTimestamp":"2024-01-16T02:57:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c3967b3a-de7c-4ccd-af3a-dd2a9e8b71f8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:57:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c3967b3a-de7c-4ccd-af3a-dd2a9e8b71f8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6369 chars]
	I0116 03:07:42.291443  491150 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/multinode-405494
	I0116 03:07:42.291462  491150 round_trippers.go:469] Request Headers:
	I0116 03:07:42.291474  491150 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:07:42.291501  491150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 03:07:42.293793  491150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:07:42.293811  491150 round_trippers.go:577] Response Headers:
	I0116 03:07:42.293818  491150 round_trippers.go:580]     Audit-Id: 177e3bac-09d8-47ac-866b-a5e74eaffb29
	I0116 03:07:42.293824  491150 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:07:42.293832  491150 round_trippers.go:580]     Content-Type: application/json
	I0116 03:07:42.293841  491150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 03:07:42.293849  491150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 03:07:42.293861  491150 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:07:42 GMT
	I0116 03:07:42.294214  491150 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-405494","uid":"396eb6bf-1dc3-46c5-8016-dbf8af754fa2","resourceVersion":"861","creationTimestamp":"2024-01-16T02:57:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-405494","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-405494","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_57_12_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:57:07Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0116 03:07:42.786905  491150 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-vwqvk
	I0116 03:07:42.786933  491150 round_trippers.go:469] Request Headers:
	I0116 03:07:42.786942  491150 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:07:42.786948  491150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 03:07:42.790110  491150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 03:07:42.790144  491150 round_trippers.go:577] Response Headers:
	I0116 03:07:42.790155  491150 round_trippers.go:580]     Audit-Id: b909f62d-3043-45ce-b122-0f5b5c2114dd
	I0116 03:07:42.790163  491150 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:07:42.790170  491150 round_trippers.go:580]     Content-Type: application/json
	I0116 03:07:42.790177  491150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 03:07:42.790184  491150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 03:07:42.790192  491150 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:07:42 GMT
	I0116 03:07:42.790615  491150 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-vwqvk","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"096151e2-c59c-4dcf-bd29-2029901902c9","resourceVersion":"798","creationTimestamp":"2024-01-16T02:57:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c3967b3a-de7c-4ccd-af3a-dd2a9e8b71f8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:57:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c3967b3a-de7c-4ccd-af3a-dd2a9e8b71f8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6369 chars]
	I0116 03:07:42.791154  491150 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/multinode-405494
	I0116 03:07:42.791171  491150 round_trippers.go:469] Request Headers:
	I0116 03:07:42.791179  491150 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:07:42.791187  491150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 03:07:42.793456  491150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:07:42.793479  491150 round_trippers.go:577] Response Headers:
	I0116 03:07:42.793491  491150 round_trippers.go:580]     Audit-Id: 48611c8f-3ba4-4b34-ae16-6c19d0a3a6cb
	I0116 03:07:42.793497  491150 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:07:42.793502  491150 round_trippers.go:580]     Content-Type: application/json
	I0116 03:07:42.793507  491150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 03:07:42.793512  491150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 03:07:42.793517  491150 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:07:42 GMT
	I0116 03:07:42.793824  491150 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-405494","uid":"396eb6bf-1dc3-46c5-8016-dbf8af754fa2","resourceVersion":"861","creationTimestamp":"2024-01-16T02:57:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-405494","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-405494","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_57_12_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:57:07Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0116 03:07:43.286468  491150 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-vwqvk
	I0116 03:07:43.286497  491150 round_trippers.go:469] Request Headers:
	I0116 03:07:43.286507  491150 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:07:43.286513  491150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 03:07:43.290478  491150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 03:07:43.290501  491150 round_trippers.go:577] Response Headers:
	I0116 03:07:43.290509  491150 round_trippers.go:580]     Content-Type: application/json
	I0116 03:07:43.290514  491150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 03:07:43.290520  491150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 03:07:43.290525  491150 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:07:43 GMT
	I0116 03:07:43.290531  491150 round_trippers.go:580]     Audit-Id: 8d8dabb3-f4a4-471e-b101-d0aa6ec16375
	I0116 03:07:43.290554  491150 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:07:43.291546  491150 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-vwqvk","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"096151e2-c59c-4dcf-bd29-2029901902c9","resourceVersion":"798","creationTimestamp":"2024-01-16T02:57:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c3967b3a-de7c-4ccd-af3a-dd2a9e8b71f8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:57:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c3967b3a-de7c-4ccd-af3a-dd2a9e8b71f8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6369 chars]
	I0116 03:07:43.292014  491150 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/multinode-405494
	I0116 03:07:43.292027  491150 round_trippers.go:469] Request Headers:
	I0116 03:07:43.292049  491150 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:07:43.292056  491150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 03:07:43.295407  491150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 03:07:43.295427  491150 round_trippers.go:577] Response Headers:
	I0116 03:07:43.295434  491150 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:07:43 GMT
	I0116 03:07:43.295439  491150 round_trippers.go:580]     Audit-Id: 8d9b9a1d-6394-491b-a285-5b416e0cee50
	I0116 03:07:43.295444  491150 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:07:43.295449  491150 round_trippers.go:580]     Content-Type: application/json
	I0116 03:07:43.295454  491150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 03:07:43.295462  491150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 03:07:43.295828  491150 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-405494","uid":"396eb6bf-1dc3-46c5-8016-dbf8af754fa2","resourceVersion":"861","creationTimestamp":"2024-01-16T02:57:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-405494","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-405494","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_57_12_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:57:07Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0116 03:07:43.786627  491150 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-vwqvk
	I0116 03:07:43.786665  491150 round_trippers.go:469] Request Headers:
	I0116 03:07:43.786677  491150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 03:07:43.786685  491150 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:07:43.791905  491150 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0116 03:07:43.791935  491150 round_trippers.go:577] Response Headers:
	I0116 03:07:43.791946  491150 round_trippers.go:580]     Content-Type: application/json
	I0116 03:07:43.791953  491150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 03:07:43.791961  491150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 03:07:43.791968  491150 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:07:43 GMT
	I0116 03:07:43.791976  491150 round_trippers.go:580]     Audit-Id: 662e7bed-28ed-4bf7-b474-0787e08bfe40
	I0116 03:07:43.791989  491150 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:07:43.792694  491150 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-vwqvk","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"096151e2-c59c-4dcf-bd29-2029901902c9","resourceVersion":"798","creationTimestamp":"2024-01-16T02:57:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c3967b3a-de7c-4ccd-af3a-dd2a9e8b71f8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:57:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c3967b3a-de7c-4ccd-af3a-dd2a9e8b71f8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6369 chars]
	I0116 03:07:43.793383  491150 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/multinode-405494
	I0116 03:07:43.793409  491150 round_trippers.go:469] Request Headers:
	I0116 03:07:43.793421  491150 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:07:43.793429  491150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 03:07:43.796670  491150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 03:07:43.796694  491150 round_trippers.go:577] Response Headers:
	I0116 03:07:43.796703  491150 round_trippers.go:580]     Audit-Id: 202914d0-fa73-4976-b18d-873047ce834b
	I0116 03:07:43.796710  491150 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:07:43.796718  491150 round_trippers.go:580]     Content-Type: application/json
	I0116 03:07:43.796726  491150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 03:07:43.796735  491150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 03:07:43.796745  491150 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:07:43 GMT
	I0116 03:07:43.796938  491150 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-405494","uid":"396eb6bf-1dc3-46c5-8016-dbf8af754fa2","resourceVersion":"861","creationTimestamp":"2024-01-16T02:57:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-405494","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-405494","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_57_12_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:57:07Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0116 03:07:43.797335  491150 pod_ready.go:102] pod "coredns-5dd5756b68-vwqvk" in "kube-system" namespace has status "Ready":"False"
	I0116 03:07:44.287168  491150 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-vwqvk
	I0116 03:07:44.287201  491150 round_trippers.go:469] Request Headers:
	I0116 03:07:44.287217  491150 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:07:44.287227  491150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 03:07:44.290919  491150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 03:07:44.290949  491150 round_trippers.go:577] Response Headers:
	I0116 03:07:44.290960  491150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 03:07:44.290968  491150 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:07:44 GMT
	I0116 03:07:44.290975  491150 round_trippers.go:580]     Audit-Id: 14ed7054-5881-4da0-bd1b-45602e2abeb7
	I0116 03:07:44.290982  491150 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:07:44.290989  491150 round_trippers.go:580]     Content-Type: application/json
	I0116 03:07:44.290996  491150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 03:07:44.291285  491150 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-vwqvk","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"096151e2-c59c-4dcf-bd29-2029901902c9","resourceVersion":"798","creationTimestamp":"2024-01-16T02:57:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c3967b3a-de7c-4ccd-af3a-dd2a9e8b71f8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:57:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c3967b3a-de7c-4ccd-af3a-dd2a9e8b71f8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6369 chars]
	I0116 03:07:44.291931  491150 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/multinode-405494
	I0116 03:07:44.291951  491150 round_trippers.go:469] Request Headers:
	I0116 03:07:44.291959  491150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 03:07:44.291966  491150 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:07:44.294366  491150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:07:44.294391  491150 round_trippers.go:577] Response Headers:
	I0116 03:07:44.294400  491150 round_trippers.go:580]     Content-Type: application/json
	I0116 03:07:44.294408  491150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 03:07:44.294415  491150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 03:07:44.294422  491150 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:07:44 GMT
	I0116 03:07:44.294429  491150 round_trippers.go:580]     Audit-Id: 5b3de49f-b4b7-45f5-8544-031d395de9f0
	I0116 03:07:44.294437  491150 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:07:44.294825  491150 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-405494","uid":"396eb6bf-1dc3-46c5-8016-dbf8af754fa2","resourceVersion":"861","creationTimestamp":"2024-01-16T02:57:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-405494","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-405494","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_57_12_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:57:07Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0116 03:07:44.786505  491150 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-vwqvk
	I0116 03:07:44.786545  491150 round_trippers.go:469] Request Headers:
	I0116 03:07:44.786558  491150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 03:07:44.786567  491150 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:07:44.790452  491150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 03:07:44.790486  491150 round_trippers.go:577] Response Headers:
	I0116 03:07:44.790498  491150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 03:07:44.790506  491150 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:07:44 GMT
	I0116 03:07:44.790513  491150 round_trippers.go:580]     Audit-Id: 240c067f-a9c3-460d-beb0-4f415a0141c3
	I0116 03:07:44.790521  491150 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:07:44.790531  491150 round_trippers.go:580]     Content-Type: application/json
	I0116 03:07:44.790540  491150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 03:07:44.790820  491150 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-vwqvk","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"096151e2-c59c-4dcf-bd29-2029901902c9","resourceVersion":"798","creationTimestamp":"2024-01-16T02:57:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c3967b3a-de7c-4ccd-af3a-dd2a9e8b71f8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:57:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c3967b3a-de7c-4ccd-af3a-dd2a9e8b71f8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6369 chars]
	I0116 03:07:44.791441  491150 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/multinode-405494
	I0116 03:07:44.791462  491150 round_trippers.go:469] Request Headers:
	I0116 03:07:44.791474  491150 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:07:44.791483  491150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 03:07:44.794520  491150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 03:07:44.794545  491150 round_trippers.go:577] Response Headers:
	I0116 03:07:44.794555  491150 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:07:44.794568  491150 round_trippers.go:580]     Content-Type: application/json
	I0116 03:07:44.794576  491150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 03:07:44.794585  491150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 03:07:44.794598  491150 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:07:44 GMT
	I0116 03:07:44.794609  491150 round_trippers.go:580]     Audit-Id: 1ca0904b-c941-4ab2-b865-f9340dc88c7d
	I0116 03:07:44.794801  491150 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-405494","uid":"396eb6bf-1dc3-46c5-8016-dbf8af754fa2","resourceVersion":"861","creationTimestamp":"2024-01-16T02:57:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-405494","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-405494","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_57_12_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:57:07Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0116 03:07:45.286433  491150 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-vwqvk
	I0116 03:07:45.286481  491150 round_trippers.go:469] Request Headers:
	I0116 03:07:45.286498  491150 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:07:45.286506  491150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 03:07:45.289968  491150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 03:07:45.289993  491150 round_trippers.go:577] Response Headers:
	I0116 03:07:45.290001  491150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 03:07:45.290007  491150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 03:07:45.290012  491150 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:07:45 GMT
	I0116 03:07:45.290017  491150 round_trippers.go:580]     Audit-Id: ec550bcd-c4c6-4437-a5ef-4ffa64fb459f
	I0116 03:07:45.290022  491150 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:07:45.290027  491150 round_trippers.go:580]     Content-Type: application/json
	I0116 03:07:45.290290  491150 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-vwqvk","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"096151e2-c59c-4dcf-bd29-2029901902c9","resourceVersion":"798","creationTimestamp":"2024-01-16T02:57:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c3967b3a-de7c-4ccd-af3a-dd2a9e8b71f8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:57:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c3967b3a-de7c-4ccd-af3a-dd2a9e8b71f8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6369 chars]
	I0116 03:07:45.290760  491150 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/multinode-405494
	I0116 03:07:45.290773  491150 round_trippers.go:469] Request Headers:
	I0116 03:07:45.290781  491150 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:07:45.290787  491150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 03:07:45.294251  491150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 03:07:45.294272  491150 round_trippers.go:577] Response Headers:
	I0116 03:07:45.294282  491150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 03:07:45.294289  491150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 03:07:45.294296  491150 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:07:45 GMT
	I0116 03:07:45.294304  491150 round_trippers.go:580]     Audit-Id: a7983230-641f-4e96-8c50-8153d7110cbe
	I0116 03:07:45.294312  491150 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:07:45.294319  491150 round_trippers.go:580]     Content-Type: application/json
	I0116 03:07:45.294771  491150 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-405494","uid":"396eb6bf-1dc3-46c5-8016-dbf8af754fa2","resourceVersion":"861","creationTimestamp":"2024-01-16T02:57:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-405494","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-405494","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_57_12_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:57:07Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0116 03:07:45.786524  491150 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-vwqvk
	I0116 03:07:45.786554  491150 round_trippers.go:469] Request Headers:
	I0116 03:07:45.786564  491150 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:07:45.786570  491150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 03:07:45.789473  491150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:07:45.789502  491150 round_trippers.go:577] Response Headers:
	I0116 03:07:45.789514  491150 round_trippers.go:580]     Audit-Id: 895c5a14-6035-4984-a396-6d50185deb4c
	I0116 03:07:45.789523  491150 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:07:45.789532  491150 round_trippers.go:580]     Content-Type: application/json
	I0116 03:07:45.789541  491150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 03:07:45.789550  491150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 03:07:45.789559  491150 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:07:45 GMT
	I0116 03:07:45.789766  491150 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-vwqvk","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"096151e2-c59c-4dcf-bd29-2029901902c9","resourceVersion":"798","creationTimestamp":"2024-01-16T02:57:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c3967b3a-de7c-4ccd-af3a-dd2a9e8b71f8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:57:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c3967b3a-de7c-4ccd-af3a-dd2a9e8b71f8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6369 chars]
	I0116 03:07:45.790395  491150 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/multinode-405494
	I0116 03:07:45.790413  491150 round_trippers.go:469] Request Headers:
	I0116 03:07:45.790420  491150 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:07:45.790426  491150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 03:07:45.792912  491150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:07:45.792935  491150 round_trippers.go:577] Response Headers:
	I0116 03:07:45.792946  491150 round_trippers.go:580]     Audit-Id: 4105ba96-32f4-4313-8abe-2eea03df38a7
	I0116 03:07:45.792955  491150 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:07:45.792964  491150 round_trippers.go:580]     Content-Type: application/json
	I0116 03:07:45.792973  491150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 03:07:45.792982  491150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 03:07:45.792991  491150 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:07:45 GMT
	I0116 03:07:45.793144  491150 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-405494","uid":"396eb6bf-1dc3-46c5-8016-dbf8af754fa2","resourceVersion":"861","creationTimestamp":"2024-01-16T02:57:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-405494","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-405494","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_57_12_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:57:07Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0116 03:07:46.286782  491150 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-vwqvk
	I0116 03:07:46.286813  491150 round_trippers.go:469] Request Headers:
	I0116 03:07:46.286827  491150 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:07:46.286840  491150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 03:07:46.300997  491150 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0116 03:07:46.301022  491150 round_trippers.go:577] Response Headers:
	I0116 03:07:46.301030  491150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 03:07:46.301036  491150 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:07:46 GMT
	I0116 03:07:46.301041  491150 round_trippers.go:580]     Audit-Id: 024f9848-3c8a-4824-bec2-f77351c8b9e4
	I0116 03:07:46.301046  491150 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:07:46.301051  491150 round_trippers.go:580]     Content-Type: application/json
	I0116 03:07:46.301056  491150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 03:07:46.301206  491150 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-vwqvk","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"096151e2-c59c-4dcf-bd29-2029901902c9","resourceVersion":"798","creationTimestamp":"2024-01-16T02:57:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c3967b3a-de7c-4ccd-af3a-dd2a9e8b71f8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:57:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c3967b3a-de7c-4ccd-af3a-dd2a9e8b71f8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6369 chars]
	I0116 03:07:46.301722  491150 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/multinode-405494
	I0116 03:07:46.301737  491150 round_trippers.go:469] Request Headers:
	I0116 03:07:46.301745  491150 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:07:46.301751  491150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 03:07:46.322806  491150 round_trippers.go:574] Response Status: 200 OK in 21 milliseconds
	I0116 03:07:46.322834  491150 round_trippers.go:577] Response Headers:
	I0116 03:07:46.322841  491150 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:07:46 GMT
	I0116 03:07:46.322847  491150 round_trippers.go:580]     Audit-Id: 97fb51c5-e198-4aa5-876d-be2f9b5299e8
	I0116 03:07:46.322852  491150 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:07:46.322859  491150 round_trippers.go:580]     Content-Type: application/json
	I0116 03:07:46.322867  491150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 03:07:46.322875  491150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 03:07:46.323508  491150 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-405494","uid":"396eb6bf-1dc3-46c5-8016-dbf8af754fa2","resourceVersion":"861","creationTimestamp":"2024-01-16T02:57:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-405494","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-405494","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_57_12_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:57:07Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0116 03:07:46.323908  491150 pod_ready.go:102] pod "coredns-5dd5756b68-vwqvk" in "kube-system" namespace has status "Ready":"False"
	I0116 03:07:46.786841  491150 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-vwqvk
	I0116 03:07:46.786865  491150 round_trippers.go:469] Request Headers:
	I0116 03:07:46.786873  491150 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:07:46.786880  491150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 03:07:46.789772  491150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:07:46.789795  491150 round_trippers.go:577] Response Headers:
	I0116 03:07:46.789803  491150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 03:07:46.789809  491150 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:07:46 GMT
	I0116 03:07:46.789814  491150 round_trippers.go:580]     Audit-Id: 926e6026-3bd9-4fdd-a2bf-2b06f99526f8
	I0116 03:07:46.789825  491150 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:07:46.789833  491150 round_trippers.go:580]     Content-Type: application/json
	I0116 03:07:46.789841  491150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 03:07:46.790015  491150 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-vwqvk","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"096151e2-c59c-4dcf-bd29-2029901902c9","resourceVersion":"798","creationTimestamp":"2024-01-16T02:57:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c3967b3a-de7c-4ccd-af3a-dd2a9e8b71f8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:57:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c3967b3a-de7c-4ccd-af3a-dd2a9e8b71f8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6369 chars]
	I0116 03:07:46.790554  491150 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/multinode-405494
	I0116 03:07:46.790569  491150 round_trippers.go:469] Request Headers:
	I0116 03:07:46.790577  491150 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:07:46.790583  491150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 03:07:46.793662  491150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 03:07:46.793677  491150 round_trippers.go:577] Response Headers:
	I0116 03:07:46.793683  491150 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:07:46 GMT
	I0116 03:07:46.793691  491150 round_trippers.go:580]     Audit-Id: ec1f6e7b-14d4-457f-afc2-5fd35377c277
	I0116 03:07:46.793699  491150 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:07:46.793708  491150 round_trippers.go:580]     Content-Type: application/json
	I0116 03:07:46.793716  491150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 03:07:46.793725  491150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 03:07:46.794147  491150 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-405494","uid":"396eb6bf-1dc3-46c5-8016-dbf8af754fa2","resourceVersion":"861","creationTimestamp":"2024-01-16T02:57:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-405494","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-405494","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_57_12_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:57:07Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0116 03:07:47.286815  491150 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-vwqvk
	I0116 03:07:47.286844  491150 round_trippers.go:469] Request Headers:
	I0116 03:07:47.286853  491150 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:07:47.286859  491150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 03:07:47.289658  491150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:07:47.289690  491150 round_trippers.go:577] Response Headers:
	I0116 03:07:47.289698  491150 round_trippers.go:580]     Content-Type: application/json
	I0116 03:07:47.289705  491150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 03:07:47.289710  491150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 03:07:47.289715  491150 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:07:47 GMT
	I0116 03:07:47.289721  491150 round_trippers.go:580]     Audit-Id: 9e4edb30-a6c2-44da-a1c8-240e88098fab
	I0116 03:07:47.289726  491150 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:07:47.290186  491150 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-vwqvk","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"096151e2-c59c-4dcf-bd29-2029901902c9","resourceVersion":"892","creationTimestamp":"2024-01-16T02:57:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c3967b3a-de7c-4ccd-af3a-dd2a9e8b71f8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:57:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c3967b3a-de7c-4ccd-af3a-dd2a9e8b71f8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6264 chars]
	I0116 03:07:47.290803  491150 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/multinode-405494
	I0116 03:07:47.290820  491150 round_trippers.go:469] Request Headers:
	I0116 03:07:47.290832  491150 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:07:47.290842  491150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 03:07:47.293684  491150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:07:47.293705  491150 round_trippers.go:577] Response Headers:
	I0116 03:07:47.293713  491150 round_trippers.go:580]     Audit-Id: 0b35a910-71ce-4c00-ba9b-d0c4568cb77f
	I0116 03:07:47.293719  491150 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:07:47.293724  491150 round_trippers.go:580]     Content-Type: application/json
	I0116 03:07:47.293728  491150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 03:07:47.293734  491150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 03:07:47.293739  491150 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:07:47 GMT
	I0116 03:07:47.294164  491150 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-405494","uid":"396eb6bf-1dc3-46c5-8016-dbf8af754fa2","resourceVersion":"861","creationTimestamp":"2024-01-16T02:57:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-405494","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-405494","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_57_12_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:57:07Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0116 03:07:47.294575  491150 pod_ready.go:92] pod "coredns-5dd5756b68-vwqvk" in "kube-system" namespace has status "Ready":"True"
	I0116 03:07:47.294598  491150 pod_ready.go:81] duration metric: took 8.008351905s waiting for pod "coredns-5dd5756b68-vwqvk" in "kube-system" namespace to be "Ready" ...
	I0116 03:07:47.294618  491150 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-405494" in "kube-system" namespace to be "Ready" ...
	I0116 03:07:47.294704  491150 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-405494
	I0116 03:07:47.294713  491150 round_trippers.go:469] Request Headers:
	I0116 03:07:47.294723  491150 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:07:47.294733  491150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 03:07:47.297517  491150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:07:47.297533  491150 round_trippers.go:577] Response Headers:
	I0116 03:07:47.297540  491150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 03:07:47.297545  491150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 03:07:47.297551  491150 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:07:47 GMT
	I0116 03:07:47.297556  491150 round_trippers.go:580]     Audit-Id: 1b7cf67a-ed67-4e29-a968-76c94e35c3be
	I0116 03:07:47.297560  491150 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:07:47.297566  491150 round_trippers.go:580]     Content-Type: application/json
	I0116 03:07:47.297710  491150 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-405494","namespace":"kube-system","uid":"3f839da7-c0c0-4546-8848-1557cbf50722","resourceVersion":"866","creationTimestamp":"2024-01-16T02:57:12Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.70:2379","kubernetes.io/config.hash":"3a9e67d0e87fe64d9531234ab850034d","kubernetes.io/config.mirror":"3a9e67d0e87fe64d9531234ab850034d","kubernetes.io/config.seen":"2024-01-16T02:57:11.711592151Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-405494","uid":"396eb6bf-1dc3-46c5-8016-dbf8af754fa2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:57:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 5843 chars]
	I0116 03:07:47.298177  491150 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/multinode-405494
	I0116 03:07:47.298195  491150 round_trippers.go:469] Request Headers:
	I0116 03:07:47.298207  491150 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:07:47.298216  491150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 03:07:47.300336  491150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:07:47.300352  491150 round_trippers.go:577] Response Headers:
	I0116 03:07:47.300361  491150 round_trippers.go:580]     Audit-Id: d84001ca-8501-4566-a2e7-67a89d152dff
	I0116 03:07:47.300366  491150 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:07:47.300371  491150 round_trippers.go:580]     Content-Type: application/json
	I0116 03:07:47.300377  491150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 03:07:47.300385  491150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 03:07:47.300394  491150 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:07:47 GMT
	I0116 03:07:47.300554  491150 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-405494","uid":"396eb6bf-1dc3-46c5-8016-dbf8af754fa2","resourceVersion":"861","creationTimestamp":"2024-01-16T02:57:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-405494","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-405494","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_57_12_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:57:07Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0116 03:07:47.300833  491150 pod_ready.go:92] pod "etcd-multinode-405494" in "kube-system" namespace has status "Ready":"True"
	I0116 03:07:47.300847  491150 pod_ready.go:81] duration metric: took 6.223319ms waiting for pod "etcd-multinode-405494" in "kube-system" namespace to be "Ready" ...
	I0116 03:07:47.300863  491150 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-405494" in "kube-system" namespace to be "Ready" ...
	I0116 03:07:47.300917  491150 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-405494
	I0116 03:07:47.300924  491150 round_trippers.go:469] Request Headers:
	I0116 03:07:47.300930  491150 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:07:47.300936  491150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 03:07:47.303332  491150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:07:47.303345  491150 round_trippers.go:577] Response Headers:
	I0116 03:07:47.303351  491150 round_trippers.go:580]     Content-Type: application/json
	I0116 03:07:47.303356  491150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 03:07:47.303362  491150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 03:07:47.303366  491150 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:07:47 GMT
	I0116 03:07:47.303371  491150 round_trippers.go:580]     Audit-Id: 0ac6308f-469e-4d78-894a-2e7791641956
	I0116 03:07:47.303376  491150 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:07:47.303604  491150 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-405494","namespace":"kube-system","uid":"e242d3cf-6cf7-4b47-8d3e-a83e484554a1","resourceVersion":"882","creationTimestamp":"2024-01-16T02:57:09Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.70:8443","kubernetes.io/config.hash":"04bffd1a6d3ee0aae068c41e37830c9b","kubernetes.io/config.mirror":"04bffd1a6d3ee0aae068c41e37830c9b","kubernetes.io/config.seen":"2024-01-16T02:57:02.078602539Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-405494","uid":"396eb6bf-1dc3-46c5-8016-dbf8af754fa2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:57:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7380 chars]
	I0116 03:07:47.303981  491150 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/multinode-405494
	I0116 03:07:47.303991  491150 round_trippers.go:469] Request Headers:
	I0116 03:07:47.303998  491150 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:07:47.304004  491150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 03:07:47.305871  491150 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0116 03:07:47.305890  491150 round_trippers.go:577] Response Headers:
	I0116 03:07:47.305898  491150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 03:07:47.305903  491150 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:07:47 GMT
	I0116 03:07:47.305908  491150 round_trippers.go:580]     Audit-Id: 71b0b49d-e404-4079-8f98-5b38aa808523
	I0116 03:07:47.305913  491150 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:07:47.305922  491150 round_trippers.go:580]     Content-Type: application/json
	I0116 03:07:47.305927  491150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 03:07:47.306220  491150 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-405494","uid":"396eb6bf-1dc3-46c5-8016-dbf8af754fa2","resourceVersion":"861","creationTimestamp":"2024-01-16T02:57:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-405494","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-405494","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_57_12_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:57:07Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0116 03:07:47.306500  491150 pod_ready.go:92] pod "kube-apiserver-multinode-405494" in "kube-system" namespace has status "Ready":"True"
	I0116 03:07:47.306515  491150 pod_ready.go:81] duration metric: took 5.646292ms waiting for pod "kube-apiserver-multinode-405494" in "kube-system" namespace to be "Ready" ...
	I0116 03:07:47.306524  491150 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-405494" in "kube-system" namespace to be "Ready" ...
	I0116 03:07:47.306575  491150 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-405494
	I0116 03:07:47.306583  491150 round_trippers.go:469] Request Headers:
	I0116 03:07:47.306590  491150 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:07:47.306596  491150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 03:07:47.308463  491150 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0116 03:07:47.308481  491150 round_trippers.go:577] Response Headers:
	I0116 03:07:47.308490  491150 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:07:47 GMT
	I0116 03:07:47.308498  491150 round_trippers.go:580]     Audit-Id: ee94fb7d-79e8-4148-ba20-8ae48be4871e
	I0116 03:07:47.308506  491150 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:07:47.308514  491150 round_trippers.go:580]     Content-Type: application/json
	I0116 03:07:47.308522  491150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 03:07:47.308531  491150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 03:07:47.308699  491150 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-405494","namespace":"kube-system","uid":"0833b412-8909-4660-8e16-19701683358e","resourceVersion":"880","creationTimestamp":"2024-01-16T02:57:12Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"9eb78063d6e219f3cc5940494bdab4b2","kubernetes.io/config.mirror":"9eb78063d6e219f3cc5940494bdab4b2","kubernetes.io/config.seen":"2024-01-16T02:57:11.711589408Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-405494","uid":"396eb6bf-1dc3-46c5-8016-dbf8af754fa2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:57:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6950 chars]
	I0116 03:07:47.309051  491150 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/multinode-405494
	I0116 03:07:47.309061  491150 round_trippers.go:469] Request Headers:
	I0116 03:07:47.309068  491150 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:07:47.309074  491150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 03:07:47.310925  491150 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0116 03:07:47.310943  491150 round_trippers.go:577] Response Headers:
	I0116 03:07:47.310951  491150 round_trippers.go:580]     Audit-Id: 0c7c7765-e951-4032-ac74-66960f607bed
	I0116 03:07:47.310956  491150 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:07:47.310961  491150 round_trippers.go:580]     Content-Type: application/json
	I0116 03:07:47.310966  491150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 03:07:47.310971  491150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 03:07:47.310980  491150 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:07:47 GMT
	I0116 03:07:47.311208  491150 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-405494","uid":"396eb6bf-1dc3-46c5-8016-dbf8af754fa2","resourceVersion":"861","creationTimestamp":"2024-01-16T02:57:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-405494","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-405494","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_57_12_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:57:07Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0116 03:07:47.311470  491150 pod_ready.go:92] pod "kube-controller-manager-multinode-405494" in "kube-system" namespace has status "Ready":"True"
	I0116 03:07:47.311484  491150 pod_ready.go:81] duration metric: took 4.954593ms waiting for pod "kube-controller-manager-multinode-405494" in "kube-system" namespace to be "Ready" ...
	I0116 03:07:47.311494  491150 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-gg8kv" in "kube-system" namespace to be "Ready" ...
	I0116 03:07:47.311539  491150 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gg8kv
	I0116 03:07:47.311548  491150 round_trippers.go:469] Request Headers:
	I0116 03:07:47.311555  491150 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:07:47.311560  491150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 03:07:47.314011  491150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:07:47.314030  491150 round_trippers.go:577] Response Headers:
	I0116 03:07:47.314038  491150 round_trippers.go:580]     Audit-Id: a7775430-8ccc-4f06-b6b6-67a0ab181055
	I0116 03:07:47.314046  491150 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:07:47.314054  491150 round_trippers.go:580]     Content-Type: application/json
	I0116 03:07:47.314062  491150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 03:07:47.314069  491150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 03:07:47.314077  491150 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:07:47 GMT
	I0116 03:07:47.314226  491150 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-gg8kv","generateName":"kube-proxy-","namespace":"kube-system","uid":"32841b88-1b06-46ed-b4ce-f73301ec0a85","resourceVersion":"838","creationTimestamp":"2024-01-16T02:57:23Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"fdad84fd-2af6-4360-a899-0f70f257935c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:57:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fdad84fd-2af6-4360-a899-0f70f257935c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5514 chars]
	I0116 03:07:47.314601  491150 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/multinode-405494
	I0116 03:07:47.314613  491150 round_trippers.go:469] Request Headers:
	I0116 03:07:47.314621  491150 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:07:47.314626  491150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 03:07:47.316524  491150 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0116 03:07:47.316540  491150 round_trippers.go:577] Response Headers:
	I0116 03:07:47.316549  491150 round_trippers.go:580]     Audit-Id: b8bcf82f-b7cb-40a9-aaa2-0c6e097260ad
	I0116 03:07:47.316557  491150 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:07:47.316565  491150 round_trippers.go:580]     Content-Type: application/json
	I0116 03:07:47.316573  491150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 03:07:47.316582  491150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 03:07:47.316590  491150 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:07:47 GMT
	I0116 03:07:47.316768  491150 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-405494","uid":"396eb6bf-1dc3-46c5-8016-dbf8af754fa2","resourceVersion":"861","creationTimestamp":"2024-01-16T02:57:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-405494","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-405494","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_57_12_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:57:07Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0116 03:07:47.317035  491150 pod_ready.go:92] pod "kube-proxy-gg8kv" in "kube-system" namespace has status "Ready":"True"
	I0116 03:07:47.317051  491150 pod_ready.go:81] duration metric: took 5.551211ms waiting for pod "kube-proxy-gg8kv" in "kube-system" namespace to be "Ready" ...
	I0116 03:07:47.317059  491150 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-ghscp" in "kube-system" namespace to be "Ready" ...
	I0116 03:07:47.487503  491150 request.go:629] Waited for 170.357967ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ghscp
	I0116 03:07:47.487571  491150 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ghscp
	I0116 03:07:47.487576  491150 round_trippers.go:469] Request Headers:
	I0116 03:07:47.487583  491150 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:07:47.487593  491150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 03:07:47.490474  491150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:07:47.490494  491150 round_trippers.go:577] Response Headers:
	I0116 03:07:47.490513  491150 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:07:47 GMT
	I0116 03:07:47.490521  491150 round_trippers.go:580]     Audit-Id: 47bba025-16b0-483c-94a7-06cc4a7ad6f0
	I0116 03:07:47.490530  491150 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:07:47.490539  491150 round_trippers.go:580]     Content-Type: application/json
	I0116 03:07:47.490547  491150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 03:07:47.490554  491150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 03:07:47.490737  491150 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-ghscp","generateName":"kube-proxy-","namespace":"kube-system","uid":"62b6191a-df8d-444d-9176-3f265fd2084d","resourceVersion":"708","creationTimestamp":"2024-01-16T02:58:49Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"fdad84fd-2af6-4360-a899-0f70f257935c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:58:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fdad84fd-2af6-4360-a899-0f70f257935c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5526 chars]
	I0116 03:07:47.687536  491150 request.go:629] Waited for 196.36651ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.70:8443/api/v1/nodes/multinode-405494-m03
	I0116 03:07:47.687615  491150 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/multinode-405494-m03
	I0116 03:07:47.687620  491150 round_trippers.go:469] Request Headers:
	I0116 03:07:47.687629  491150 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:07:47.687641  491150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 03:07:47.690429  491150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:07:47.690456  491150 round_trippers.go:577] Response Headers:
	I0116 03:07:47.690467  491150 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:07:47.690473  491150 round_trippers.go:580]     Content-Type: application/json
	I0116 03:07:47.690478  491150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 03:07:47.690484  491150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 03:07:47.690489  491150 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:07:47 GMT
	I0116 03:07:47.690494  491150 round_trippers.go:580]     Audit-Id: 6d6b1c87-1ef1-4247-b931-963a151fcf49
	I0116 03:07:47.690669  491150 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-405494-m03","uid":"f017bb37-2198-45f8-8920-a0a10585c3e0","resourceVersion":"869","creationTimestamp":"2024-01-16T02:59:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-405494-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-405494","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T02_59_33_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:59:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 3965 chars]
	I0116 03:07:47.690948  491150 pod_ready.go:92] pod "kube-proxy-ghscp" in "kube-system" namespace has status "Ready":"True"
	I0116 03:07:47.690963  491150 pod_ready.go:81] duration metric: took 373.898543ms waiting for pod "kube-proxy-ghscp" in "kube-system" namespace to be "Ready" ...
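
The "request.go:629] Waited for ... due to client-side throttling, not priority and fairness" lines above come from client-go's local rate limiter, not from API Priority and Fairness on the server: once the client's QPS/Burst budget is spent, further requests are delayed before they are even sent. A minimal sketch of where that budget lives on a client-go rest.Config; the kubeconfig path and the QPS/Burst numbers are illustrative assumptions, not what minikube necessarily sets:

    package main

    import (
        "log"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Hypothetical kubeconfig path; minikube manages its own profile contexts.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
        if err != nil {
            log.Fatal(err)
        }

        // client-go throttles locally once QPS/Burst are exceeded, which is what
        // produces the "Waited for ... due to client-side throttling" messages.
        // Raising these (illustrative values) reduces that local waiting.
        cfg.QPS = 50
        cfg.Burst = 100

        clientset, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        _ = clientset
    }
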
	I0116 03:07:47.690972  491150 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-m46rb" in "kube-system" namespace to be "Ready" ...
	I0116 03:07:47.887081  491150 request.go:629] Waited for 196.010206ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/kube-proxy-m46rb
	I0116 03:07:47.887148  491150 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/kube-proxy-m46rb
	I0116 03:07:47.887153  491150 round_trippers.go:469] Request Headers:
	I0116 03:07:47.887161  491150 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:07:47.887168  491150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 03:07:47.890664  491150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 03:07:47.890688  491150 round_trippers.go:577] Response Headers:
	I0116 03:07:47.890695  491150 round_trippers.go:580]     Audit-Id: b6b047e4-d6c5-46fe-b0c6-136c6ef562d5
	I0116 03:07:47.890701  491150 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:07:47.890706  491150 round_trippers.go:580]     Content-Type: application/json
	I0116 03:07:47.890711  491150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 03:07:47.890724  491150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 03:07:47.890729  491150 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:07:47 GMT
	I0116 03:07:47.891451  491150 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-m46rb","generateName":"kube-proxy-","namespace":"kube-system","uid":"960fb4d4-836f-42c5-9d56-03daae9f5a12","resourceVersion":"501","creationTimestamp":"2024-01-16T02:58:02Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"fdad84fd-2af6-4360-a899-0f70f257935c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:58:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fdad84fd-2af6-4360-a899-0f70f257935c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5522 chars]
	I0116 03:07:48.087295  491150 request.go:629] Waited for 195.385628ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.70:8443/api/v1/nodes/multinode-405494-m02
	I0116 03:07:48.087360  491150 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/multinode-405494-m02
	I0116 03:07:48.087365  491150 round_trippers.go:469] Request Headers:
	I0116 03:07:48.087373  491150 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:07:48.087379  491150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 03:07:48.090742  491150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 03:07:48.090763  491150 round_trippers.go:577] Response Headers:
	I0116 03:07:48.090771  491150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 03:07:48.090776  491150 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:07:48 GMT
	I0116 03:07:48.090781  491150 round_trippers.go:580]     Audit-Id: db1960cf-df5b-47ad-80b9-ea9e05bca07a
	I0116 03:07:48.090786  491150 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:07:48.090791  491150 round_trippers.go:580]     Content-Type: application/json
	I0116 03:07:48.090796  491150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 03:07:48.091270  491150 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-405494-m02","uid":"db05f602-4c14-49d7-93c1-517732722bbd","resourceVersion":"853","creationTimestamp":"2024-01-16T02:58:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-405494-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-405494","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T02_59_33_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:58:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 4235 chars]
	I0116 03:07:48.091585  491150 pod_ready.go:92] pod "kube-proxy-m46rb" in "kube-system" namespace has status "Ready":"True"
	I0116 03:07:48.091606  491150 pod_ready.go:81] duration metric: took 400.626845ms waiting for pod "kube-proxy-m46rb" in "kube-system" namespace to be "Ready" ...
	I0116 03:07:48.091631  491150 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-405494" in "kube-system" namespace to be "Ready" ...
	I0116 03:07:48.287735  491150 request.go:629] Waited for 195.982133ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-405494
	I0116 03:07:48.287798  491150 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-405494
	I0116 03:07:48.287809  491150 round_trippers.go:469] Request Headers:
	I0116 03:07:48.287822  491150 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:07:48.287834  491150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 03:07:48.290489  491150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:07:48.290515  491150 round_trippers.go:577] Response Headers:
	I0116 03:07:48.290526  491150 round_trippers.go:580]     Audit-Id: 9c006cb4-2959-4614-aa80-19c77daeaa73
	I0116 03:07:48.290532  491150 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:07:48.290537  491150 round_trippers.go:580]     Content-Type: application/json
	I0116 03:07:48.290542  491150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 03:07:48.290547  491150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 03:07:48.290552  491150 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:07:48 GMT
	I0116 03:07:48.290746  491150 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-405494","namespace":"kube-system","uid":"70c980cb-4ff9-45f5-960f-d8afa355229c","resourceVersion":"884","creationTimestamp":"2024-01-16T02:57:11Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"65069d20830c0b10a3d28746871e48c2","kubernetes.io/config.mirror":"65069d20830c0b10a3d28746871e48c2","kubernetes.io/config.seen":"2024-01-16T02:57:02.078604553Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-405494","uid":"396eb6bf-1dc3-46c5-8016-dbf8af754fa2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:57:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4680 chars]
	I0116 03:07:48.487573  491150 request.go:629] Waited for 196.411233ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.70:8443/api/v1/nodes/multinode-405494
	I0116 03:07:48.487673  491150 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/multinode-405494
	I0116 03:07:48.487685  491150 round_trippers.go:469] Request Headers:
	I0116 03:07:48.487723  491150 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:07:48.487735  491150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 03:07:48.490605  491150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:07:48.490637  491150 round_trippers.go:577] Response Headers:
	I0116 03:07:48.490647  491150 round_trippers.go:580]     Audit-Id: ee42d968-08ed-4b0d-9be0-aa156368a00a
	I0116 03:07:48.490660  491150 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:07:48.490668  491150 round_trippers.go:580]     Content-Type: application/json
	I0116 03:07:48.490676  491150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 03:07:48.490686  491150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 03:07:48.490699  491150 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:07:48 GMT
	I0116 03:07:48.490840  491150 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-405494","uid":"396eb6bf-1dc3-46c5-8016-dbf8af754fa2","resourceVersion":"861","creationTimestamp":"2024-01-16T02:57:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-405494","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-405494","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_57_12_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:57:07Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0116 03:07:48.491199  491150 pod_ready.go:92] pod "kube-scheduler-multinode-405494" in "kube-system" namespace has status "Ready":"True"
	I0116 03:07:48.491223  491150 pod_ready.go:81] duration metric: took 399.57914ms waiting for pod "kube-scheduler-multinode-405494" in "kube-system" namespace to be "Ready" ...
	I0116 03:07:48.491238  491150 pod_ready.go:38] duration metric: took 9.212842384s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
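
The pod_ready.go lines above repeat one pattern for every control-plane pod: fetch the pod, inspect its Ready condition, fetch the node it runs on, and retry until the condition flips to True or the timeout expires (visible as the 102 "Ready":"False" lines followed by a 92 "Ready":"True" line). A rough sketch of that loop with client-go, assuming an illustrative kubeconfig path and guessed interval/timeout values:

    package main

    import (
        "context"
        "fmt"
        "log"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitPodReady polls a pod until its Ready condition is True, roughly the
    // pattern logged by pod_ready.go above (interval and timeout are guesses,
    // not minikube's actual values).
    func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
        return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
            func(ctx context.Context) (bool, error) {
                pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
                if err != nil {
                    return false, nil // transient API errors: keep polling
                }
                for _, c := range pod.Status.Conditions {
                    if c.Type == corev1.PodReady {
                        fmt.Printf("pod %q Ready=%s\n", name, c.Status)
                        return c.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
    }

    func main() {
        // Hypothetical kubeconfig path; pod name taken from the log above.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        if err := waitPodReady(context.Background(), cs, "kube-system", "coredns-5dd5756b68-vwqvk", 6*time.Minute); err != nil {
            log.Fatal(err)
        }
    }
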
	I0116 03:07:48.491263  491150 api_server.go:52] waiting for apiserver process to appear ...
	I0116 03:07:48.491342  491150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:07:48.507522  491150 command_runner.go:130] > 1072
	I0116 03:07:48.507726  491150 api_server.go:72] duration metric: took 14.844123221s to wait for apiserver process to appear ...
	I0116 03:07:48.507754  491150 api_server.go:88] waiting for apiserver healthz status ...
	I0116 03:07:48.507782  491150 api_server.go:253] Checking apiserver healthz at https://192.168.39.70:8443/healthz ...
	I0116 03:07:48.513238  491150 api_server.go:279] https://192.168.39.70:8443/healthz returned 200:
	ok
	I0116 03:07:48.513335  491150 round_trippers.go:463] GET https://192.168.39.70:8443/version
	I0116 03:07:48.513347  491150 round_trippers.go:469] Request Headers:
	I0116 03:07:48.513358  491150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 03:07:48.513371  491150 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:07:48.514595  491150 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0116 03:07:48.514621  491150 round_trippers.go:577] Response Headers:
	I0116 03:07:48.514628  491150 round_trippers.go:580]     Content-Length: 264
	I0116 03:07:48.514634  491150 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:07:48 GMT
	I0116 03:07:48.514639  491150 round_trippers.go:580]     Audit-Id: ecbdddd8-4383-42bc-9aaf-921f1797749f
	I0116 03:07:48.514647  491150 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:07:48.514656  491150 round_trippers.go:580]     Content-Type: application/json
	I0116 03:07:48.514665  491150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 03:07:48.514673  491150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 03:07:48.514697  491150 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0116 03:07:48.514744  491150 api_server.go:141] control plane version: v1.28.4
	I0116 03:07:48.514765  491150 api_server.go:131] duration metric: took 7.004328ms to wait for apiserver health ...
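
After the pods report Ready, the log probes https://192.168.39.70:8443/healthz and /version directly. Both endpoints are normally readable without resource-level permissions (the built-in system:public-info-viewer role covers them on a default kubeadm-style cluster), so a plain HTTPS GET reproduces the check. A small sketch, skipping TLS verification only because this targets a throwaway local test VM:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "log"
        "net/http"
    )

    func main() {
        client := &http.Client{Transport: &http.Transport{
            // The test VM's apiserver cert is signed by minikube's local CA;
            // verification is skipped here purely for illustration.
            TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
        }}

        for _, path := range []string{"/healthz", "/version"} {
            resp, err := client.Get("https://192.168.39.70:8443" + path)
            if err != nil {
                log.Fatal(err)
            }
            body, _ := io.ReadAll(resp.Body)
            resp.Body.Close()
            fmt.Printf("%s -> %s\n%s\n", path, resp.Status, body)
        }
    }
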
	I0116 03:07:48.514776  491150 system_pods.go:43] waiting for kube-system pods to appear ...
	I0116 03:07:48.687260  491150 request.go:629] Waited for 172.396282ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods
	I0116 03:07:48.687349  491150 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods
	I0116 03:07:48.687355  491150 round_trippers.go:469] Request Headers:
	I0116 03:07:48.687366  491150 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:07:48.687388  491150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 03:07:48.692540  491150 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0116 03:07:48.692577  491150 round_trippers.go:577] Response Headers:
	I0116 03:07:48.692589  491150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 03:07:48.692596  491150 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:07:48 GMT
	I0116 03:07:48.692603  491150 round_trippers.go:580]     Audit-Id: 22c8b789-6bdc-4942-a93e-eb524af38034
	I0116 03:07:48.692610  491150 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:07:48.692618  491150 round_trippers.go:580]     Content-Type: application/json
	I0116 03:07:48.692626  491150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 03:07:48.693276  491150 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"902"},"items":[{"metadata":{"name":"coredns-5dd5756b68-vwqvk","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"096151e2-c59c-4dcf-bd29-2029901902c9","resourceVersion":"892","creationTimestamp":"2024-01-16T02:57:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c3967b3a-de7c-4ccd-af3a-dd2a9e8b71f8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:57:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c3967b3a-de7c-4ccd-af3a-dd2a9e8b71f8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 82454 chars]
	I0116 03:07:48.695839  491150 system_pods.go:59] 12 kube-system pods found
	I0116 03:07:48.695868  491150 system_pods.go:61] "coredns-5dd5756b68-vwqvk" [096151e2-c59c-4dcf-bd29-2029901902c9] Running
	I0116 03:07:48.695875  491150 system_pods.go:61] "etcd-multinode-405494" [3f839da7-c0c0-4546-8848-1557cbf50722] Running
	I0116 03:07:48.695885  491150 system_pods.go:61] "kindnet-6zhtt" [cb3b1d86-ad5f-404c-84f7-f51f255843fc] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0116 03:07:48.695893  491150 system_pods.go:61] "kindnet-8t86n" [4d421823-26dd-467d-94d4-28387c8e3793] Running
	I0116 03:07:48.695901  491150 system_pods.go:61] "kindnet-ddd2h" [9a8dfd54-cf69-402a-9233-af3a696abaa0] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0116 03:07:48.695907  491150 system_pods.go:61] "kube-apiserver-multinode-405494" [e242d3cf-6cf7-4b47-8d3e-a83e484554a1] Running
	I0116 03:07:48.695916  491150 system_pods.go:61] "kube-controller-manager-multinode-405494" [0833b412-8909-4660-8e16-19701683358e] Running
	I0116 03:07:48.695925  491150 system_pods.go:61] "kube-proxy-gg8kv" [32841b88-1b06-46ed-b4ce-f73301ec0a85] Running
	I0116 03:07:48.695935  491150 system_pods.go:61] "kube-proxy-ghscp" [62b6191a-df8d-444d-9176-3f265fd2084d] Running
	I0116 03:07:48.695941  491150 system_pods.go:61] "kube-proxy-m46rb" [960fb4d4-836f-42c5-9d56-03daae9f5a12] Running
	I0116 03:07:48.695949  491150 system_pods.go:61] "kube-scheduler-multinode-405494" [70c980cb-4ff9-45f5-960f-d8afa355229c] Running
	I0116 03:07:48.695961  491150 system_pods.go:61] "storage-provisioner" [c6f12cfa-46b3-4840-a7e2-258c063a19c2] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0116 03:07:48.695972  491150 system_pods.go:74] duration metric: took 181.185501ms to wait for pod list to return data ...
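
The system_pods.go summary above (12 kube-system pods, with kindnet and storage-provisioner still reporting unready containers) comes from a single pod list in the kube-system namespace. A minimal client-go sketch of the same listing, with the kubeconfig path again an assumed placeholder:

    package main

    import (
        "context"
        "fmt"
        "log"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Hypothetical kubeconfig path; the test harness uses its own profile.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }

        pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
        if err != nil {
            log.Fatal(err)
        }
        fmt.Printf("%d kube-system pods found\n", len(pods.Items))
        for _, p := range pods.Items {
            fmt.Printf("  %s: %s\n", p.Name, p.Status.Phase)
        }
    }
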
	I0116 03:07:48.695993  491150 default_sa.go:34] waiting for default service account to be created ...
	I0116 03:07:48.887442  491150 request.go:629] Waited for 191.286191ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.70:8443/api/v1/namespaces/default/serviceaccounts
	I0116 03:07:48.887506  491150 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/default/serviceaccounts
	I0116 03:07:48.887511  491150 round_trippers.go:469] Request Headers:
	I0116 03:07:48.887524  491150 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:07:48.887533  491150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 03:07:48.894556  491150 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0116 03:07:48.894588  491150 round_trippers.go:577] Response Headers:
	I0116 03:07:48.894601  491150 round_trippers.go:580]     Content-Type: application/json
	I0116 03:07:48.894621  491150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 03:07:48.894629  491150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 03:07:48.894636  491150 round_trippers.go:580]     Content-Length: 261
	I0116 03:07:48.894644  491150 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:07:48 GMT
	I0116 03:07:48.894653  491150 round_trippers.go:580]     Audit-Id: 83b4c721-98ee-46d2-a71d-3ec91fa9f1dd
	I0116 03:07:48.894665  491150 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:07:48.894711  491150 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"904"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"0b22b347-fa63-4799-be1d-4dd899f85a07","resourceVersion":"341","creationTimestamp":"2024-01-16T02:57:23Z"}}]}
	I0116 03:07:48.894916  491150 default_sa.go:45] found service account: "default"
	I0116 03:07:48.894941  491150 default_sa.go:55] duration metric: took 198.940263ms for default service account to be created ...
	I0116 03:07:48.894955  491150 system_pods.go:116] waiting for k8s-apps to be running ...
	I0116 03:07:49.087440  491150 request.go:629] Waited for 192.407765ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods
	I0116 03:07:49.087519  491150 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods
	I0116 03:07:49.087525  491150 round_trippers.go:469] Request Headers:
	I0116 03:07:49.087533  491150 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:07:49.087541  491150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 03:07:49.092320  491150 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0116 03:07:49.092348  491150 round_trippers.go:577] Response Headers:
	I0116 03:07:49.092359  491150 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:07:49 GMT
	I0116 03:07:49.092368  491150 round_trippers.go:580]     Audit-Id: 4b474852-68b1-44fa-b46e-bbc842366d43
	I0116 03:07:49.092375  491150 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:07:49.092382  491150 round_trippers.go:580]     Content-Type: application/json
	I0116 03:07:49.092389  491150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 03:07:49.092396  491150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 03:07:49.093566  491150 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"906"},"items":[{"metadata":{"name":"coredns-5dd5756b68-vwqvk","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"096151e2-c59c-4dcf-bd29-2029901902c9","resourceVersion":"892","creationTimestamp":"2024-01-16T02:57:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c3967b3a-de7c-4ccd-af3a-dd2a9e8b71f8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:57:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c3967b3a-de7c-4ccd-af3a-dd2a9e8b71f8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 82436 chars]
	I0116 03:07:49.096102  491150 system_pods.go:86] 12 kube-system pods found
	I0116 03:07:49.096146  491150 system_pods.go:89] "coredns-5dd5756b68-vwqvk" [096151e2-c59c-4dcf-bd29-2029901902c9] Running
	I0116 03:07:49.096154  491150 system_pods.go:89] "etcd-multinode-405494" [3f839da7-c0c0-4546-8848-1557cbf50722] Running
	I0116 03:07:49.096163  491150 system_pods.go:89] "kindnet-6zhtt" [cb3b1d86-ad5f-404c-84f7-f51f255843fc] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0116 03:07:49.096172  491150 system_pods.go:89] "kindnet-8t86n" [4d421823-26dd-467d-94d4-28387c8e3793] Running
	I0116 03:07:49.096181  491150 system_pods.go:89] "kindnet-ddd2h" [9a8dfd54-cf69-402a-9233-af3a696abaa0] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0116 03:07:49.096192  491150 system_pods.go:89] "kube-apiserver-multinode-405494" [e242d3cf-6cf7-4b47-8d3e-a83e484554a1] Running
	I0116 03:07:49.096206  491150 system_pods.go:89] "kube-controller-manager-multinode-405494" [0833b412-8909-4660-8e16-19701683358e] Running
	I0116 03:07:49.096215  491150 system_pods.go:89] "kube-proxy-gg8kv" [32841b88-1b06-46ed-b4ce-f73301ec0a85] Running
	I0116 03:07:49.096225  491150 system_pods.go:89] "kube-proxy-ghscp" [62b6191a-df8d-444d-9176-3f265fd2084d] Running
	I0116 03:07:49.096235  491150 system_pods.go:89] "kube-proxy-m46rb" [960fb4d4-836f-42c5-9d56-03daae9f5a12] Running
	I0116 03:07:49.096245  491150 system_pods.go:89] "kube-scheduler-multinode-405494" [70c980cb-4ff9-45f5-960f-d8afa355229c] Running
	I0116 03:07:49.096258  491150 system_pods.go:89] "storage-provisioner" [c6f12cfa-46b3-4840-a7e2-258c063a19c2] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0116 03:07:49.096273  491150 system_pods.go:126] duration metric: took 201.310114ms to wait for k8s-apps to be running ...
	I0116 03:07:49.096284  491150 system_svc.go:44] waiting for kubelet service to be running ....
	I0116 03:07:49.096337  491150 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 03:07:49.113911  491150 system_svc.go:56] duration metric: took 17.609875ms WaitForService to wait for kubelet.
	I0116 03:07:49.113943  491150 kubeadm.go:581] duration metric: took 15.450347205s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0116 03:07:49.113967  491150 node_conditions.go:102] verifying NodePressure condition ...
	I0116 03:07:49.287392  491150 request.go:629] Waited for 173.343405ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.70:8443/api/v1/nodes
	I0116 03:07:49.287470  491150 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes
	I0116 03:07:49.287475  491150 round_trippers.go:469] Request Headers:
	I0116 03:07:49.287483  491150 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:07:49.287493  491150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 03:07:49.291170  491150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 03:07:49.291198  491150 round_trippers.go:577] Response Headers:
	I0116 03:07:49.291208  491150 round_trippers.go:580]     Audit-Id: 58a684da-0235-46ef-960f-7198e247c1c3
	I0116 03:07:49.291217  491150 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:07:49.291225  491150 round_trippers.go:580]     Content-Type: application/json
	I0116 03:07:49.291236  491150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 03:07:49.291252  491150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 03:07:49.291259  491150 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:07:49 GMT
	I0116 03:07:49.291513  491150 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"907"},"items":[{"metadata":{"name":"multinode-405494","uid":"396eb6bf-1dc3-46c5-8016-dbf8af754fa2","resourceVersion":"861","creationTimestamp":"2024-01-16T02:57:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-405494","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-405494","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_57_12_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 16178 chars]
	I0116 03:07:49.292303  491150 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 03:07:49.292328  491150 node_conditions.go:123] node cpu capacity is 2
	I0116 03:07:49.292340  491150 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 03:07:49.292344  491150 node_conditions.go:123] node cpu capacity is 2
	I0116 03:07:49.292350  491150 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 03:07:49.292354  491150 node_conditions.go:123] node cpu capacity is 2
	I0116 03:07:49.292358  491150 node_conditions.go:105] duration metric: took 178.38617ms to run NodePressure ...
	I0116 03:07:49.292370  491150 start.go:228] waiting for startup goroutines ...
	I0116 03:07:49.292380  491150 start.go:233] waiting for cluster config update ...
	I0116 03:07:49.292387  491150 start.go:242] writing updated cluster config ...
	I0116 03:07:49.292867  491150 config.go:182] Loaded profile config "multinode-405494": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 03:07:49.292947  491150 profile.go:148] Saving config to /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/multinode-405494/config.json ...
	I0116 03:07:49.295785  491150 out.go:177] * Starting worker node multinode-405494-m02 in cluster multinode-405494
	I0116 03:07:49.297367  491150 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0116 03:07:49.297398  491150 cache.go:56] Caching tarball of preloaded images
	I0116 03:07:49.297517  491150 preload.go:174] Found /home/jenkins/minikube-integration/17965-468241/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0116 03:07:49.297534  491150 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0116 03:07:49.297667  491150 profile.go:148] Saving config to /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/multinode-405494/config.json ...
	I0116 03:07:49.297894  491150 start.go:365] acquiring machines lock for multinode-405494-m02: {Name:mk901e5fae8c1a578d989b520053c709bdbdcf06 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0116 03:07:49.297967  491150 start.go:369] acquired machines lock for "multinode-405494-m02" in 38.629µs
	I0116 03:07:49.297991  491150 start.go:96] Skipping create...Using existing machine configuration
	I0116 03:07:49.297999  491150 fix.go:54] fixHost starting: m02
	I0116 03:07:49.298316  491150 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:07:49.298354  491150 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:07:49.313258  491150 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45735
	I0116 03:07:49.313761  491150 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:07:49.314315  491150 main.go:141] libmachine: Using API Version  1
	I0116 03:07:49.314342  491150 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:07:49.314653  491150 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:07:49.314854  491150 main.go:141] libmachine: (multinode-405494-m02) Calling .DriverName
	I0116 03:07:49.314996  491150 main.go:141] libmachine: (multinode-405494-m02) Calling .GetState
	I0116 03:07:49.316756  491150 fix.go:102] recreateIfNeeded on multinode-405494-m02: state=Running err=<nil>
	W0116 03:07:49.316776  491150 fix.go:128] unexpected machine state, will restart: <nil>
	I0116 03:07:49.319070  491150 out.go:177] * Updating the running kvm2 "multinode-405494-m02" VM ...
	I0116 03:07:49.320831  491150 machine.go:88] provisioning docker machine ...
	I0116 03:07:49.320870  491150 main.go:141] libmachine: (multinode-405494-m02) Calling .DriverName
	I0116 03:07:49.321157  491150 main.go:141] libmachine: (multinode-405494-m02) Calling .GetMachineName
	I0116 03:07:49.321368  491150 buildroot.go:166] provisioning hostname "multinode-405494-m02"
	I0116 03:07:49.321393  491150 main.go:141] libmachine: (multinode-405494-m02) Calling .GetMachineName
	I0116 03:07:49.321533  491150 main.go:141] libmachine: (multinode-405494-m02) Calling .GetSSHHostname
	I0116 03:07:49.324392  491150 main.go:141] libmachine: (multinode-405494-m02) DBG | domain multinode-405494-m02 has defined MAC address 52:54:00:3c:08:8b in network mk-multinode-405494
	I0116 03:07:49.324982  491150 main.go:141] libmachine: (multinode-405494-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:08:8b", ip: ""} in network mk-multinode-405494: {Iface:virbr1 ExpiryTime:2024-01-16 03:57:47 +0000 UTC Type:0 Mac:52:54:00:3c:08:8b Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:multinode-405494-m02 Clientid:01:52:54:00:3c:08:8b}
	I0116 03:07:49.325013  491150 main.go:141] libmachine: (multinode-405494-m02) DBG | domain multinode-405494-m02 has defined IP address 192.168.39.32 and MAC address 52:54:00:3c:08:8b in network mk-multinode-405494
	I0116 03:07:49.325229  491150 main.go:141] libmachine: (multinode-405494-m02) Calling .GetSSHPort
	I0116 03:07:49.325423  491150 main.go:141] libmachine: (multinode-405494-m02) Calling .GetSSHKeyPath
	I0116 03:07:49.325573  491150 main.go:141] libmachine: (multinode-405494-m02) Calling .GetSSHKeyPath
	I0116 03:07:49.325699  491150 main.go:141] libmachine: (multinode-405494-m02) Calling .GetSSHUsername
	I0116 03:07:49.325892  491150 main.go:141] libmachine: Using SSH client type: native
	I0116 03:07:49.326229  491150 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.32 22 <nil> <nil>}
	I0116 03:07:49.326245  491150 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-405494-m02 && echo "multinode-405494-m02" | sudo tee /etc/hostname
	I0116 03:07:49.458957  491150 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-405494-m02
	
	I0116 03:07:49.458996  491150 main.go:141] libmachine: (multinode-405494-m02) Calling .GetSSHHostname
	I0116 03:07:49.462326  491150 main.go:141] libmachine: (multinode-405494-m02) DBG | domain multinode-405494-m02 has defined MAC address 52:54:00:3c:08:8b in network mk-multinode-405494
	I0116 03:07:49.462692  491150 main.go:141] libmachine: (multinode-405494-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:08:8b", ip: ""} in network mk-multinode-405494: {Iface:virbr1 ExpiryTime:2024-01-16 03:57:47 +0000 UTC Type:0 Mac:52:54:00:3c:08:8b Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:multinode-405494-m02 Clientid:01:52:54:00:3c:08:8b}
	I0116 03:07:49.462722  491150 main.go:141] libmachine: (multinode-405494-m02) DBG | domain multinode-405494-m02 has defined IP address 192.168.39.32 and MAC address 52:54:00:3c:08:8b in network mk-multinode-405494
	I0116 03:07:49.462924  491150 main.go:141] libmachine: (multinode-405494-m02) Calling .GetSSHPort
	I0116 03:07:49.463216  491150 main.go:141] libmachine: (multinode-405494-m02) Calling .GetSSHKeyPath
	I0116 03:07:49.463435  491150 main.go:141] libmachine: (multinode-405494-m02) Calling .GetSSHKeyPath
	I0116 03:07:49.463582  491150 main.go:141] libmachine: (multinode-405494-m02) Calling .GetSSHUsername
	I0116 03:07:49.463764  491150 main.go:141] libmachine: Using SSH client type: native
	I0116 03:07:49.464137  491150 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.32 22 <nil> <nil>}
	I0116 03:07:49.464163  491150 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-405494-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-405494-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-405494-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0116 03:07:49.582517  491150 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0116 03:07:49.582551  491150 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17965-468241/.minikube CaCertPath:/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17965-468241/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17965-468241/.minikube}
	I0116 03:07:49.582568  491150 buildroot.go:174] setting up certificates
	I0116 03:07:49.582581  491150 provision.go:83] configureAuth start
	I0116 03:07:49.582590  491150 main.go:141] libmachine: (multinode-405494-m02) Calling .GetMachineName
	I0116 03:07:49.582951  491150 main.go:141] libmachine: (multinode-405494-m02) Calling .GetIP
	I0116 03:07:49.585987  491150 main.go:141] libmachine: (multinode-405494-m02) DBG | domain multinode-405494-m02 has defined MAC address 52:54:00:3c:08:8b in network mk-multinode-405494
	I0116 03:07:49.586435  491150 main.go:141] libmachine: (multinode-405494-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:08:8b", ip: ""} in network mk-multinode-405494: {Iface:virbr1 ExpiryTime:2024-01-16 03:57:47 +0000 UTC Type:0 Mac:52:54:00:3c:08:8b Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:multinode-405494-m02 Clientid:01:52:54:00:3c:08:8b}
	I0116 03:07:49.586470  491150 main.go:141] libmachine: (multinode-405494-m02) DBG | domain multinode-405494-m02 has defined IP address 192.168.39.32 and MAC address 52:54:00:3c:08:8b in network mk-multinode-405494
	I0116 03:07:49.586614  491150 main.go:141] libmachine: (multinode-405494-m02) Calling .GetSSHHostname
	I0116 03:07:49.588946  491150 main.go:141] libmachine: (multinode-405494-m02) DBG | domain multinode-405494-m02 has defined MAC address 52:54:00:3c:08:8b in network mk-multinode-405494
	I0116 03:07:49.589322  491150 main.go:141] libmachine: (multinode-405494-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:08:8b", ip: ""} in network mk-multinode-405494: {Iface:virbr1 ExpiryTime:2024-01-16 03:57:47 +0000 UTC Type:0 Mac:52:54:00:3c:08:8b Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:multinode-405494-m02 Clientid:01:52:54:00:3c:08:8b}
	I0116 03:07:49.589362  491150 main.go:141] libmachine: (multinode-405494-m02) DBG | domain multinode-405494-m02 has defined IP address 192.168.39.32 and MAC address 52:54:00:3c:08:8b in network mk-multinode-405494
	I0116 03:07:49.589440  491150 provision.go:138] copyHostCerts
	I0116 03:07:49.589481  491150 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17965-468241/.minikube/cert.pem
	I0116 03:07:49.589534  491150 exec_runner.go:144] found /home/jenkins/minikube-integration/17965-468241/.minikube/cert.pem, removing ...
	I0116 03:07:49.589547  491150 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17965-468241/.minikube/cert.pem
	I0116 03:07:49.589645  491150 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17965-468241/.minikube/cert.pem (1123 bytes)
	I0116 03:07:49.589750  491150 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17965-468241/.minikube/key.pem
	I0116 03:07:49.589775  491150 exec_runner.go:144] found /home/jenkins/minikube-integration/17965-468241/.minikube/key.pem, removing ...
	I0116 03:07:49.589785  491150 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17965-468241/.minikube/key.pem
	I0116 03:07:49.589819  491150 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17965-468241/.minikube/key.pem (1679 bytes)
	I0116 03:07:49.589879  491150 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17965-468241/.minikube/ca.pem
	I0116 03:07:49.589899  491150 exec_runner.go:144] found /home/jenkins/minikube-integration/17965-468241/.minikube/ca.pem, removing ...
	I0116 03:07:49.589912  491150 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17965-468241/.minikube/ca.pem
	I0116 03:07:49.589955  491150 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17965-468241/.minikube/ca.pem (1078 bytes)
	I0116 03:07:49.590029  491150 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17965-468241/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca-key.pem org=jenkins.multinode-405494-m02 san=[192.168.39.32 192.168.39.32 localhost 127.0.0.1 minikube multinode-405494-m02]
	I0116 03:07:49.847909  491150 provision.go:172] copyRemoteCerts
	I0116 03:07:49.847985  491150 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0116 03:07:49.848014  491150 main.go:141] libmachine: (multinode-405494-m02) Calling .GetSSHHostname
	I0116 03:07:49.850635  491150 main.go:141] libmachine: (multinode-405494-m02) DBG | domain multinode-405494-m02 has defined MAC address 52:54:00:3c:08:8b in network mk-multinode-405494
	I0116 03:07:49.850934  491150 main.go:141] libmachine: (multinode-405494-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:08:8b", ip: ""} in network mk-multinode-405494: {Iface:virbr1 ExpiryTime:2024-01-16 03:57:47 +0000 UTC Type:0 Mac:52:54:00:3c:08:8b Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:multinode-405494-m02 Clientid:01:52:54:00:3c:08:8b}
	I0116 03:07:49.850971  491150 main.go:141] libmachine: (multinode-405494-m02) DBG | domain multinode-405494-m02 has defined IP address 192.168.39.32 and MAC address 52:54:00:3c:08:8b in network mk-multinode-405494
	I0116 03:07:49.851144  491150 main.go:141] libmachine: (multinode-405494-m02) Calling .GetSSHPort
	I0116 03:07:49.851399  491150 main.go:141] libmachine: (multinode-405494-m02) Calling .GetSSHKeyPath
	I0116 03:07:49.851711  491150 main.go:141] libmachine: (multinode-405494-m02) Calling .GetSSHUsername
	I0116 03:07:49.851929  491150 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/multinode-405494-m02/id_rsa Username:docker}
	I0116 03:07:49.937947  491150 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-468241/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0116 03:07:49.938035  491150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0116 03:07:49.963286  491150 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0116 03:07:49.963369  491150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0116 03:07:49.988977  491150 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-468241/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0116 03:07:49.989056  491150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0116 03:07:50.015741  491150 provision.go:86] duration metric: configureAuth took 433.146819ms
	I0116 03:07:50.015778  491150 buildroot.go:189] setting minikube options for container-runtime
	I0116 03:07:50.016059  491150 config.go:182] Loaded profile config "multinode-405494": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 03:07:50.016164  491150 main.go:141] libmachine: (multinode-405494-m02) Calling .GetSSHHostname
	I0116 03:07:50.019531  491150 main.go:141] libmachine: (multinode-405494-m02) DBG | domain multinode-405494-m02 has defined MAC address 52:54:00:3c:08:8b in network mk-multinode-405494
	I0116 03:07:50.019969  491150 main.go:141] libmachine: (multinode-405494-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:08:8b", ip: ""} in network mk-multinode-405494: {Iface:virbr1 ExpiryTime:2024-01-16 03:57:47 +0000 UTC Type:0 Mac:52:54:00:3c:08:8b Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:multinode-405494-m02 Clientid:01:52:54:00:3c:08:8b}
	I0116 03:07:50.019998  491150 main.go:141] libmachine: (multinode-405494-m02) DBG | domain multinode-405494-m02 has defined IP address 192.168.39.32 and MAC address 52:54:00:3c:08:8b in network mk-multinode-405494
	I0116 03:07:50.020291  491150 main.go:141] libmachine: (multinode-405494-m02) Calling .GetSSHPort
	I0116 03:07:50.020546  491150 main.go:141] libmachine: (multinode-405494-m02) Calling .GetSSHKeyPath
	I0116 03:07:50.020733  491150 main.go:141] libmachine: (multinode-405494-m02) Calling .GetSSHKeyPath
	I0116 03:07:50.020887  491150 main.go:141] libmachine: (multinode-405494-m02) Calling .GetSSHUsername
	I0116 03:07:50.021111  491150 main.go:141] libmachine: Using SSH client type: native
	I0116 03:07:50.021587  491150 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.32 22 <nil> <nil>}
	I0116 03:07:50.021619  491150 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0116 03:09:20.622401  491150 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0116 03:09:20.622429  491150 machine.go:91] provisioned docker machine in 1m31.30157609s
	I0116 03:09:20.622440  491150 start.go:300] post-start starting for "multinode-405494-m02" (driver="kvm2")
	I0116 03:09:20.622488  491150 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0116 03:09:20.622510  491150 main.go:141] libmachine: (multinode-405494-m02) Calling .DriverName
	I0116 03:09:20.622888  491150 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0116 03:09:20.622929  491150 main.go:141] libmachine: (multinode-405494-m02) Calling .GetSSHHostname
	I0116 03:09:20.625772  491150 main.go:141] libmachine: (multinode-405494-m02) DBG | domain multinode-405494-m02 has defined MAC address 52:54:00:3c:08:8b in network mk-multinode-405494
	I0116 03:09:20.626224  491150 main.go:141] libmachine: (multinode-405494-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:08:8b", ip: ""} in network mk-multinode-405494: {Iface:virbr1 ExpiryTime:2024-01-16 03:57:47 +0000 UTC Type:0 Mac:52:54:00:3c:08:8b Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:multinode-405494-m02 Clientid:01:52:54:00:3c:08:8b}
	I0116 03:09:20.626261  491150 main.go:141] libmachine: (multinode-405494-m02) DBG | domain multinode-405494-m02 has defined IP address 192.168.39.32 and MAC address 52:54:00:3c:08:8b in network mk-multinode-405494
	I0116 03:09:20.626421  491150 main.go:141] libmachine: (multinode-405494-m02) Calling .GetSSHPort
	I0116 03:09:20.626658  491150 main.go:141] libmachine: (multinode-405494-m02) Calling .GetSSHKeyPath
	I0116 03:09:20.626857  491150 main.go:141] libmachine: (multinode-405494-m02) Calling .GetSSHUsername
	I0116 03:09:20.627043  491150 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/multinode-405494-m02/id_rsa Username:docker}
	I0116 03:09:20.717759  491150 ssh_runner.go:195] Run: cat /etc/os-release
	I0116 03:09:20.722036  491150 command_runner.go:130] > NAME=Buildroot
	I0116 03:09:20.722058  491150 command_runner.go:130] > VERSION=2021.02.12-1-g19d536a-dirty
	I0116 03:09:20.722065  491150 command_runner.go:130] > ID=buildroot
	I0116 03:09:20.722074  491150 command_runner.go:130] > VERSION_ID=2021.02.12
	I0116 03:09:20.722082  491150 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0116 03:09:20.722181  491150 info.go:137] Remote host: Buildroot 2021.02.12
	I0116 03:09:20.722209  491150 filesync.go:126] Scanning /home/jenkins/minikube-integration/17965-468241/.minikube/addons for local assets ...
	I0116 03:09:20.722287  491150 filesync.go:126] Scanning /home/jenkins/minikube-integration/17965-468241/.minikube/files for local assets ...
	I0116 03:09:20.722379  491150 filesync.go:149] local asset: /home/jenkins/minikube-integration/17965-468241/.minikube/files/etc/ssl/certs/4754782.pem -> 4754782.pem in /etc/ssl/certs
	I0116 03:09:20.722392  491150 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-468241/.minikube/files/etc/ssl/certs/4754782.pem -> /etc/ssl/certs/4754782.pem
	I0116 03:09:20.722498  491150 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0116 03:09:20.731096  491150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/files/etc/ssl/certs/4754782.pem --> /etc/ssl/certs/4754782.pem (1708 bytes)
	I0116 03:09:20.756770  491150 start.go:303] post-start completed in 134.314334ms
	I0116 03:09:20.756795  491150 fix.go:56] fixHost completed within 1m31.458796649s
	I0116 03:09:20.756818  491150 main.go:141] libmachine: (multinode-405494-m02) Calling .GetSSHHostname
	I0116 03:09:20.759491  491150 main.go:141] libmachine: (multinode-405494-m02) DBG | domain multinode-405494-m02 has defined MAC address 52:54:00:3c:08:8b in network mk-multinode-405494
	I0116 03:09:20.759912  491150 main.go:141] libmachine: (multinode-405494-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:08:8b", ip: ""} in network mk-multinode-405494: {Iface:virbr1 ExpiryTime:2024-01-16 03:57:47 +0000 UTC Type:0 Mac:52:54:00:3c:08:8b Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:multinode-405494-m02 Clientid:01:52:54:00:3c:08:8b}
	I0116 03:09:20.759952  491150 main.go:141] libmachine: (multinode-405494-m02) DBG | domain multinode-405494-m02 has defined IP address 192.168.39.32 and MAC address 52:54:00:3c:08:8b in network mk-multinode-405494
	I0116 03:09:20.760114  491150 main.go:141] libmachine: (multinode-405494-m02) Calling .GetSSHPort
	I0116 03:09:20.760351  491150 main.go:141] libmachine: (multinode-405494-m02) Calling .GetSSHKeyPath
	I0116 03:09:20.760538  491150 main.go:141] libmachine: (multinode-405494-m02) Calling .GetSSHKeyPath
	I0116 03:09:20.760707  491150 main.go:141] libmachine: (multinode-405494-m02) Calling .GetSSHUsername
	I0116 03:09:20.760893  491150 main.go:141] libmachine: Using SSH client type: native
	I0116 03:09:20.761218  491150 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.32 22 <nil> <nil>}
	I0116 03:09:20.761230  491150 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0116 03:09:20.878708  491150 main.go:141] libmachine: SSH cmd err, output: <nil>: 1705374560.872952443
	
	I0116 03:09:20.878734  491150 fix.go:206] guest clock: 1705374560.872952443
	I0116 03:09:20.878744  491150 fix.go:219] Guest: 2024-01-16 03:09:20.872952443 +0000 UTC Remote: 2024-01-16 03:09:20.756799701 +0000 UTC m=+457.161950075 (delta=116.152742ms)
	I0116 03:09:20.878766  491150 fix.go:190] guest clock delta is within tolerance: 116.152742ms
	I0116 03:09:20.878773  491150 start.go:83] releasing machines lock for "multinode-405494-m02", held for 1m31.580790602s
	I0116 03:09:20.878804  491150 main.go:141] libmachine: (multinode-405494-m02) Calling .DriverName
	I0116 03:09:20.879111  491150 main.go:141] libmachine: (multinode-405494-m02) Calling .GetIP
	I0116 03:09:20.881780  491150 main.go:141] libmachine: (multinode-405494-m02) DBG | domain multinode-405494-m02 has defined MAC address 52:54:00:3c:08:8b in network mk-multinode-405494
	I0116 03:09:20.882092  491150 main.go:141] libmachine: (multinode-405494-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:08:8b", ip: ""} in network mk-multinode-405494: {Iface:virbr1 ExpiryTime:2024-01-16 03:57:47 +0000 UTC Type:0 Mac:52:54:00:3c:08:8b Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:multinode-405494-m02 Clientid:01:52:54:00:3c:08:8b}
	I0116 03:09:20.882127  491150 main.go:141] libmachine: (multinode-405494-m02) DBG | domain multinode-405494-m02 has defined IP address 192.168.39.32 and MAC address 52:54:00:3c:08:8b in network mk-multinode-405494
	I0116 03:09:20.884261  491150 out.go:177] * Found network options:
	I0116 03:09:20.885653  491150 out.go:177]   - NO_PROXY=192.168.39.70
	W0116 03:09:20.887009  491150 proxy.go:119] fail to check proxy env: Error ip not in block
	I0116 03:09:20.887066  491150 main.go:141] libmachine: (multinode-405494-m02) Calling .DriverName
	I0116 03:09:20.887885  491150 main.go:141] libmachine: (multinode-405494-m02) Calling .DriverName
	I0116 03:09:20.888131  491150 main.go:141] libmachine: (multinode-405494-m02) Calling .DriverName
	I0116 03:09:20.888281  491150 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0116 03:09:20.888333  491150 main.go:141] libmachine: (multinode-405494-m02) Calling .GetSSHHostname
	W0116 03:09:20.888344  491150 proxy.go:119] fail to check proxy env: Error ip not in block
	I0116 03:09:20.888431  491150 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0116 03:09:20.888458  491150 main.go:141] libmachine: (multinode-405494-m02) Calling .GetSSHHostname
	I0116 03:09:20.891012  491150 main.go:141] libmachine: (multinode-405494-m02) DBG | domain multinode-405494-m02 has defined MAC address 52:54:00:3c:08:8b in network mk-multinode-405494
	I0116 03:09:20.891128  491150 main.go:141] libmachine: (multinode-405494-m02) DBG | domain multinode-405494-m02 has defined MAC address 52:54:00:3c:08:8b in network mk-multinode-405494
	I0116 03:09:20.891424  491150 main.go:141] libmachine: (multinode-405494-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:08:8b", ip: ""} in network mk-multinode-405494: {Iface:virbr1 ExpiryTime:2024-01-16 03:57:47 +0000 UTC Type:0 Mac:52:54:00:3c:08:8b Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:multinode-405494-m02 Clientid:01:52:54:00:3c:08:8b}
	I0116 03:09:20.891455  491150 main.go:141] libmachine: (multinode-405494-m02) DBG | domain multinode-405494-m02 has defined IP address 192.168.39.32 and MAC address 52:54:00:3c:08:8b in network mk-multinode-405494
	I0116 03:09:20.891698  491150 main.go:141] libmachine: (multinode-405494-m02) Calling .GetSSHPort
	I0116 03:09:20.891701  491150 main.go:141] libmachine: (multinode-405494-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:08:8b", ip: ""} in network mk-multinode-405494: {Iface:virbr1 ExpiryTime:2024-01-16 03:57:47 +0000 UTC Type:0 Mac:52:54:00:3c:08:8b Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:multinode-405494-m02 Clientid:01:52:54:00:3c:08:8b}
	I0116 03:09:20.891731  491150 main.go:141] libmachine: (multinode-405494-m02) DBG | domain multinode-405494-m02 has defined IP address 192.168.39.32 and MAC address 52:54:00:3c:08:8b in network mk-multinode-405494
	I0116 03:09:20.891850  491150 main.go:141] libmachine: (multinode-405494-m02) Calling .GetSSHPort
	I0116 03:09:20.891944  491150 main.go:141] libmachine: (multinode-405494-m02) Calling .GetSSHKeyPath
	I0116 03:09:20.892016  491150 main.go:141] libmachine: (multinode-405494-m02) Calling .GetSSHKeyPath
	I0116 03:09:20.892112  491150 main.go:141] libmachine: (multinode-405494-m02) Calling .GetSSHUsername
	I0116 03:09:20.892197  491150 main.go:141] libmachine: (multinode-405494-m02) Calling .GetSSHUsername
	I0116 03:09:20.892242  491150 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/multinode-405494-m02/id_rsa Username:docker}
	I0116 03:09:20.892321  491150 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/multinode-405494-m02/id_rsa Username:docker}
	I0116 03:09:21.133206  491150 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0116 03:09:21.133236  491150 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0116 03:09:21.139893  491150 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0116 03:09:21.139940  491150 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0116 03:09:21.139997  491150 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0116 03:09:21.149182  491150 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0116 03:09:21.149207  491150 start.go:475] detecting cgroup driver to use...
	I0116 03:09:21.149280  491150 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0116 03:09:21.164499  491150 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0116 03:09:21.178089  491150 docker.go:217] disabling cri-docker service (if available) ...
	I0116 03:09:21.178157  491150 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0116 03:09:21.192368  491150 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0116 03:09:21.206121  491150 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0116 03:09:21.343998  491150 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0116 03:09:21.475094  491150 docker.go:233] disabling docker service ...
	I0116 03:09:21.475187  491150 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0116 03:09:21.490482  491150 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0116 03:09:21.504428  491150 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0116 03:09:21.635709  491150 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0116 03:09:21.776588  491150 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0116 03:09:21.791228  491150 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0116 03:09:21.812218  491150 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0116 03:09:21.812261  491150 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0116 03:09:21.812316  491150 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 03:09:21.825766  491150 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0116 03:09:21.825843  491150 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 03:09:21.838174  491150 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 03:09:21.850198  491150 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 03:09:21.862311  491150 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0116 03:09:21.874133  491150 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0116 03:09:21.884164  491150 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0116 03:09:21.884278  491150 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0116 03:09:21.894427  491150 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0116 03:09:22.046711  491150 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0116 03:09:22.306616  491150 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0116 03:09:22.306708  491150 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0116 03:09:22.313261  491150 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0116 03:09:22.313295  491150 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0116 03:09:22.313307  491150 command_runner.go:130] > Device: 16h/22d	Inode: 1215        Links: 1
	I0116 03:09:22.313318  491150 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0116 03:09:22.313329  491150 command_runner.go:130] > Access: 2024-01-16 03:09:22.220732472 +0000
	I0116 03:09:22.313339  491150 command_runner.go:130] > Modify: 2024-01-16 03:09:22.220732472 +0000
	I0116 03:09:22.313355  491150 command_runner.go:130] > Change: 2024-01-16 03:09:22.220732472 +0000
	I0116 03:09:22.313362  491150 command_runner.go:130] >  Birth: -
	I0116 03:09:22.313388  491150 start.go:543] Will wait 60s for crictl version
	I0116 03:09:22.313467  491150 ssh_runner.go:195] Run: which crictl
	I0116 03:09:22.317931  491150 command_runner.go:130] > /usr/bin/crictl
	I0116 03:09:22.318022  491150 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0116 03:09:22.361812  491150 command_runner.go:130] > Version:  0.1.0
	I0116 03:09:22.361839  491150 command_runner.go:130] > RuntimeName:  cri-o
	I0116 03:09:22.361843  491150 command_runner.go:130] > RuntimeVersion:  1.24.1
	I0116 03:09:22.361849  491150 command_runner.go:130] > RuntimeApiVersion:  v1
	I0116 03:09:22.363251  491150 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0116 03:09:22.363360  491150 ssh_runner.go:195] Run: crio --version
	I0116 03:09:22.409948  491150 command_runner.go:130] > crio version 1.24.1
	I0116 03:09:22.409977  491150 command_runner.go:130] > Version:          1.24.1
	I0116 03:09:22.409986  491150 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0116 03:09:22.409992  491150 command_runner.go:130] > GitTreeState:     dirty
	I0116 03:09:22.410001  491150 command_runner.go:130] > BuildDate:        2023-12-28T22:46:29Z
	I0116 03:09:22.410009  491150 command_runner.go:130] > GoVersion:        go1.19.9
	I0116 03:09:22.410015  491150 command_runner.go:130] > Compiler:         gc
	I0116 03:09:22.410022  491150 command_runner.go:130] > Platform:         linux/amd64
	I0116 03:09:22.410030  491150 command_runner.go:130] > Linkmode:         dynamic
	I0116 03:09:22.410042  491150 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0116 03:09:22.410053  491150 command_runner.go:130] > SeccompEnabled:   true
	I0116 03:09:22.410071  491150 command_runner.go:130] > AppArmorEnabled:  false
	I0116 03:09:22.410249  491150 ssh_runner.go:195] Run: crio --version
	I0116 03:09:22.463967  491150 command_runner.go:130] > crio version 1.24.1
	I0116 03:09:22.463991  491150 command_runner.go:130] > Version:          1.24.1
	I0116 03:09:22.464003  491150 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0116 03:09:22.464011  491150 command_runner.go:130] > GitTreeState:     dirty
	I0116 03:09:22.464021  491150 command_runner.go:130] > BuildDate:        2023-12-28T22:46:29Z
	I0116 03:09:22.464029  491150 command_runner.go:130] > GoVersion:        go1.19.9
	I0116 03:09:22.464051  491150 command_runner.go:130] > Compiler:         gc
	I0116 03:09:22.464060  491150 command_runner.go:130] > Platform:         linux/amd64
	I0116 03:09:22.464069  491150 command_runner.go:130] > Linkmode:         dynamic
	I0116 03:09:22.464085  491150 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0116 03:09:22.464093  491150 command_runner.go:130] > SeccompEnabled:   true
	I0116 03:09:22.464105  491150 command_runner.go:130] > AppArmorEnabled:  false
	I0116 03:09:22.466335  491150 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0116 03:09:22.467971  491150 out.go:177]   - env NO_PROXY=192.168.39.70
	I0116 03:09:22.470005  491150 main.go:141] libmachine: (multinode-405494-m02) Calling .GetIP
	I0116 03:09:22.473303  491150 main.go:141] libmachine: (multinode-405494-m02) DBG | domain multinode-405494-m02 has defined MAC address 52:54:00:3c:08:8b in network mk-multinode-405494
	I0116 03:09:22.473715  491150 main.go:141] libmachine: (multinode-405494-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:08:8b", ip: ""} in network mk-multinode-405494: {Iface:virbr1 ExpiryTime:2024-01-16 03:57:47 +0000 UTC Type:0 Mac:52:54:00:3c:08:8b Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:multinode-405494-m02 Clientid:01:52:54:00:3c:08:8b}
	I0116 03:09:22.473760  491150 main.go:141] libmachine: (multinode-405494-m02) DBG | domain multinode-405494-m02 has defined IP address 192.168.39.32 and MAC address 52:54:00:3c:08:8b in network mk-multinode-405494
	I0116 03:09:22.473990  491150 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0116 03:09:22.478228  491150 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0116 03:09:22.478488  491150 certs.go:56] Setting up /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/multinode-405494 for IP: 192.168.39.32
	I0116 03:09:22.478523  491150 certs.go:190] acquiring lock for shared ca certs: {Name:mkab5aeea3e0ed69f3592dc8f418e07c6075b130 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:09:22.478703  491150 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17965-468241/.minikube/ca.key
	I0116 03:09:22.478741  491150 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17965-468241/.minikube/proxy-client-ca.key
	I0116 03:09:22.478752  491150 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-468241/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0116 03:09:22.478763  491150 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-468241/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0116 03:09:22.478772  491150 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-468241/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0116 03:09:22.478782  491150 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-468241/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0116 03:09:22.478839  491150 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/475478.pem (1338 bytes)
	W0116 03:09:22.478868  491150 certs.go:433] ignoring /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/475478_empty.pem, impossibly tiny 0 bytes
	I0116 03:09:22.478878  491150 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca-key.pem (1679 bytes)
	I0116 03:09:22.478899  491150 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem (1078 bytes)
	I0116 03:09:22.478923  491150 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/cert.pem (1123 bytes)
	I0116 03:09:22.478944  491150 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/key.pem (1679 bytes)
	I0116 03:09:22.478986  491150 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17965-468241/.minikube/files/etc/ssl/certs/4754782.pem (1708 bytes)
	I0116 03:09:22.479013  491150 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-468241/.minikube/files/etc/ssl/certs/4754782.pem -> /usr/share/ca-certificates/4754782.pem
	I0116 03:09:22.479026  491150 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-468241/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0116 03:09:22.479037  491150 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/475478.pem -> /usr/share/ca-certificates/475478.pem
	I0116 03:09:22.479444  491150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0116 03:09:22.504697  491150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0116 03:09:22.529283  491150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0116 03:09:22.552487  491150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0116 03:09:22.578152  491150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/files/etc/ssl/certs/4754782.pem --> /usr/share/ca-certificates/4754782.pem (1708 bytes)
	I0116 03:09:22.601821  491150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0116 03:09:22.626910  491150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/certs/475478.pem --> /usr/share/ca-certificates/475478.pem (1338 bytes)
	I0116 03:09:22.651116  491150 ssh_runner.go:195] Run: openssl version
	I0116 03:09:22.657008  491150 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0116 03:09:22.657227  491150 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/475478.pem && ln -fs /usr/share/ca-certificates/475478.pem /etc/ssl/certs/475478.pem"
	I0116 03:09:22.670340  491150 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/475478.pem
	I0116 03:09:22.675630  491150 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jan 16 02:44 /usr/share/ca-certificates/475478.pem
	I0116 03:09:22.675668  491150 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 16 02:44 /usr/share/ca-certificates/475478.pem
	I0116 03:09:22.675727  491150 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/475478.pem
	I0116 03:09:22.681334  491150 command_runner.go:130] > 51391683
	I0116 03:09:22.681655  491150 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/475478.pem /etc/ssl/certs/51391683.0"
	I0116 03:09:22.691653  491150 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4754782.pem && ln -fs /usr/share/ca-certificates/4754782.pem /etc/ssl/certs/4754782.pem"
	I0116 03:09:22.703452  491150 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4754782.pem
	I0116 03:09:22.708319  491150 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jan 16 02:44 /usr/share/ca-certificates/4754782.pem
	I0116 03:09:22.708502  491150 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 16 02:44 /usr/share/ca-certificates/4754782.pem
	I0116 03:09:22.708574  491150 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4754782.pem
	I0116 03:09:22.714268  491150 command_runner.go:130] > 3ec20f2e
	I0116 03:09:22.714348  491150 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4754782.pem /etc/ssl/certs/3ec20f2e.0"
	I0116 03:09:22.725442  491150 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0116 03:09:22.743629  491150 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0116 03:09:22.748911  491150 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jan 16 02:35 /usr/share/ca-certificates/minikubeCA.pem
	I0116 03:09:22.749158  491150 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 16 02:35 /usr/share/ca-certificates/minikubeCA.pem
	I0116 03:09:22.749217  491150 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0116 03:09:22.754659  491150 command_runner.go:130] > b5213941
	I0116 03:09:22.754977  491150 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0116 03:09:22.765439  491150 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0116 03:09:22.770120  491150 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0116 03:09:22.770164  491150 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0116 03:09:22.770270  491150 ssh_runner.go:195] Run: crio config
	I0116 03:09:22.823550  491150 command_runner.go:130] ! time="2024-01-16 03:09:22.817800894Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I0116 03:09:22.823580  491150 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0116 03:09:22.831575  491150 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0116 03:09:22.831622  491150 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0116 03:09:22.831634  491150 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0116 03:09:22.831640  491150 command_runner.go:130] > #
	I0116 03:09:22.831650  491150 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0116 03:09:22.831666  491150 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0116 03:09:22.831676  491150 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0116 03:09:22.831689  491150 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0116 03:09:22.831697  491150 command_runner.go:130] > # reload'.
	I0116 03:09:22.831707  491150 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0116 03:09:22.831721  491150 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0116 03:09:22.831735  491150 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0116 03:09:22.831747  491150 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0116 03:09:22.831753  491150 command_runner.go:130] > [crio]
	I0116 03:09:22.831766  491150 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0116 03:09:22.831775  491150 command_runner.go:130] > # containers images, in this directory.
	I0116 03:09:22.831785  491150 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0116 03:09:22.831802  491150 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0116 03:09:22.831814  491150 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0116 03:09:22.831827  491150 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0116 03:09:22.831841  491150 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0116 03:09:22.831853  491150 command_runner.go:130] > storage_driver = "overlay"
	I0116 03:09:22.831866  491150 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0116 03:09:22.831875  491150 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0116 03:09:22.831885  491150 command_runner.go:130] > storage_option = [
	I0116 03:09:22.831894  491150 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0116 03:09:22.831903  491150 command_runner.go:130] > ]
	I0116 03:09:22.831912  491150 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0116 03:09:22.831921  491150 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0116 03:09:22.831928  491150 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0116 03:09:22.831934  491150 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0116 03:09:22.831943  491150 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0116 03:09:22.831948  491150 command_runner.go:130] > # always happen on a node reboot
	I0116 03:09:22.831955  491150 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0116 03:09:22.831961  491150 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0116 03:09:22.831967  491150 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0116 03:09:22.831979  491150 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0116 03:09:22.831986  491150 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0116 03:09:22.831994  491150 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0116 03:09:22.832006  491150 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0116 03:09:22.832013  491150 command_runner.go:130] > # internal_wipe = true
	I0116 03:09:22.832020  491150 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0116 03:09:22.832028  491150 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0116 03:09:22.832049  491150 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0116 03:09:22.832057  491150 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0116 03:09:22.832067  491150 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0116 03:09:22.832077  491150 command_runner.go:130] > [crio.api]
	I0116 03:09:22.832087  491150 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0116 03:09:22.832101  491150 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0116 03:09:22.832107  491150 command_runner.go:130] > # IP address on which the stream server will listen.
	I0116 03:09:22.832114  491150 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0116 03:09:22.832121  491150 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0116 03:09:22.832128  491150 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0116 03:09:22.832132  491150 command_runner.go:130] > # stream_port = "0"
	I0116 03:09:22.832140  491150 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0116 03:09:22.832144  491150 command_runner.go:130] > # stream_enable_tls = false
	I0116 03:09:22.832150  491150 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0116 03:09:22.832155  491150 command_runner.go:130] > # stream_idle_timeout = ""
	I0116 03:09:22.832161  491150 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0116 03:09:22.832177  491150 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0116 03:09:22.832184  491150 command_runner.go:130] > # minutes.
	I0116 03:09:22.832188  491150 command_runner.go:130] > # stream_tls_cert = ""
	I0116 03:09:22.832195  491150 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0116 03:09:22.832202  491150 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0116 03:09:22.832208  491150 command_runner.go:130] > # stream_tls_key = ""
	I0116 03:09:22.832215  491150 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0116 03:09:22.832224  491150 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0116 03:09:22.832230  491150 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0116 03:09:22.832236  491150 command_runner.go:130] > # stream_tls_ca = ""
	I0116 03:09:22.832243  491150 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0116 03:09:22.832250  491150 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0116 03:09:22.832257  491150 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0116 03:09:22.832264  491150 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0116 03:09:22.832280  491150 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0116 03:09:22.832293  491150 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0116 03:09:22.832298  491150 command_runner.go:130] > [crio.runtime]
	I0116 03:09:22.832303  491150 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0116 03:09:22.832309  491150 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0116 03:09:22.832314  491150 command_runner.go:130] > # "nofile=1024:2048"
	I0116 03:09:22.832320  491150 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0116 03:09:22.832325  491150 command_runner.go:130] > # default_ulimits = [
	I0116 03:09:22.832329  491150 command_runner.go:130] > # ]
	I0116 03:09:22.832338  491150 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0116 03:09:22.832342  491150 command_runner.go:130] > # no_pivot = false
	I0116 03:09:22.832351  491150 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0116 03:09:22.832357  491150 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0116 03:09:22.832364  491150 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0116 03:09:22.832370  491150 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0116 03:09:22.832378  491150 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0116 03:09:22.832384  491150 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0116 03:09:22.832392  491150 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0116 03:09:22.832396  491150 command_runner.go:130] > # Cgroup setting for conmon
	I0116 03:09:22.832403  491150 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0116 03:09:22.832409  491150 command_runner.go:130] > conmon_cgroup = "pod"
	I0116 03:09:22.832415  491150 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0116 03:09:22.832434  491150 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0116 03:09:22.832442  491150 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0116 03:09:22.832447  491150 command_runner.go:130] > conmon_env = [
	I0116 03:09:22.832453  491150 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0116 03:09:22.832459  491150 command_runner.go:130] > ]
	I0116 03:09:22.832464  491150 command_runner.go:130] > # Additional environment variables to set for all the
	I0116 03:09:22.832472  491150 command_runner.go:130] > # containers. These are overridden if set in the
	I0116 03:09:22.832478  491150 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0116 03:09:22.832487  491150 command_runner.go:130] > # default_env = [
	I0116 03:09:22.832492  491150 command_runner.go:130] > # ]
	I0116 03:09:22.832523  491150 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0116 03:09:22.832534  491150 command_runner.go:130] > # selinux = false
	I0116 03:09:22.832545  491150 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0116 03:09:22.832558  491150 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0116 03:09:22.832570  491150 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0116 03:09:22.832580  491150 command_runner.go:130] > # seccomp_profile = ""
	I0116 03:09:22.832588  491150 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0116 03:09:22.832597  491150 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0116 03:09:22.832612  491150 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0116 03:09:22.832623  491150 command_runner.go:130] > # which might increase security.
	I0116 03:09:22.832630  491150 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0116 03:09:22.832640  491150 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0116 03:09:22.832649  491150 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0116 03:09:22.832655  491150 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0116 03:09:22.832664  491150 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0116 03:09:22.832669  491150 command_runner.go:130] > # This option supports live configuration reload.
	I0116 03:09:22.832676  491150 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0116 03:09:22.832682  491150 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0116 03:09:22.832689  491150 command_runner.go:130] > # the cgroup blockio controller.
	I0116 03:09:22.832694  491150 command_runner.go:130] > # blockio_config_file = ""
	I0116 03:09:22.832703  491150 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0116 03:09:22.832708  491150 command_runner.go:130] > # irqbalance daemon.
	I0116 03:09:22.832715  491150 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0116 03:09:22.832722  491150 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0116 03:09:22.832730  491150 command_runner.go:130] > # This option supports live configuration reload.
	I0116 03:09:22.832734  491150 command_runner.go:130] > # rdt_config_file = ""
	I0116 03:09:22.832747  491150 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0116 03:09:22.832752  491150 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0116 03:09:22.832759  491150 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0116 03:09:22.832765  491150 command_runner.go:130] > # separate_pull_cgroup = ""
	I0116 03:09:22.832772  491150 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0116 03:09:22.832780  491150 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0116 03:09:22.832785  491150 command_runner.go:130] > # will be added.
	I0116 03:09:22.832791  491150 command_runner.go:130] > # default_capabilities = [
	I0116 03:09:22.832795  491150 command_runner.go:130] > # 	"CHOWN",
	I0116 03:09:22.832802  491150 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0116 03:09:22.832806  491150 command_runner.go:130] > # 	"FSETID",
	I0116 03:09:22.832811  491150 command_runner.go:130] > # 	"FOWNER",
	I0116 03:09:22.832815  491150 command_runner.go:130] > # 	"SETGID",
	I0116 03:09:22.832821  491150 command_runner.go:130] > # 	"SETUID",
	I0116 03:09:22.832825  491150 command_runner.go:130] > # 	"SETPCAP",
	I0116 03:09:22.832830  491150 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0116 03:09:22.832836  491150 command_runner.go:130] > # 	"KILL",
	I0116 03:09:22.832839  491150 command_runner.go:130] > # ]
	I0116 03:09:22.832848  491150 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0116 03:09:22.832857  491150 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0116 03:09:22.832861  491150 command_runner.go:130] > # default_sysctls = [
	I0116 03:09:22.832867  491150 command_runner.go:130] > # ]
	I0116 03:09:22.832872  491150 command_runner.go:130] > # List of devices on the host that a
	I0116 03:09:22.832880  491150 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0116 03:09:22.832884  491150 command_runner.go:130] > # allowed_devices = [
	I0116 03:09:22.832889  491150 command_runner.go:130] > # 	"/dev/fuse",
	I0116 03:09:22.832893  491150 command_runner.go:130] > # ]
	I0116 03:09:22.832900  491150 command_runner.go:130] > # List of additional devices. specified as
	I0116 03:09:22.832907  491150 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0116 03:09:22.832916  491150 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0116 03:09:22.832948  491150 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0116 03:09:22.832956  491150 command_runner.go:130] > # additional_devices = [
	I0116 03:09:22.832959  491150 command_runner.go:130] > # ]
	I0116 03:09:22.832964  491150 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0116 03:09:22.832968  491150 command_runner.go:130] > # cdi_spec_dirs = [
	I0116 03:09:22.832972  491150 command_runner.go:130] > # 	"/etc/cdi",
	I0116 03:09:22.832976  491150 command_runner.go:130] > # 	"/var/run/cdi",
	I0116 03:09:22.832979  491150 command_runner.go:130] > # ]
	I0116 03:09:22.832986  491150 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0116 03:09:22.832995  491150 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0116 03:09:22.833000  491150 command_runner.go:130] > # Defaults to false.
	I0116 03:09:22.833006  491150 command_runner.go:130] > # device_ownership_from_security_context = false
	I0116 03:09:22.833014  491150 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0116 03:09:22.833020  491150 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0116 03:09:22.833027  491150 command_runner.go:130] > # hooks_dir = [
	I0116 03:09:22.833031  491150 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0116 03:09:22.833037  491150 command_runner.go:130] > # ]
	I0116 03:09:22.833043  491150 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0116 03:09:22.833052  491150 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0116 03:09:22.833057  491150 command_runner.go:130] > # its default mounts from the following two files:
	I0116 03:09:22.833063  491150 command_runner.go:130] > #
	I0116 03:09:22.833069  491150 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0116 03:09:22.833079  491150 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0116 03:09:22.833087  491150 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0116 03:09:22.833091  491150 command_runner.go:130] > #
	I0116 03:09:22.833098  491150 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0116 03:09:22.833113  491150 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0116 03:09:22.833131  491150 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0116 03:09:22.833139  491150 command_runner.go:130] > #      only add mounts it finds in this file.
	I0116 03:09:22.833143  491150 command_runner.go:130] > #
	I0116 03:09:22.833149  491150 command_runner.go:130] > # default_mounts_file = ""
	I0116 03:09:22.833154  491150 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0116 03:09:22.833163  491150 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0116 03:09:22.833167  491150 command_runner.go:130] > pids_limit = 1024
	I0116 03:09:22.833176  491150 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0116 03:09:22.833182  491150 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0116 03:09:22.833188  491150 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0116 03:09:22.833198  491150 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0116 03:09:22.833203  491150 command_runner.go:130] > # log_size_max = -1
	I0116 03:09:22.833209  491150 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0116 03:09:22.833216  491150 command_runner.go:130] > # log_to_journald = false
	I0116 03:09:22.833222  491150 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0116 03:09:22.833229  491150 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0116 03:09:22.833234  491150 command_runner.go:130] > # Path to directory for container attach sockets.
	I0116 03:09:22.833242  491150 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0116 03:09:22.833247  491150 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0116 03:09:22.833252  491150 command_runner.go:130] > # bind_mount_prefix = ""
	I0116 03:09:22.833258  491150 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0116 03:09:22.833264  491150 command_runner.go:130] > # read_only = false
	I0116 03:09:22.833270  491150 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0116 03:09:22.833278  491150 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0116 03:09:22.833283  491150 command_runner.go:130] > # live configuration reload.
	I0116 03:09:22.833289  491150 command_runner.go:130] > # log_level = "info"
	I0116 03:09:22.833299  491150 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0116 03:09:22.833306  491150 command_runner.go:130] > # This option supports live configuration reload.
	I0116 03:09:22.833311  491150 command_runner.go:130] > # log_filter = ""
	I0116 03:09:22.833319  491150 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0116 03:09:22.833325  491150 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0116 03:09:22.833329  491150 command_runner.go:130] > # separated by comma.
	I0116 03:09:22.833336  491150 command_runner.go:130] > # uid_mappings = ""
	I0116 03:09:22.833345  491150 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0116 03:09:22.833353  491150 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0116 03:09:22.833358  491150 command_runner.go:130] > # separated by comma.
	I0116 03:09:22.833364  491150 command_runner.go:130] > # gid_mappings = ""
	I0116 03:09:22.833372  491150 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0116 03:09:22.833380  491150 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0116 03:09:22.833387  491150 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0116 03:09:22.833394  491150 command_runner.go:130] > # minimum_mappable_uid = -1
	I0116 03:09:22.833400  491150 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0116 03:09:22.833409  491150 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0116 03:09:22.833415  491150 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0116 03:09:22.833422  491150 command_runner.go:130] > # minimum_mappable_gid = -1
	I0116 03:09:22.833428  491150 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0116 03:09:22.833436  491150 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0116 03:09:22.833442  491150 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0116 03:09:22.833448  491150 command_runner.go:130] > # ctr_stop_timeout = 30
	I0116 03:09:22.833454  491150 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0116 03:09:22.833461  491150 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0116 03:09:22.833470  491150 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0116 03:09:22.833477  491150 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0116 03:09:22.833485  491150 command_runner.go:130] > drop_infra_ctr = false
	I0116 03:09:22.833498  491150 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0116 03:09:22.833512  491150 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0116 03:09:22.833526  491150 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0116 03:09:22.833539  491150 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0116 03:09:22.833552  491150 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0116 03:09:22.833562  491150 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0116 03:09:22.833570  491150 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0116 03:09:22.833583  491150 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0116 03:09:22.833593  491150 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0116 03:09:22.833601  491150 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0116 03:09:22.833610  491150 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0116 03:09:22.833617  491150 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0116 03:09:22.833623  491150 command_runner.go:130] > # default_runtime = "runc"
	I0116 03:09:22.833629  491150 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0116 03:09:22.833638  491150 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0116 03:09:22.833655  491150 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0116 03:09:22.833663  491150 command_runner.go:130] > # creation as a file is not desired either.
	I0116 03:09:22.833674  491150 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0116 03:09:22.833682  491150 command_runner.go:130] > # the hostname is being managed dynamically.
	I0116 03:09:22.833686  491150 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0116 03:09:22.833690  491150 command_runner.go:130] > # ]
	I0116 03:09:22.833698  491150 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0116 03:09:22.833705  491150 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0116 03:09:22.833714  491150 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0116 03:09:22.833720  491150 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0116 03:09:22.833726  491150 command_runner.go:130] > #
	I0116 03:09:22.833730  491150 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0116 03:09:22.833737  491150 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0116 03:09:22.833742  491150 command_runner.go:130] > #  runtime_type = "oci"
	I0116 03:09:22.833749  491150 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0116 03:09:22.833754  491150 command_runner.go:130] > #  privileged_without_host_devices = false
	I0116 03:09:22.833760  491150 command_runner.go:130] > #  allowed_annotations = []
	I0116 03:09:22.833764  491150 command_runner.go:130] > # Where:
	I0116 03:09:22.833774  491150 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0116 03:09:22.833781  491150 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0116 03:09:22.833789  491150 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0116 03:09:22.833796  491150 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0116 03:09:22.833802  491150 command_runner.go:130] > #   in $PATH.
	I0116 03:09:22.833808  491150 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0116 03:09:22.833815  491150 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0116 03:09:22.833821  491150 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0116 03:09:22.833827  491150 command_runner.go:130] > #   state.
	I0116 03:09:22.833833  491150 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0116 03:09:22.833841  491150 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0116 03:09:22.833847  491150 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0116 03:09:22.833855  491150 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0116 03:09:22.833861  491150 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0116 03:09:22.833869  491150 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0116 03:09:22.833874  491150 command_runner.go:130] > #   The currently recognized values are:
	I0116 03:09:22.833883  491150 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0116 03:09:22.833890  491150 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0116 03:09:22.833899  491150 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0116 03:09:22.833905  491150 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0116 03:09:22.833915  491150 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0116 03:09:22.833924  491150 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0116 03:09:22.833930  491150 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0116 03:09:22.833938  491150 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0116 03:09:22.833944  491150 command_runner.go:130] > #   should be moved to the container's cgroup
	I0116 03:09:22.833950  491150 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0116 03:09:22.833955  491150 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0116 03:09:22.833962  491150 command_runner.go:130] > runtime_type = "oci"
	I0116 03:09:22.833966  491150 command_runner.go:130] > runtime_root = "/run/runc"
	I0116 03:09:22.833970  491150 command_runner.go:130] > runtime_config_path = ""
	I0116 03:09:22.833974  491150 command_runner.go:130] > monitor_path = ""
	I0116 03:09:22.833980  491150 command_runner.go:130] > monitor_cgroup = ""
	I0116 03:09:22.833984  491150 command_runner.go:130] > monitor_exec_cgroup = ""
	I0116 03:09:22.833993  491150 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0116 03:09:22.833998  491150 command_runner.go:130] > # running containers
	I0116 03:09:22.834005  491150 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0116 03:09:22.834011  491150 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0116 03:09:22.834051  491150 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0116 03:09:22.834059  491150 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I0116 03:09:22.834065  491150 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0116 03:09:22.834070  491150 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0116 03:09:22.834075  491150 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0116 03:09:22.834082  491150 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0116 03:09:22.834087  491150 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0116 03:09:22.834093  491150 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I0116 03:09:22.834100  491150 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0116 03:09:22.834107  491150 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0116 03:09:22.834114  491150 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0116 03:09:22.834123  491150 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0116 03:09:22.834131  491150 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0116 03:09:22.834139  491150 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0116 03:09:22.834148  491150 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0116 03:09:22.834160  491150 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0116 03:09:22.834166  491150 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0116 03:09:22.834176  491150 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0116 03:09:22.834182  491150 command_runner.go:130] > # Example:
	I0116 03:09:22.834187  491150 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0116 03:09:22.834193  491150 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0116 03:09:22.834198  491150 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0116 03:09:22.834206  491150 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0116 03:09:22.834210  491150 command_runner.go:130] > # cpuset = 0
	I0116 03:09:22.834214  491150 command_runner.go:130] > # cpushares = "0-1"
	I0116 03:09:22.834218  491150 command_runner.go:130] > # Where:
	I0116 03:09:22.834225  491150 command_runner.go:130] > # The workload name is workload-type.
	I0116 03:09:22.834233  491150 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0116 03:09:22.834241  491150 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0116 03:09:22.834246  491150 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0116 03:09:22.834256  491150 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0116 03:09:22.834263  491150 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0116 03:09:22.834268  491150 command_runner.go:130] > # 
	I0116 03:09:22.834275  491150 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0116 03:09:22.834280  491150 command_runner.go:130] > #
	I0116 03:09:22.834286  491150 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0116 03:09:22.834293  491150 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0116 03:09:22.834299  491150 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0116 03:09:22.834307  491150 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0116 03:09:22.834313  491150 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0116 03:09:22.834319  491150 command_runner.go:130] > [crio.image]
	I0116 03:09:22.834325  491150 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0116 03:09:22.834331  491150 command_runner.go:130] > # default_transport = "docker://"
	I0116 03:09:22.834338  491150 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0116 03:09:22.834346  491150 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0116 03:09:22.834351  491150 command_runner.go:130] > # global_auth_file = ""
	I0116 03:09:22.834356  491150 command_runner.go:130] > # The image used to instantiate infra containers.
	I0116 03:09:22.834361  491150 command_runner.go:130] > # This option supports live configuration reload.
	I0116 03:09:22.834368  491150 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0116 03:09:22.834375  491150 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0116 03:09:22.834383  491150 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0116 03:09:22.834388  491150 command_runner.go:130] > # This option supports live configuration reload.
	I0116 03:09:22.834394  491150 command_runner.go:130] > # pause_image_auth_file = ""
	I0116 03:09:22.834401  491150 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0116 03:09:22.834410  491150 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0116 03:09:22.834416  491150 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0116 03:09:22.834424  491150 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0116 03:09:22.834429  491150 command_runner.go:130] > # pause_command = "/pause"
	I0116 03:09:22.834437  491150 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0116 03:09:22.834445  491150 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0116 03:09:22.834454  491150 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0116 03:09:22.834460  491150 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0116 03:09:22.834467  491150 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0116 03:09:22.834472  491150 command_runner.go:130] > # signature_policy = ""
	I0116 03:09:22.834480  491150 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0116 03:09:22.834492  491150 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0116 03:09:22.834498  491150 command_runner.go:130] > # changing them here.
	I0116 03:09:22.834514  491150 command_runner.go:130] > # insecure_registries = [
	I0116 03:09:22.834522  491150 command_runner.go:130] > # ]
	I0116 03:09:22.834536  491150 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0116 03:09:22.834548  491150 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0116 03:09:22.834558  491150 command_runner.go:130] > # image_volumes = "mkdir"
	I0116 03:09:22.834578  491150 command_runner.go:130] > # Temporary directory to use for storing big files
	I0116 03:09:22.834589  491150 command_runner.go:130] > # big_files_temporary_dir = ""
	I0116 03:09:22.834599  491150 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0116 03:09:22.834609  491150 command_runner.go:130] > # CNI plugins.
	I0116 03:09:22.834618  491150 command_runner.go:130] > [crio.network]
	I0116 03:09:22.834625  491150 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0116 03:09:22.834633  491150 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0116 03:09:22.834637  491150 command_runner.go:130] > # cni_default_network = ""
	I0116 03:09:22.834646  491150 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0116 03:09:22.834651  491150 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0116 03:09:22.834659  491150 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0116 03:09:22.834663  491150 command_runner.go:130] > # plugin_dirs = [
	I0116 03:09:22.834670  491150 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0116 03:09:22.834673  491150 command_runner.go:130] > # ]
	I0116 03:09:22.834679  491150 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0116 03:09:22.834683  491150 command_runner.go:130] > [crio.metrics]
	I0116 03:09:22.834688  491150 command_runner.go:130] > # Globally enable or disable metrics support.
	I0116 03:09:22.834696  491150 command_runner.go:130] > enable_metrics = true
	I0116 03:09:22.834701  491150 command_runner.go:130] > # Specify enabled metrics collectors.
	I0116 03:09:22.834706  491150 command_runner.go:130] > # Per default all metrics are enabled.
	I0116 03:09:22.834712  491150 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0116 03:09:22.834721  491150 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0116 03:09:22.834727  491150 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0116 03:09:22.834734  491150 command_runner.go:130] > # metrics_collectors = [
	I0116 03:09:22.834738  491150 command_runner.go:130] > # 	"operations",
	I0116 03:09:22.834745  491150 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0116 03:09:22.834750  491150 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0116 03:09:22.834756  491150 command_runner.go:130] > # 	"operations_errors",
	I0116 03:09:22.834761  491150 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0116 03:09:22.834765  491150 command_runner.go:130] > # 	"image_pulls_by_name",
	I0116 03:09:22.834770  491150 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0116 03:09:22.834775  491150 command_runner.go:130] > # 	"image_pulls_failures",
	I0116 03:09:22.834779  491150 command_runner.go:130] > # 	"image_pulls_successes",
	I0116 03:09:22.834786  491150 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0116 03:09:22.834790  491150 command_runner.go:130] > # 	"image_layer_reuse",
	I0116 03:09:22.834794  491150 command_runner.go:130] > # 	"containers_oom_total",
	I0116 03:09:22.834801  491150 command_runner.go:130] > # 	"containers_oom",
	I0116 03:09:22.834805  491150 command_runner.go:130] > # 	"processes_defunct",
	I0116 03:09:22.834809  491150 command_runner.go:130] > # 	"operations_total",
	I0116 03:09:22.834814  491150 command_runner.go:130] > # 	"operations_latency_seconds",
	I0116 03:09:22.834819  491150 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0116 03:09:22.834823  491150 command_runner.go:130] > # 	"operations_errors_total",
	I0116 03:09:22.834830  491150 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0116 03:09:22.834835  491150 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0116 03:09:22.834841  491150 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0116 03:09:22.834846  491150 command_runner.go:130] > # 	"image_pulls_success_total",
	I0116 03:09:22.834852  491150 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0116 03:09:22.834857  491150 command_runner.go:130] > # 	"containers_oom_count_total",
	I0116 03:09:22.834862  491150 command_runner.go:130] > # ]
	I0116 03:09:22.834868  491150 command_runner.go:130] > # The port on which the metrics server will listen.
	I0116 03:09:22.834874  491150 command_runner.go:130] > # metrics_port = 9090
	I0116 03:09:22.834879  491150 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0116 03:09:22.834886  491150 command_runner.go:130] > # metrics_socket = ""
	I0116 03:09:22.834892  491150 command_runner.go:130] > # The certificate for the secure metrics server.
	I0116 03:09:22.834901  491150 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0116 03:09:22.834907  491150 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0116 03:09:22.834914  491150 command_runner.go:130] > # certificate on any modification event.
	I0116 03:09:22.834918  491150 command_runner.go:130] > # metrics_cert = ""
	I0116 03:09:22.834924  491150 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0116 03:09:22.834931  491150 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0116 03:09:22.834935  491150 command_runner.go:130] > # metrics_key = ""
	I0116 03:09:22.834943  491150 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0116 03:09:22.834947  491150 command_runner.go:130] > [crio.tracing]
	I0116 03:09:22.834955  491150 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0116 03:09:22.834959  491150 command_runner.go:130] > # enable_tracing = false
	I0116 03:09:22.834965  491150 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0116 03:09:22.834970  491150 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0116 03:09:22.834976  491150 command_runner.go:130] > # Number of samples to collect per million spans.
	I0116 03:09:22.834981  491150 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0116 03:09:22.834989  491150 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0116 03:09:22.834993  491150 command_runner.go:130] > [crio.stats]
	I0116 03:09:22.835001  491150 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0116 03:09:22.835007  491150 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0116 03:09:22.835014  491150 command_runner.go:130] > # stats_collection_period = 0
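	Everything between the "crio config" invocation above and this point is the runtime's effective TOML: the uncommented keys (storage_driver, cgroup_manager = "cgroupfs", pids_limit, pause_image, the runc runtime table) are the values minikube sets, while the commented lines show CRI-O's built-in defaults. A quick way to re-check just those effective values on the node would be roughly the following; this is a sketch and assumes the profile and node names used in this run:
	  minikube -p multinode-405494 ssh -n m02 -- sudo crio config | grep -E 'cgroup_manager|pids_limit|pause_image|storage_driver'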
	I0116 03:09:22.835094  491150 cni.go:84] Creating CNI manager for ""
	I0116 03:09:22.835105  491150 cni.go:136] 3 nodes found, recommending kindnet
	I0116 03:09:22.835116  491150 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0116 03:09:22.835137  491150 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.32 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-405494 NodeName:multinode-405494-m02 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.70"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.32 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0116 03:09:22.835258  491150 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.32
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-405494-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.32
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.70"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
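	The rendered kubeadm config above pins the CRI socket, the pod/service CIDRs, and the control-plane endpoint for the joining node, and its kubelet section deliberately zeroes the disk eviction thresholds so image garbage collection never evicts test pods. To compare it against kubeadm's stock defaults for the same version, something like the following could be run; this is only a sketch, reusing the binary path minikube provisions under /var/lib/minikube/binaries:
	  minikube -p multinode-405494 ssh -n m02 -- sudo /var/lib/minikube/binaries/v1.28.4/kubeadm config print init-defaults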
	
	I0116 03:09:22.835343  491150 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-405494-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.32
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-405494 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0116 03:09:22.835424  491150 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0116 03:09:22.846334  491150 command_runner.go:130] > kubeadm
	I0116 03:09:22.846364  491150 command_runner.go:130] > kubectl
	I0116 03:09:22.846371  491150 command_runner.go:130] > kubelet
	I0116 03:09:22.846405  491150 binaries.go:44] Found k8s binaries, skipping transfer
	I0116 03:09:22.846487  491150 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0116 03:09:22.856790  491150 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0116 03:09:22.874650  491150 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
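	With the 10-kubeadm.conf drop-in and the kubelet.service unit copied into place, systemd has to re-read its unit files before kubelet can be started with the flags shown above. A hedged manual check, again assuming this run's profile and node names:
	  minikube -p multinode-405494 ssh -n m02 -- "sudo systemctl daemon-reload && systemctl status kubelet --no-pager"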
	I0116 03:09:22.892395  491150 ssh_runner.go:195] Run: grep 192.168.39.70	control-plane.minikube.internal$ /etc/hosts
	I0116 03:09:22.896913  491150 command_runner.go:130] > 192.168.39.70	control-plane.minikube.internal
	I0116 03:09:22.896989  491150 host.go:66] Checking if "multinode-405494" exists ...
	I0116 03:09:22.897305  491150 config.go:182] Loaded profile config "multinode-405494": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 03:09:22.897443  491150 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:09:22.897493  491150 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:09:22.913053  491150 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33731
	I0116 03:09:22.913550  491150 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:09:22.914059  491150 main.go:141] libmachine: Using API Version  1
	I0116 03:09:22.914082  491150 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:09:22.914527  491150 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:09:22.914745  491150 main.go:141] libmachine: (multinode-405494) Calling .DriverName
	I0116 03:09:22.914903  491150 start.go:304] JoinCluster: &{Name:multinode-405494 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.28.4 ClusterName:multinode-405494 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.70 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.32 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.182 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false i
ngress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disab
leOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
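(For context, a minimal sketch of how a three-node profile like the one described in this JoinCluster config could be created from the CLI; the flag values below are simply read off the config dump above, and the exact invocation used by the test harness may differ.)

    # hypothetical reproduction of the profile described above (kvm2 driver, cri-o runtime, 3 nodes)
    minikube start -p multinode-405494 \
      --driver=kvm2 --container-runtime=crio \
      --nodes=3 --cpus=2 --memory=2200 \
      --kubernetes-version=v1.28.4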
	I0116 03:09:22.915066  491150 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0116 03:09:22.915091  491150 main.go:141] libmachine: (multinode-405494) Calling .GetSSHHostname
	I0116 03:09:22.917914  491150 main.go:141] libmachine: (multinode-405494) DBG | domain multinode-405494 has defined MAC address 52:54:00:b0:49:7b in network mk-multinode-405494
	I0116 03:09:22.918328  491150 main.go:141] libmachine: (multinode-405494) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:49:7b", ip: ""} in network mk-multinode-405494: {Iface:virbr1 ExpiryTime:2024-01-16 04:06:53 +0000 UTC Type:0 Mac:52:54:00:b0:49:7b Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:multinode-405494 Clientid:01:52:54:00:b0:49:7b}
	I0116 03:09:22.918357  491150 main.go:141] libmachine: (multinode-405494) DBG | domain multinode-405494 has defined IP address 192.168.39.70 and MAC address 52:54:00:b0:49:7b in network mk-multinode-405494
	I0116 03:09:22.918507  491150 main.go:141] libmachine: (multinode-405494) Calling .GetSSHPort
	I0116 03:09:22.918687  491150 main.go:141] libmachine: (multinode-405494) Calling .GetSSHKeyPath
	I0116 03:09:22.918828  491150 main.go:141] libmachine: (multinode-405494) Calling .GetSSHUsername
	I0116 03:09:22.918931  491150 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/multinode-405494/id_rsa Username:docker}
	I0116 03:09:23.118458  491150 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token xu3h4i.3dlx59wkvfhbuybj --discovery-token-ca-cert-hash sha256:ab75761e1119a1709a216b682cd7b1dcce0641115df5f236b54b16e4f66aa044 
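The join command printed above is generated on the primary control plane; a commented sketch of that step, mirroring the command logged at 03:09:22.915066:

    # run on the control-plane node; --ttl=0 issues a non-expiring bootstrap token and
    # --print-join-command emits the full "kubeadm join ..." line (token + discovery CA cert hash)
    sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" \
      kubeadm token create --print-join-command --ttl=0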
	I0116 03:09:23.118545  491150 start.go:317] removing existing worker node "m02" before attempting to rejoin cluster: &{Name:m02 IP:192.168.39.32 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0116 03:09:23.118596  491150 host.go:66] Checking if "multinode-405494" exists ...
	I0116 03:09:23.119134  491150 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:09:23.119206  491150 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:09:23.135130  491150 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42427
	I0116 03:09:23.135586  491150 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:09:23.136116  491150 main.go:141] libmachine: Using API Version  1
	I0116 03:09:23.136147  491150 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:09:23.136548  491150 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:09:23.136821  491150 main.go:141] libmachine: (multinode-405494) Calling .DriverName
	I0116 03:09:23.137041  491150 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl drain multinode-405494-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data
	I0116 03:09:23.137066  491150 main.go:141] libmachine: (multinode-405494) Calling .GetSSHHostname
	I0116 03:09:23.140182  491150 main.go:141] libmachine: (multinode-405494) DBG | domain multinode-405494 has defined MAC address 52:54:00:b0:49:7b in network mk-multinode-405494
	I0116 03:09:23.140718  491150 main.go:141] libmachine: (multinode-405494) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:49:7b", ip: ""} in network mk-multinode-405494: {Iface:virbr1 ExpiryTime:2024-01-16 04:06:53 +0000 UTC Type:0 Mac:52:54:00:b0:49:7b Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:multinode-405494 Clientid:01:52:54:00:b0:49:7b}
	I0116 03:09:23.140753  491150 main.go:141] libmachine: (multinode-405494) DBG | domain multinode-405494 has defined IP address 192.168.39.70 and MAC address 52:54:00:b0:49:7b in network mk-multinode-405494
	I0116 03:09:23.141066  491150 main.go:141] libmachine: (multinode-405494) Calling .GetSSHPort
	I0116 03:09:23.141285  491150 main.go:141] libmachine: (multinode-405494) Calling .GetSSHKeyPath
	I0116 03:09:23.141620  491150 main.go:141] libmachine: (multinode-405494) Calling .GetSSHUsername
	I0116 03:09:23.141813  491150 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/multinode-405494/id_rsa Username:docker}
	I0116 03:09:23.322752  491150 command_runner.go:130] ! Flag --delete-local-data has been deprecated, This option is deprecated and will be deleted. Use --delete-emptydir-data.
	I0116 03:09:23.389812  491150 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-ddd2h, kube-system/kube-proxy-m46rb
	I0116 03:09:26.411331  491150 command_runner.go:130] > node/multinode-405494-m02 cordoned
	I0116 03:09:26.411358  491150 command_runner.go:130] > pod "busybox-5bc68d56bd-pkhcp" has DeletionTimestamp older than 1 seconds, skipping
	I0116 03:09:26.411364  491150 command_runner.go:130] > node/multinode-405494-m02 drained
	I0116 03:09:26.411658  491150 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl drain multinode-405494-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data: (3.274575151s)
	I0116 03:09:26.411680  491150 node.go:108] successfully drained node "m02"
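Before the worker can rejoin, the stale m02 node is drained; a commented sketch of the kubectl invocation used above:

    # drain the old worker aggressively so the rejoin is not blocked by lingering pods
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.28.4/kubectl drain multinode-405494-m02 \
        --force --grace-period=1 --skip-wait-for-delete-timeout=1 \
        --disable-eviction --ignore-daemonsets --delete-emptydir-data
    # the log also passes --delete-local-data, which kubectl warns is deprecated
    # in favor of --delete-emptydir-data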
	I0116 03:09:26.412226  491150 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17965-468241/kubeconfig
	I0116 03:09:26.412567  491150 kapi.go:59] client config for multinode-405494: &rest.Config{Host:"https://192.168.39.70:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17965-468241/.minikube/profiles/multinode-405494/client.crt", KeyFile:"/home/jenkins/minikube-integration/17965-468241/.minikube/profiles/multinode-405494/client.key", CAFile:"/home/jenkins/minikube-integration/17965-468241/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), N
extProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c19be0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0116 03:09:26.413199  491150 request.go:1212] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I0116 03:09:26.413276  491150 round_trippers.go:463] DELETE https://192.168.39.70:8443/api/v1/nodes/multinode-405494-m02
	I0116 03:09:26.413289  491150 round_trippers.go:469] Request Headers:
	I0116 03:09:26.413301  491150 round_trippers.go:473]     Content-Type: application/json
	I0116 03:09:26.413314  491150 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:09:26.413322  491150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 03:09:26.425692  491150 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0116 03:09:26.425713  491150 round_trippers.go:577] Response Headers:
	I0116 03:09:26.425721  491150 round_trippers.go:580]     Audit-Id: b4ac6ece-2210-40e3-8476-91ccf78137aa
	I0116 03:09:26.425726  491150 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:09:26.425732  491150 round_trippers.go:580]     Content-Type: application/json
	I0116 03:09:26.425737  491150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 03:09:26.425742  491150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 03:09:26.425749  491150 round_trippers.go:580]     Content-Length: 171
	I0116 03:09:26.425756  491150 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:09:26 GMT
	I0116 03:09:26.426116  491150 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-405494-m02","kind":"nodes","uid":"db05f602-4c14-49d7-93c1-517732722bbd"}}
	I0116 03:09:26.426220  491150 node.go:124] successfully deleted node "m02"
	I0116 03:09:26.426239  491150 start.go:321] successfully removed existing worker node "m02" from cluster: &{Name:m02 IP:192.168.39.32 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
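The drained node object is then deleted through the API server. A rough equivalent of the DELETE shown above; the cert/key paths in the curl form are placeholders for the client config logged at 03:09:26.412567:

    # simplest form:
    kubectl --kubeconfig /var/lib/minikube/kubeconfig delete node multinode-405494-m02
    # or the raw REST call minikube performs:
    curl -X DELETE https://192.168.39.70:8443/api/v1/nodes/multinode-405494-m02 \
      --cacert ca.crt --cert client.crt --key client.key \
      -H 'Content-Type: application/json' \
      -d '{"kind":"DeleteOptions","apiVersion":"v1"}'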
	I0116 03:09:26.426267  491150 start.go:325] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.39.32 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0116 03:09:26.426305  491150 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token xu3h4i.3dlx59wkvfhbuybj --discovery-token-ca-cert-hash sha256:ab75761e1119a1709a216b682cd7b1dcce0641115df5f236b54b16e4f66aa044 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-405494-m02"
	I0116 03:09:26.477055  491150 command_runner.go:130] > [preflight] Running pre-flight checks
	I0116 03:09:26.630658  491150 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0116 03:09:26.630699  491150 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0116 03:09:26.702869  491150 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0116 03:09:26.703101  491150 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0116 03:09:26.703284  491150 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0116 03:09:26.843555  491150 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0116 03:09:27.369573  491150 command_runner.go:130] > This node has joined the cluster:
	I0116 03:09:27.369608  491150 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0116 03:09:27.369619  491150 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0116 03:09:27.369630  491150 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0116 03:09:27.372088  491150 command_runner.go:130] ! W0116 03:09:26.471423    2654 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I0116 03:09:27.372120  491150 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I0116 03:09:27.372133  491150 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I0116 03:09:27.372145  491150 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I0116 03:09:27.372314  491150 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
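With the stale node object removed, the worker rejoins and the kubelet is (re)started. A sketch of those two steps, with the token and hash elided (the real values appear in the join command earlier in the log):

    sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" \
      kubeadm join control-plane.minikube.internal:8443 \
        --token <token> --discovery-token-ca-cert-hash sha256:<hash> \
        --ignore-preflight-errors=all \
        --cri-socket /var/run/crio/crio.sock \
        --node-name=multinode-405494-m02
    # kubeadm warns that a CRI socket without a URL scheme is deprecated;
    # unix:///var/run/crio/crio.sock would silence that warning
    sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet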
	I0116 03:09:27.637563  491150 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=6e8fa5f64d0e7272be43ff25ed3826261f0a2578 minikube.k8s.io/name=multinode-405494 minikube.k8s.io/updated_at=2024_01_16T03_09_27_0700 minikube.k8s.io/primary=false "-l minikube.k8s.io/primary!=true" --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:09:27.746877  491150 command_runner.go:130] > node/multinode-405494-m02 labeled
	I0116 03:09:27.770608  491150 command_runner.go:130] > node/multinode-405494-m03 labeled
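Both m02 and m03 get relabeled because the label command targets a selector rather than a single node name; a sketch of that invocation (commit and updated_at labels from the log elided for brevity):

    # "-l minikube.k8s.io/primary!=true" matches every non-primary node, so m02 and m03 are both updated
    sudo /var/lib/minikube/binaries/v1.28.4/kubectl \
      --kubeconfig=/var/lib/minikube/kubeconfig \
      label nodes -l 'minikube.k8s.io/primary!=true' --overwrite \
        minikube.k8s.io/version=v1.32.0 \
        minikube.k8s.io/name=multinode-405494 \
        minikube.k8s.io/primary=false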
	I0116 03:09:27.772992  491150 start.go:306] JoinCluster complete in 4.858086915s
	I0116 03:09:27.773019  491150 cni.go:84] Creating CNI manager for ""
	I0116 03:09:27.773026  491150 cni.go:136] 3 nodes found, recommending kindnet
	I0116 03:09:27.773075  491150 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0116 03:09:27.779158  491150 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0116 03:09:27.779200  491150 command_runner.go:130] >   Size: 2685752   	Blocks: 5248       IO Block: 4096   regular file
	I0116 03:09:27.779210  491150 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I0116 03:09:27.779219  491150 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0116 03:09:27.779226  491150 command_runner.go:130] > Access: 2024-01-16 03:06:53.963563216 +0000
	I0116 03:09:27.779234  491150 command_runner.go:130] > Modify: 2023-12-28 22:53:36.000000000 +0000
	I0116 03:09:27.779245  491150 command_runner.go:130] > Change: 2024-01-16 03:06:52.021563216 +0000
	I0116 03:09:27.779252  491150 command_runner.go:130] >  Birth: -
	I0116 03:09:27.779341  491150 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0116 03:09:27.779356  491150 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0116 03:09:27.800939  491150 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0116 03:09:28.199604  491150 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0116 03:09:28.199654  491150 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0116 03:09:28.199663  491150 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0116 03:09:28.199669  491150 command_runner.go:130] > daemonset.apps/kindnet configured
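Re-applying the CNI manifest is idempotent, which is why the kindnet objects report "unchanged"/"configured". A sketch of the two steps logged above:

    # confirm the stock CNI plugins are present on the node
    stat /opt/cni/bin/portmap
    # re-apply minikube's kindnet manifest (the file scp'd to /var/tmp/minikube/cni.yaml)
    sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply \
      --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml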
	I0116 03:09:28.200244  491150 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17965-468241/kubeconfig
	I0116 03:09:28.200635  491150 kapi.go:59] client config for multinode-405494: &rest.Config{Host:"https://192.168.39.70:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17965-468241/.minikube/profiles/multinode-405494/client.crt", KeyFile:"/home/jenkins/minikube-integration/17965-468241/.minikube/profiles/multinode-405494/client.key", CAFile:"/home/jenkins/minikube-integration/17965-468241/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), N
extProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c19be0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0116 03:09:28.201135  491150 round_trippers.go:463] GET https://192.168.39.70:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0116 03:09:28.201153  491150 round_trippers.go:469] Request Headers:
	I0116 03:09:28.201164  491150 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:09:28.201170  491150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 03:09:28.203950  491150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:09:28.203973  491150 round_trippers.go:577] Response Headers:
	I0116 03:09:28.203980  491150 round_trippers.go:580]     Audit-Id: c172f297-2545-4d1c-a03c-f61ca5d797ff
	I0116 03:09:28.203986  491150 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:09:28.203991  491150 round_trippers.go:580]     Content-Type: application/json
	I0116 03:09:28.203996  491150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 03:09:28.204001  491150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 03:09:28.204006  491150 round_trippers.go:580]     Content-Length: 291
	I0116 03:09:28.204011  491150 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:09:28 GMT
	I0116 03:09:28.204054  491150 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"dd77c785-c90f-4789-97cb-f593b7a7a7e2","resourceVersion":"896","creationTimestamp":"2024-01-16T02:57:11Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0116 03:09:28.204163  491150 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-405494" context rescaled to 1 replicas
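minikube keeps a single coredns replica by reading, and if needed writing, the deployment's scale subresource (the GET above already shows spec.replicas=1). A rough kubectl equivalent of that check-and-rescale:

    kubectl -n kube-system get deployment coredns -o jsonpath='{.spec.replicas}'
    kubectl -n kube-system scale deployment coredns --replicas=1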
	I0116 03:09:28.204196  491150 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.39.32 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0116 03:09:28.206231  491150 out.go:177] * Verifying Kubernetes components...
	I0116 03:09:28.207785  491150 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
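A minimal sketch of that check; the exit status of systemctl is-active is what tells minikube whether the kubelet unit is running:

    sudo systemctl is-active --quiet service kubelet && echo kubelet running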
	I0116 03:09:28.224012  491150 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17965-468241/kubeconfig
	I0116 03:09:28.224268  491150 kapi.go:59] client config for multinode-405494: &rest.Config{Host:"https://192.168.39.70:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17965-468241/.minikube/profiles/multinode-405494/client.crt", KeyFile:"/home/jenkins/minikube-integration/17965-468241/.minikube/profiles/multinode-405494/client.key", CAFile:"/home/jenkins/minikube-integration/17965-468241/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), N
extProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c19be0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0116 03:09:28.224538  491150 node_ready.go:35] waiting up to 6m0s for node "multinode-405494-m02" to be "Ready" ...
	I0116 03:09:28.224639  491150 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/multinode-405494-m02
	I0116 03:09:28.224652  491150 round_trippers.go:469] Request Headers:
	I0116 03:09:28.224663  491150 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:09:28.224676  491150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 03:09:28.227307  491150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:09:28.227340  491150 round_trippers.go:577] Response Headers:
	I0116 03:09:28.227352  491150 round_trippers.go:580]     Audit-Id: b9d5c3ca-a8bc-4737-a43f-854b7ec7dafa
	I0116 03:09:28.227362  491150 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:09:28.227370  491150 round_trippers.go:580]     Content-Type: application/json
	I0116 03:09:28.227376  491150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 03:09:28.227381  491150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 03:09:28.227388  491150 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:09:28 GMT
	I0116 03:09:28.227510  491150 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-405494-m02","uid":"90a1608a-dfc1-4cf0-9a8d-7faa9ad91c37","resourceVersion":"1051","creationTimestamp":"2024-01-16T03:09:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-405494-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-405494","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T03_09_27_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:09:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta
data":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec": [truncated 3993 chars]
	I0116 03:09:28.227878  491150 node_ready.go:49] node "multinode-405494-m02" has status "Ready":"True"
	I0116 03:09:28.227899  491150 node_ready.go:38] duration metric: took 3.340554ms waiting for node "multinode-405494-m02" to be "Ready" ...
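The readiness poll above boils down to reading the node's Ready condition. Rough shell equivalents, using the node that just rejoined and the same 6m0s budget as the log:

    kubectl get node multinode-405494-m02 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
    # or block until the node becomes Ready:
    kubectl wait --for=condition=Ready node/multinode-405494-m02 --timeout=6m0s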
	I0116 03:09:28.227911  491150 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 03:09:28.227993  491150 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods
	I0116 03:09:28.228005  491150 round_trippers.go:469] Request Headers:
	I0116 03:09:28.228013  491150 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:09:28.228025  491150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 03:09:28.233191  491150 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0116 03:09:28.233214  491150 round_trippers.go:577] Response Headers:
	I0116 03:09:28.233222  491150 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:09:28 GMT
	I0116 03:09:28.233227  491150 round_trippers.go:580]     Audit-Id: f3efb69e-28ba-45a9-afd7-e1808ced2bfe
	I0116 03:09:28.233233  491150 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:09:28.233238  491150 round_trippers.go:580]     Content-Type: application/json
	I0116 03:09:28.233243  491150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 03:09:28.233248  491150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 03:09:28.234327  491150 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1058"},"items":[{"metadata":{"name":"coredns-5dd5756b68-vwqvk","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"096151e2-c59c-4dcf-bd29-2029901902c9","resourceVersion":"892","creationTimestamp":"2024-01-16T02:57:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c3967b3a-de7c-4ccd-af3a-dd2a9e8b71f8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:57:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c3967b3a-de7c-4ccd-af3a-dd2a9e8b71f8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"
f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers": [truncated 82198 chars]
	I0116 03:09:28.236787  491150 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-vwqvk" in "kube-system" namespace to be "Ready" ...
	I0116 03:09:28.236873  491150 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-vwqvk
	I0116 03:09:28.236882  491150 round_trippers.go:469] Request Headers:
	I0116 03:09:28.236889  491150 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:09:28.236895  491150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 03:09:28.239663  491150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:09:28.239679  491150 round_trippers.go:577] Response Headers:
	I0116 03:09:28.239686  491150 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:09:28.239691  491150 round_trippers.go:580]     Content-Type: application/json
	I0116 03:09:28.239696  491150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 03:09:28.239701  491150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 03:09:28.239707  491150 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:09:28 GMT
	I0116 03:09:28.239712  491150 round_trippers.go:580]     Audit-Id: 44676814-bf2b-4556-b440-421882c732b8
	I0116 03:09:28.240020  491150 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-vwqvk","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"096151e2-c59c-4dcf-bd29-2029901902c9","resourceVersion":"892","creationTimestamp":"2024-01-16T02:57:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c3967b3a-de7c-4ccd-af3a-dd2a9e8b71f8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:57:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c3967b3a-de7c-4ccd-af3a-dd2a9e8b71f8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6264 chars]
	I0116 03:09:28.240490  491150 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/multinode-405494
	I0116 03:09:28.240504  491150 round_trippers.go:469] Request Headers:
	I0116 03:09:28.240512  491150 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:09:28.240518  491150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 03:09:28.242814  491150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:09:28.242829  491150 round_trippers.go:577] Response Headers:
	I0116 03:09:28.242836  491150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 03:09:28.242841  491150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 03:09:28.242847  491150 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:09:28 GMT
	I0116 03:09:28.242852  491150 round_trippers.go:580]     Audit-Id: b5df24ba-ada4-4d73-af8c-45b2bf16d186
	I0116 03:09:28.242857  491150 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:09:28.242862  491150 round_trippers.go:580]     Content-Type: application/json
	I0116 03:09:28.243233  491150 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-405494","uid":"396eb6bf-1dc3-46c5-8016-dbf8af754fa2","resourceVersion":"915","creationTimestamp":"2024-01-16T02:57:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-405494","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-405494","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_57_12_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:57:07Z","fieldsType":"FieldsV1","fiel [truncated 6212 chars]
	I0116 03:09:28.243543  491150 pod_ready.go:92] pod "coredns-5dd5756b68-vwqvk" in "kube-system" namespace has status "Ready":"True"
	I0116 03:09:28.243559  491150 pod_ready.go:81] duration metric: took 6.748977ms waiting for pod "coredns-5dd5756b68-vwqvk" in "kube-system" namespace to be "Ready" ...
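The same pattern repeats below for each system-critical pod listed at 03:09:28.227911. A compact shell equivalent of that wait loop (label selectors and the 6m0s budget taken from the log):

    for sel in k8s-app=kube-dns component=etcd component=kube-apiserver \
               component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler; do
      kubectl -n kube-system wait --for=condition=Ready pod -l "$sel" --timeout=6m0s
    done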
	I0116 03:09:28.243570  491150 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-405494" in "kube-system" namespace to be "Ready" ...
	I0116 03:09:28.243665  491150 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-405494
	I0116 03:09:28.243677  491150 round_trippers.go:469] Request Headers:
	I0116 03:09:28.243684  491150 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:09:28.243690  491150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 03:09:28.246028  491150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:09:28.246044  491150 round_trippers.go:577] Response Headers:
	I0116 03:09:28.246051  491150 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:09:28 GMT
	I0116 03:09:28.246056  491150 round_trippers.go:580]     Audit-Id: 4eaab43f-e6b2-4831-a3c1-4318ceb4ebe8
	I0116 03:09:28.246061  491150 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:09:28.246066  491150 round_trippers.go:580]     Content-Type: application/json
	I0116 03:09:28.246071  491150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 03:09:28.246076  491150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 03:09:28.246216  491150 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-405494","namespace":"kube-system","uid":"3f839da7-c0c0-4546-8848-1557cbf50722","resourceVersion":"866","creationTimestamp":"2024-01-16T02:57:12Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.70:2379","kubernetes.io/config.hash":"3a9e67d0e87fe64d9531234ab850034d","kubernetes.io/config.mirror":"3a9e67d0e87fe64d9531234ab850034d","kubernetes.io/config.seen":"2024-01-16T02:57:11.711592151Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-405494","uid":"396eb6bf-1dc3-46c5-8016-dbf8af754fa2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:57:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 5843 chars]
	I0116 03:09:28.246587  491150 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/multinode-405494
	I0116 03:09:28.246599  491150 round_trippers.go:469] Request Headers:
	I0116 03:09:28.246607  491150 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:09:28.246615  491150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 03:09:28.249508  491150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:09:28.249566  491150 round_trippers.go:577] Response Headers:
	I0116 03:09:28.249595  491150 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:09:28.249605  491150 round_trippers.go:580]     Content-Type: application/json
	I0116 03:09:28.249622  491150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 03:09:28.249636  491150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 03:09:28.249650  491150 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:09:28 GMT
	I0116 03:09:28.249662  491150 round_trippers.go:580]     Audit-Id: 45b95ffb-c662-4038-adec-ea57db7d9565
	I0116 03:09:28.249815  491150 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-405494","uid":"396eb6bf-1dc3-46c5-8016-dbf8af754fa2","resourceVersion":"915","creationTimestamp":"2024-01-16T02:57:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-405494","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-405494","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_57_12_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:57:07Z","fieldsType":"FieldsV1","fiel [truncated 6212 chars]
	I0116 03:09:28.250253  491150 pod_ready.go:92] pod "etcd-multinode-405494" in "kube-system" namespace has status "Ready":"True"
	I0116 03:09:28.250274  491150 pod_ready.go:81] duration metric: took 6.697152ms waiting for pod "etcd-multinode-405494" in "kube-system" namespace to be "Ready" ...
	I0116 03:09:28.250300  491150 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-405494" in "kube-system" namespace to be "Ready" ...
	I0116 03:09:28.250384  491150 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-405494
	I0116 03:09:28.250394  491150 round_trippers.go:469] Request Headers:
	I0116 03:09:28.250405  491150 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:09:28.250415  491150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 03:09:28.254839  491150 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0116 03:09:28.254863  491150 round_trippers.go:577] Response Headers:
	I0116 03:09:28.254871  491150 round_trippers.go:580]     Audit-Id: 25790c6c-bfea-408b-b2fe-0a7f781a9e70
	I0116 03:09:28.254876  491150 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:09:28.254886  491150 round_trippers.go:580]     Content-Type: application/json
	I0116 03:09:28.254891  491150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 03:09:28.254901  491150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 03:09:28.254912  491150 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:09:28 GMT
	I0116 03:09:28.255143  491150 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-405494","namespace":"kube-system","uid":"e242d3cf-6cf7-4b47-8d3e-a83e484554a1","resourceVersion":"882","creationTimestamp":"2024-01-16T02:57:09Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.70:8443","kubernetes.io/config.hash":"04bffd1a6d3ee0aae068c41e37830c9b","kubernetes.io/config.mirror":"04bffd1a6d3ee0aae068c41e37830c9b","kubernetes.io/config.seen":"2024-01-16T02:57:02.078602539Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-405494","uid":"396eb6bf-1dc3-46c5-8016-dbf8af754fa2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:57:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7380 chars]
	I0116 03:09:28.255740  491150 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/multinode-405494
	I0116 03:09:28.255763  491150 round_trippers.go:469] Request Headers:
	I0116 03:09:28.255774  491150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 03:09:28.255784  491150 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:09:28.258830  491150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 03:09:28.258853  491150 round_trippers.go:577] Response Headers:
	I0116 03:09:28.258861  491150 round_trippers.go:580]     Audit-Id: 6b9e86c0-f57e-4a39-8e23-9fc7b1d0a062
	I0116 03:09:28.258866  491150 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:09:28.258872  491150 round_trippers.go:580]     Content-Type: application/json
	I0116 03:09:28.258877  491150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 03:09:28.258882  491150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 03:09:28.258887  491150 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:09:28 GMT
	I0116 03:09:28.259158  491150 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-405494","uid":"396eb6bf-1dc3-46c5-8016-dbf8af754fa2","resourceVersion":"915","creationTimestamp":"2024-01-16T02:57:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-405494","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-405494","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_57_12_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:57:07Z","fieldsType":"FieldsV1","fiel [truncated 6212 chars]
	I0116 03:09:28.259672  491150 pod_ready.go:92] pod "kube-apiserver-multinode-405494" in "kube-system" namespace has status "Ready":"True"
	I0116 03:09:28.259700  491150 pod_ready.go:81] duration metric: took 9.387361ms waiting for pod "kube-apiserver-multinode-405494" in "kube-system" namespace to be "Ready" ...
	I0116 03:09:28.259718  491150 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-405494" in "kube-system" namespace to be "Ready" ...
	I0116 03:09:28.259803  491150 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-405494
	I0116 03:09:28.259815  491150 round_trippers.go:469] Request Headers:
	I0116 03:09:28.259826  491150 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:09:28.259836  491150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 03:09:28.271177  491150 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0116 03:09:28.271209  491150 round_trippers.go:577] Response Headers:
	I0116 03:09:28.271220  491150 round_trippers.go:580]     Audit-Id: bb7fd285-1e49-41fd-b3ed-51f9910f1c76
	I0116 03:09:28.271227  491150 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:09:28.271236  491150 round_trippers.go:580]     Content-Type: application/json
	I0116 03:09:28.271245  491150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 03:09:28.271252  491150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 03:09:28.271262  491150 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:09:28 GMT
	I0116 03:09:28.271431  491150 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-405494","namespace":"kube-system","uid":"0833b412-8909-4660-8e16-19701683358e","resourceVersion":"880","creationTimestamp":"2024-01-16T02:57:12Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"9eb78063d6e219f3cc5940494bdab4b2","kubernetes.io/config.mirror":"9eb78063d6e219f3cc5940494bdab4b2","kubernetes.io/config.seen":"2024-01-16T02:57:11.711589408Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-405494","uid":"396eb6bf-1dc3-46c5-8016-dbf8af754fa2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:57:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6950 chars]
	I0116 03:09:28.271999  491150 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/multinode-405494
	I0116 03:09:28.272018  491150 round_trippers.go:469] Request Headers:
	I0116 03:09:28.272030  491150 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:09:28.272058  491150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 03:09:28.274750  491150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:09:28.274775  491150 round_trippers.go:577] Response Headers:
	I0116 03:09:28.274786  491150 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:09:28 GMT
	I0116 03:09:28.274795  491150 round_trippers.go:580]     Audit-Id: a0b6a16a-39b4-4f66-bef7-a921d5811cd6
	I0116 03:09:28.274803  491150 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:09:28.274811  491150 round_trippers.go:580]     Content-Type: application/json
	I0116 03:09:28.274823  491150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 03:09:28.274834  491150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 03:09:28.275046  491150 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-405494","uid":"396eb6bf-1dc3-46c5-8016-dbf8af754fa2","resourceVersion":"915","creationTimestamp":"2024-01-16T02:57:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-405494","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-405494","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_57_12_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:57:07Z","fieldsType":"FieldsV1","fiel [truncated 6212 chars]
	I0116 03:09:28.275448  491150 pod_ready.go:92] pod "kube-controller-manager-multinode-405494" in "kube-system" namespace has status "Ready":"True"
	I0116 03:09:28.275467  491150 pod_ready.go:81] duration metric: took 15.741847ms waiting for pod "kube-controller-manager-multinode-405494" in "kube-system" namespace to be "Ready" ...
	I0116 03:09:28.275477  491150 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-gg8kv" in "kube-system" namespace to be "Ready" ...
	I0116 03:09:28.424835  491150 request.go:629] Waited for 149.262658ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gg8kv
	I0116 03:09:28.424933  491150 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gg8kv
	I0116 03:09:28.424938  491150 round_trippers.go:469] Request Headers:
	I0116 03:09:28.424948  491150 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:09:28.424956  491150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 03:09:28.428570  491150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 03:09:28.428597  491150 round_trippers.go:577] Response Headers:
	I0116 03:09:28.428604  491150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 03:09:28.428610  491150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 03:09:28.428615  491150 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:09:28 GMT
	I0116 03:09:28.428620  491150 round_trippers.go:580]     Audit-Id: 534928ff-2db1-4f10-b5f2-67c9452011d0
	I0116 03:09:28.428625  491150 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:09:28.428632  491150 round_trippers.go:580]     Content-Type: application/json
	I0116 03:09:28.428775  491150 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-gg8kv","generateName":"kube-proxy-","namespace":"kube-system","uid":"32841b88-1b06-46ed-b4ce-f73301ec0a85","resourceVersion":"838","creationTimestamp":"2024-01-16T02:57:23Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"fdad84fd-2af6-4360-a899-0f70f257935c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:57:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fdad84fd-2af6-4360-a899-0f70f257935c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5514 chars]
	I0116 03:09:28.625581  491150 request.go:629] Waited for 196.353621ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.70:8443/api/v1/nodes/multinode-405494
	I0116 03:09:28.625665  491150 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/multinode-405494
	I0116 03:09:28.625671  491150 round_trippers.go:469] Request Headers:
	I0116 03:09:28.625679  491150 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:09:28.625686  491150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 03:09:28.629910  491150 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0116 03:09:28.629946  491150 round_trippers.go:577] Response Headers:
	I0116 03:09:28.629956  491150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 03:09:28.629964  491150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 03:09:28.629971  491150 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:09:28 GMT
	I0116 03:09:28.629981  491150 round_trippers.go:580]     Audit-Id: ad5ba6f0-78bb-4a64-9345-44cf19ff073f
	I0116 03:09:28.629988  491150 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:09:28.629996  491150 round_trippers.go:580]     Content-Type: application/json
	I0116 03:09:28.630300  491150 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-405494","uid":"396eb6bf-1dc3-46c5-8016-dbf8af754fa2","resourceVersion":"915","creationTimestamp":"2024-01-16T02:57:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-405494","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-405494","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_57_12_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:57:07Z","fieldsType":"FieldsV1","fiel [truncated 6212 chars]
	I0116 03:09:28.630681  491150 pod_ready.go:92] pod "kube-proxy-gg8kv" in "kube-system" namespace has status "Ready":"True"
	I0116 03:09:28.630708  491150 pod_ready.go:81] duration metric: took 355.225529ms waiting for pod "kube-proxy-gg8kv" in "kube-system" namespace to be "Ready" ...
	I0116 03:09:28.630720  491150 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-ghscp" in "kube-system" namespace to be "Ready" ...
	I0116 03:09:28.825647  491150 request.go:629] Waited for 194.817252ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ghscp
	I0116 03:09:28.825750  491150 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ghscp
	I0116 03:09:28.825761  491150 round_trippers.go:469] Request Headers:
	I0116 03:09:28.825772  491150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 03:09:28.825783  491150 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:09:28.829561  491150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 03:09:28.829603  491150 round_trippers.go:577] Response Headers:
	I0116 03:09:28.829618  491150 round_trippers.go:580]     Audit-Id: 5895ab65-2317-4cab-9e73-99df53334913
	I0116 03:09:28.829626  491150 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:09:28.829633  491150 round_trippers.go:580]     Content-Type: application/json
	I0116 03:09:28.829641  491150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 03:09:28.829647  491150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 03:09:28.829654  491150 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:09:28 GMT
	I0116 03:09:28.830145  491150 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-ghscp","generateName":"kube-proxy-","namespace":"kube-system","uid":"62b6191a-df8d-444d-9176-3f265fd2084d","resourceVersion":"708","creationTimestamp":"2024-01-16T02:58:49Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"fdad84fd-2af6-4360-a899-0f70f257935c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:58:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fdad84fd-2af6-4360-a899-0f70f257935c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5526 chars]
	I0116 03:09:29.025247  491150 request.go:629] Waited for 194.498585ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.70:8443/api/v1/nodes/multinode-405494-m03
	I0116 03:09:29.025334  491150 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/multinode-405494-m03
	I0116 03:09:29.025346  491150 round_trippers.go:469] Request Headers:
	I0116 03:09:29.025358  491150 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:09:29.025371  491150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 03:09:29.028460  491150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 03:09:29.028494  491150 round_trippers.go:577] Response Headers:
	I0116 03:09:29.028506  491150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 03:09:29.028514  491150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 03:09:29.028522  491150 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:09:29 GMT
	I0116 03:09:29.028530  491150 round_trippers.go:580]     Audit-Id: 693b5ff5-668d-4861-ae65-30c8777203b0
	I0116 03:09:29.028538  491150 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:09:29.028546  491150 round_trippers.go:580]     Content-Type: application/json
	I0116 03:09:29.028709  491150 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-405494-m03","uid":"f017bb37-2198-45f8-8920-a0a10585c3e0","resourceVersion":"1052","creationTimestamp":"2024-01-16T02:59:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-405494-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-405494","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T03_09_27_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:59:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annota
tions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detac [truncated 3966 chars]
	I0116 03:09:29.029118  491150 pod_ready.go:92] pod "kube-proxy-ghscp" in "kube-system" namespace has status "Ready":"True"
	I0116 03:09:29.029155  491150 pod_ready.go:81] duration metric: took 398.419438ms waiting for pod "kube-proxy-ghscp" in "kube-system" namespace to be "Ready" ...
	I0116 03:09:29.029178  491150 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-m46rb" in "kube-system" namespace to be "Ready" ...
	I0116 03:09:29.225079  491150 request.go:629] Waited for 195.815914ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/kube-proxy-m46rb
	I0116 03:09:29.225173  491150 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/kube-proxy-m46rb
	I0116 03:09:29.225181  491150 round_trippers.go:469] Request Headers:
	I0116 03:09:29.225192  491150 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:09:29.225202  491150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 03:09:29.228772  491150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 03:09:29.228799  491150 round_trippers.go:577] Response Headers:
	I0116 03:09:29.228810  491150 round_trippers.go:580]     Content-Type: application/json
	I0116 03:09:29.228817  491150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 03:09:29.228825  491150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 03:09:29.228832  491150 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:09:29 GMT
	I0116 03:09:29.228841  491150 round_trippers.go:580]     Audit-Id: 359e1ed9-3575-40af-beae-e61a448c1987
	I0116 03:09:29.228849  491150 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:09:29.229145  491150 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-m46rb","generateName":"kube-proxy-","namespace":"kube-system","uid":"960fb4d4-836f-42c5-9d56-03daae9f5a12","resourceVersion":"1071","creationTimestamp":"2024-01-16T02:58:02Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"fdad84fd-2af6-4360-a899-0f70f257935c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:58:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fdad84fd-2af6-4360-a899-0f70f257935c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5727 chars]
	I0116 03:09:29.425050  491150 request.go:629] Waited for 195.314694ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.70:8443/api/v1/nodes/multinode-405494-m02
	I0116 03:09:29.425133  491150 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/multinode-405494-m02
	I0116 03:09:29.425141  491150 round_trippers.go:469] Request Headers:
	I0116 03:09:29.425152  491150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 03:09:29.425169  491150 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:09:29.428374  491150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 03:09:29.428395  491150 round_trippers.go:577] Response Headers:
	I0116 03:09:29.428404  491150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 03:09:29.428413  491150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 03:09:29.428421  491150 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:09:29 GMT
	I0116 03:09:29.428430  491150 round_trippers.go:580]     Audit-Id: bf9cb43e-722b-4166-990e-59adc6b10dd2
	I0116 03:09:29.428438  491150 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:09:29.428446  491150 round_trippers.go:580]     Content-Type: application/json
	I0116 03:09:29.428721  491150 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-405494-m02","uid":"90a1608a-dfc1-4cf0-9a8d-7faa9ad91c37","resourceVersion":"1051","creationTimestamp":"2024-01-16T03:09:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-405494-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-405494","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T03_09_27_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:09:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta
data":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec": [truncated 3993 chars]
	I0116 03:09:29.429012  491150 pod_ready.go:92] pod "kube-proxy-m46rb" in "kube-system" namespace has status "Ready":"True"
	I0116 03:09:29.429029  491150 pod_ready.go:81] duration metric: took 399.84413ms waiting for pod "kube-proxy-m46rb" in "kube-system" namespace to be "Ready" ...
	I0116 03:09:29.429040  491150 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-405494" in "kube-system" namespace to be "Ready" ...
	I0116 03:09:29.625172  491150 request.go:629] Waited for 196.030394ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-405494
	I0116 03:09:29.625247  491150 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-405494
	I0116 03:09:29.625252  491150 round_trippers.go:469] Request Headers:
	I0116 03:09:29.625273  491150 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:09:29.625279  491150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 03:09:29.628272  491150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:09:29.628295  491150 round_trippers.go:577] Response Headers:
	I0116 03:09:29.628302  491150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 03:09:29.628311  491150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 03:09:29.628320  491150 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:09:29 GMT
	I0116 03:09:29.628328  491150 round_trippers.go:580]     Audit-Id: 07e27fcd-dc0e-4a85-8c64-0744df927047
	I0116 03:09:29.628337  491150 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:09:29.628346  491150 round_trippers.go:580]     Content-Type: application/json
	I0116 03:09:29.628521  491150 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-405494","namespace":"kube-system","uid":"70c980cb-4ff9-45f5-960f-d8afa355229c","resourceVersion":"884","creationTimestamp":"2024-01-16T02:57:11Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"65069d20830c0b10a3d28746871e48c2","kubernetes.io/config.mirror":"65069d20830c0b10a3d28746871e48c2","kubernetes.io/config.seen":"2024-01-16T02:57:02.078604553Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-405494","uid":"396eb6bf-1dc3-46c5-8016-dbf8af754fa2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:57:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4680 chars]
	I0116 03:09:29.825281  491150 request.go:629] Waited for 196.304807ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.70:8443/api/v1/nodes/multinode-405494
	I0116 03:09:29.825375  491150 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/multinode-405494
	I0116 03:09:29.825386  491150 round_trippers.go:469] Request Headers:
	I0116 03:09:29.825396  491150 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:09:29.825403  491150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 03:09:29.828648  491150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 03:09:29.828699  491150 round_trippers.go:577] Response Headers:
	I0116 03:09:29.828711  491150 round_trippers.go:580]     Content-Type: application/json
	I0116 03:09:29.828719  491150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 03:09:29.828726  491150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 03:09:29.828734  491150 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:09:29 GMT
	I0116 03:09:29.828741  491150 round_trippers.go:580]     Audit-Id: 4d13768d-fa7b-40cd-ac78-4224d355ed8c
	I0116 03:09:29.828748  491150 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:09:29.829086  491150 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-405494","uid":"396eb6bf-1dc3-46c5-8016-dbf8af754fa2","resourceVersion":"915","creationTimestamp":"2024-01-16T02:57:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-405494","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-405494","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_57_12_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:57:07Z","fieldsType":"FieldsV1","fiel [truncated 6212 chars]
	I0116 03:09:29.829419  491150 pod_ready.go:92] pod "kube-scheduler-multinode-405494" in "kube-system" namespace has status "Ready":"True"
	I0116 03:09:29.829435  491150 pod_ready.go:81] duration metric: took 400.389898ms waiting for pod "kube-scheduler-multinode-405494" in "kube-system" namespace to be "Ready" ...
	I0116 03:09:29.829447  491150 pod_ready.go:38] duration metric: took 1.601521568s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
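The pod_ready wait loop recorded above polls each system-critical pod through the API server and reports Ready once the pod's PodReady condition is True (see the pod_ready.go:92 lines). The following is a minimal client-go sketch of that kind of check, not minikube's own pod_ready.go code; the kubeconfig path is a placeholder and the namespace/pod name are taken from the log only as an example.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder kubeconfig path; not taken from the log above.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Fetch one of the pods the log waits on and inspect its Ready condition.
	pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "kube-proxy-m46rb", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	ready := false
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
			ready = true
		}
	}
	fmt.Printf("pod %s Ready=%v\n", pod.Name, ready)
}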
	I0116 03:09:29.829465  491150 system_svc.go:44] waiting for kubelet service to be running ....
	I0116 03:09:29.829520  491150 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 03:09:29.844346  491150 system_svc.go:56] duration metric: took 14.867842ms WaitForService to wait for kubelet.
	I0116 03:09:29.844389  491150 kubeadm.go:581] duration metric: took 1.640167479s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0116 03:09:29.844416  491150 node_conditions.go:102] verifying NodePressure condition ...
	I0116 03:09:30.024844  491150 request.go:629] Waited for 180.328537ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.70:8443/api/v1/nodes
	I0116 03:09:30.024967  491150 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes
	I0116 03:09:30.024976  491150 round_trippers.go:469] Request Headers:
	I0116 03:09:30.024989  491150 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:09:30.025001  491150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 03:09:30.030903  491150 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0116 03:09:30.030940  491150 round_trippers.go:577] Response Headers:
	I0116 03:09:30.030953  491150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 03:09:30.030963  491150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 03:09:30.030971  491150 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:09:30 GMT
	I0116 03:09:30.030979  491150 round_trippers.go:580]     Audit-Id: 7bad59c5-ea49-4000-9050-a8809b8c9321
	I0116 03:09:30.030987  491150 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:09:30.031006  491150 round_trippers.go:580]     Content-Type: application/json
	I0116 03:09:30.032257  491150 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1073"},"items":[{"metadata":{"name":"multinode-405494","uid":"396eb6bf-1dc3-46c5-8016-dbf8af754fa2","resourceVersion":"915","creationTimestamp":"2024-01-16T02:57:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-405494","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-405494","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_57_12_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedField
s":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time": [truncated 16209 chars]
	I0116 03:09:30.033159  491150 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 03:09:30.033207  491150 node_conditions.go:123] node cpu capacity is 2
	I0116 03:09:30.033223  491150 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 03:09:30.033230  491150 node_conditions.go:123] node cpu capacity is 2
	I0116 03:09:30.033236  491150 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 03:09:30.033241  491150 node_conditions.go:123] node cpu capacity is 2
	I0116 03:09:30.033248  491150 node_conditions.go:105] duration metric: took 188.825681ms to run NodePressure ...
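The NodePressure step above lists the nodes once and reads each node's reported capacity (cpu 2 and ephemeral-storage 17784752Ki for all three nodes in this run). A short client-go sketch of that read, under the same placeholder-kubeconfig assumption as the previous sketch:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder kubeconfig path; not taken from the log above.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		// Capacity values are resource.Quantity; copy them so String() is callable.
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("%s cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
	}
}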
	I0116 03:09:30.033265  491150 start.go:228] waiting for startup goroutines ...
	I0116 03:09:30.033305  491150 start.go:242] writing updated cluster config ...
	I0116 03:09:30.033907  491150 config.go:182] Loaded profile config "multinode-405494": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 03:09:30.034004  491150 profile.go:148] Saving config to /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/multinode-405494/config.json ...
	I0116 03:09:30.038514  491150 out.go:177] * Starting worker node multinode-405494-m03 in cluster multinode-405494
	I0116 03:09:30.040387  491150 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0116 03:09:30.040432  491150 cache.go:56] Caching tarball of preloaded images
	I0116 03:09:30.040582  491150 preload.go:174] Found /home/jenkins/minikube-integration/17965-468241/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0116 03:09:30.040593  491150 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0116 03:09:30.040718  491150 profile.go:148] Saving config to /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/multinode-405494/config.json ...
	I0116 03:09:30.040929  491150 start.go:365] acquiring machines lock for multinode-405494-m03: {Name:mk901e5fae8c1a578d989b520053c709bdbdcf06 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0116 03:09:30.040979  491150 start.go:369] acquired machines lock for "multinode-405494-m03" in 27.66µs
	I0116 03:09:30.041000  491150 start.go:96] Skipping create...Using existing machine configuration
	I0116 03:09:30.041008  491150 fix.go:54] fixHost starting: m03
	I0116 03:09:30.041384  491150 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:09:30.041429  491150 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:09:30.059723  491150 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40463
	I0116 03:09:30.060306  491150 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:09:30.060936  491150 main.go:141] libmachine: Using API Version  1
	I0116 03:09:30.060962  491150 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:09:30.061367  491150 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:09:30.061620  491150 main.go:141] libmachine: (multinode-405494-m03) Calling .DriverName
	I0116 03:09:30.061844  491150 main.go:141] libmachine: (multinode-405494-m03) Calling .GetState
	I0116 03:09:30.063742  491150 fix.go:102] recreateIfNeeded on multinode-405494-m03: state=Running err=<nil>
	W0116 03:09:30.063767  491150 fix.go:128] unexpected machine state, will restart: <nil>
	I0116 03:09:30.066327  491150 out.go:177] * Updating the running kvm2 "multinode-405494-m03" VM ...
	I0116 03:09:30.067902  491150 machine.go:88] provisioning docker machine ...
	I0116 03:09:30.067932  491150 main.go:141] libmachine: (multinode-405494-m03) Calling .DriverName
	I0116 03:09:30.068268  491150 main.go:141] libmachine: (multinode-405494-m03) Calling .GetMachineName
	I0116 03:09:30.068457  491150 buildroot.go:166] provisioning hostname "multinode-405494-m03"
	I0116 03:09:30.068476  491150 main.go:141] libmachine: (multinode-405494-m03) Calling .GetMachineName
	I0116 03:09:30.068592  491150 main.go:141] libmachine: (multinode-405494-m03) Calling .GetSSHHostname
	I0116 03:09:30.071320  491150 main.go:141] libmachine: (multinode-405494-m03) DBG | domain multinode-405494-m03 has defined MAC address 52:54:00:d2:81:36 in network mk-multinode-405494
	I0116 03:09:30.071913  491150 main.go:141] libmachine: (multinode-405494-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:81:36", ip: ""} in network mk-multinode-405494: {Iface:virbr1 ExpiryTime:2024-01-16 03:59:26 +0000 UTC Type:0 Mac:52:54:00:d2:81:36 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:multinode-405494-m03 Clientid:01:52:54:00:d2:81:36}
	I0116 03:09:30.071946  491150 main.go:141] libmachine: (multinode-405494-m03) DBG | domain multinode-405494-m03 has defined IP address 192.168.39.182 and MAC address 52:54:00:d2:81:36 in network mk-multinode-405494
	I0116 03:09:30.072222  491150 main.go:141] libmachine: (multinode-405494-m03) Calling .GetSSHPort
	I0116 03:09:30.072444  491150 main.go:141] libmachine: (multinode-405494-m03) Calling .GetSSHKeyPath
	I0116 03:09:30.072653  491150 main.go:141] libmachine: (multinode-405494-m03) Calling .GetSSHKeyPath
	I0116 03:09:30.072819  491150 main.go:141] libmachine: (multinode-405494-m03) Calling .GetSSHUsername
	I0116 03:09:30.073043  491150 main.go:141] libmachine: Using SSH client type: native
	I0116 03:09:30.073425  491150 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.182 22 <nil> <nil>}
	I0116 03:09:30.073439  491150 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-405494-m03 && echo "multinode-405494-m03" | sudo tee /etc/hostname
	I0116 03:09:30.217114  491150 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-405494-m03
	
	I0116 03:09:30.217170  491150 main.go:141] libmachine: (multinode-405494-m03) Calling .GetSSHHostname
	I0116 03:09:30.220304  491150 main.go:141] libmachine: (multinode-405494-m03) DBG | domain multinode-405494-m03 has defined MAC address 52:54:00:d2:81:36 in network mk-multinode-405494
	I0116 03:09:30.220807  491150 main.go:141] libmachine: (multinode-405494-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:81:36", ip: ""} in network mk-multinode-405494: {Iface:virbr1 ExpiryTime:2024-01-16 03:59:26 +0000 UTC Type:0 Mac:52:54:00:d2:81:36 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:multinode-405494-m03 Clientid:01:52:54:00:d2:81:36}
	I0116 03:09:30.220842  491150 main.go:141] libmachine: (multinode-405494-m03) DBG | domain multinode-405494-m03 has defined IP address 192.168.39.182 and MAC address 52:54:00:d2:81:36 in network mk-multinode-405494
	I0116 03:09:30.221039  491150 main.go:141] libmachine: (multinode-405494-m03) Calling .GetSSHPort
	I0116 03:09:30.221299  491150 main.go:141] libmachine: (multinode-405494-m03) Calling .GetSSHKeyPath
	I0116 03:09:30.221513  491150 main.go:141] libmachine: (multinode-405494-m03) Calling .GetSSHKeyPath
	I0116 03:09:30.221694  491150 main.go:141] libmachine: (multinode-405494-m03) Calling .GetSSHUsername
	I0116 03:09:30.221894  491150 main.go:141] libmachine: Using SSH client type: native
	I0116 03:09:30.222301  491150 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.182 22 <nil> <nil>}
	I0116 03:09:30.222324  491150 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-405494-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-405494-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-405494-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0116 03:09:30.345613  491150 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0116 03:09:30.345649  491150 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17965-468241/.minikube CaCertPath:/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17965-468241/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17965-468241/.minikube}
	I0116 03:09:30.345672  491150 buildroot.go:174] setting up certificates
	I0116 03:09:30.345687  491150 provision.go:83] configureAuth start
	I0116 03:09:30.345701  491150 main.go:141] libmachine: (multinode-405494-m03) Calling .GetMachineName
	I0116 03:09:30.346030  491150 main.go:141] libmachine: (multinode-405494-m03) Calling .GetIP
	I0116 03:09:30.349013  491150 main.go:141] libmachine: (multinode-405494-m03) DBG | domain multinode-405494-m03 has defined MAC address 52:54:00:d2:81:36 in network mk-multinode-405494
	I0116 03:09:30.349444  491150 main.go:141] libmachine: (multinode-405494-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:81:36", ip: ""} in network mk-multinode-405494: {Iface:virbr1 ExpiryTime:2024-01-16 03:59:26 +0000 UTC Type:0 Mac:52:54:00:d2:81:36 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:multinode-405494-m03 Clientid:01:52:54:00:d2:81:36}
	I0116 03:09:30.349477  491150 main.go:141] libmachine: (multinode-405494-m03) DBG | domain multinode-405494-m03 has defined IP address 192.168.39.182 and MAC address 52:54:00:d2:81:36 in network mk-multinode-405494
	I0116 03:09:30.349743  491150 main.go:141] libmachine: (multinode-405494-m03) Calling .GetSSHHostname
	I0116 03:09:30.352333  491150 main.go:141] libmachine: (multinode-405494-m03) DBG | domain multinode-405494-m03 has defined MAC address 52:54:00:d2:81:36 in network mk-multinode-405494
	I0116 03:09:30.352727  491150 main.go:141] libmachine: (multinode-405494-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:81:36", ip: ""} in network mk-multinode-405494: {Iface:virbr1 ExpiryTime:2024-01-16 03:59:26 +0000 UTC Type:0 Mac:52:54:00:d2:81:36 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:multinode-405494-m03 Clientid:01:52:54:00:d2:81:36}
	I0116 03:09:30.352752  491150 main.go:141] libmachine: (multinode-405494-m03) DBG | domain multinode-405494-m03 has defined IP address 192.168.39.182 and MAC address 52:54:00:d2:81:36 in network mk-multinode-405494
	I0116 03:09:30.352962  491150 provision.go:138] copyHostCerts
	I0116 03:09:30.353001  491150 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17965-468241/.minikube/ca.pem
	I0116 03:09:30.353044  491150 exec_runner.go:144] found /home/jenkins/minikube-integration/17965-468241/.minikube/ca.pem, removing ...
	I0116 03:09:30.353056  491150 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17965-468241/.minikube/ca.pem
	I0116 03:09:30.353144  491150 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17965-468241/.minikube/ca.pem (1078 bytes)
	I0116 03:09:30.353275  491150 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17965-468241/.minikube/cert.pem
	I0116 03:09:30.353307  491150 exec_runner.go:144] found /home/jenkins/minikube-integration/17965-468241/.minikube/cert.pem, removing ...
	I0116 03:09:30.353314  491150 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17965-468241/.minikube/cert.pem
	I0116 03:09:30.353357  491150 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17965-468241/.minikube/cert.pem (1123 bytes)
	I0116 03:09:30.353417  491150 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17965-468241/.minikube/key.pem
	I0116 03:09:30.353444  491150 exec_runner.go:144] found /home/jenkins/minikube-integration/17965-468241/.minikube/key.pem, removing ...
	I0116 03:09:30.353454  491150 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17965-468241/.minikube/key.pem
	I0116 03:09:30.353489  491150 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17965-468241/.minikube/key.pem (1679 bytes)
	I0116 03:09:30.353556  491150 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17965-468241/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca-key.pem org=jenkins.multinode-405494-m03 san=[192.168.39.182 192.168.39.182 localhost 127.0.0.1 minikube multinode-405494-m03]
	I0116 03:09:30.517667  491150 provision.go:172] copyRemoteCerts
	I0116 03:09:30.517736  491150 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0116 03:09:30.517765  491150 main.go:141] libmachine: (multinode-405494-m03) Calling .GetSSHHostname
	I0116 03:09:30.521370  491150 main.go:141] libmachine: (multinode-405494-m03) DBG | domain multinode-405494-m03 has defined MAC address 52:54:00:d2:81:36 in network mk-multinode-405494
	I0116 03:09:30.521813  491150 main.go:141] libmachine: (multinode-405494-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:81:36", ip: ""} in network mk-multinode-405494: {Iface:virbr1 ExpiryTime:2024-01-16 03:59:26 +0000 UTC Type:0 Mac:52:54:00:d2:81:36 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:multinode-405494-m03 Clientid:01:52:54:00:d2:81:36}
	I0116 03:09:30.521856  491150 main.go:141] libmachine: (multinode-405494-m03) DBG | domain multinode-405494-m03 has defined IP address 192.168.39.182 and MAC address 52:54:00:d2:81:36 in network mk-multinode-405494
	I0116 03:09:30.522099  491150 main.go:141] libmachine: (multinode-405494-m03) Calling .GetSSHPort
	I0116 03:09:30.522350  491150 main.go:141] libmachine: (multinode-405494-m03) Calling .GetSSHKeyPath
	I0116 03:09:30.522600  491150 main.go:141] libmachine: (multinode-405494-m03) Calling .GetSSHUsername
	I0116 03:09:30.522930  491150 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/multinode-405494-m03/id_rsa Username:docker}
	I0116 03:09:30.615927  491150 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0116 03:09:30.616013  491150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0116 03:09:30.641765  491150 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-468241/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0116 03:09:30.641851  491150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0116 03:09:30.666558  491150 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-468241/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0116 03:09:30.666647  491150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0116 03:09:30.692791  491150 provision.go:86] duration metric: configureAuth took 347.090355ms
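The provision.go step above generates a server certificate whose SANs cover the node IP (192.168.39.182), localhost, 127.0.0.1, and the minikube hostnames, then copies server.pem/server-key.pem to /etc/docker on the guest. The sketch below is a rough crypto/x509 illustration of issuing a certificate with those SANs; it self-signs for brevity, whereas the flow in the log signs against the ca.pem/ca-key.pem pair it references, so this is not minikube's actual code path.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	// SANs mirror the san=[...] list in the provision.go line above.
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.multinode-405494-m03"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "multinode-405494-m03"},
		IPAddresses:  []net.IP{net.ParseIP("192.168.39.182"), net.ParseIP("127.0.0.1")},
	}
	// Self-signed here; the real provisioner uses the CA cert/key as the parent.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}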
	I0116 03:09:30.692824  491150 buildroot.go:189] setting minikube options for container-runtime
	I0116 03:09:30.693108  491150 config.go:182] Loaded profile config "multinode-405494": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 03:09:30.693216  491150 main.go:141] libmachine: (multinode-405494-m03) Calling .GetSSHHostname
	I0116 03:09:30.696949  491150 main.go:141] libmachine: (multinode-405494-m03) DBG | domain multinode-405494-m03 has defined MAC address 52:54:00:d2:81:36 in network mk-multinode-405494
	I0116 03:09:30.697444  491150 main.go:141] libmachine: (multinode-405494-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:81:36", ip: ""} in network mk-multinode-405494: {Iface:virbr1 ExpiryTime:2024-01-16 03:59:26 +0000 UTC Type:0 Mac:52:54:00:d2:81:36 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:multinode-405494-m03 Clientid:01:52:54:00:d2:81:36}
	I0116 03:09:30.697483  491150 main.go:141] libmachine: (multinode-405494-m03) DBG | domain multinode-405494-m03 has defined IP address 192.168.39.182 and MAC address 52:54:00:d2:81:36 in network mk-multinode-405494
	I0116 03:09:30.697751  491150 main.go:141] libmachine: (multinode-405494-m03) Calling .GetSSHPort
	I0116 03:09:30.697974  491150 main.go:141] libmachine: (multinode-405494-m03) Calling .GetSSHKeyPath
	I0116 03:09:30.698186  491150 main.go:141] libmachine: (multinode-405494-m03) Calling .GetSSHKeyPath
	I0116 03:09:30.698370  491150 main.go:141] libmachine: (multinode-405494-m03) Calling .GetSSHUsername
	I0116 03:09:30.698580  491150 main.go:141] libmachine: Using SSH client type: native
	I0116 03:09:30.698901  491150 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.182 22 <nil> <nil>}
	I0116 03:09:30.698918  491150 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0116 03:11:01.257230  491150 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0116 03:11:01.257270  491150 machine.go:91] provisioned docker machine in 1m31.189352881s
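The "%!s(MISSING)" and "%!N(MISSING)" fragments in the SSH command above (and in the later "date +%!s(MISSING).%!N(MISSING)" call) are not corruption in this report: this is how Go's fmt package renders format verbs when a string containing %s or %N is passed through a Printf-style logger with no arguments supplied. That suggests the commands actually executed were "printf %s ..." and "date +%s.%N", which fits the seconds.nanoseconds guest-clock value (1705374661.511710563) logged further down. A one-line demonstration:

package main

import "fmt"

func main() {
	// Logging a command template that itself contains % verbs, without arguments,
	// reproduces the "(MISSING)" markers seen in the log lines above.
	fmt.Printf("date +%s.%N\n") // prints: date +%!s(MISSING).%!N(MISSING)
}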
	I0116 03:11:01.257283  491150 start.go:300] post-start starting for "multinode-405494-m03" (driver="kvm2")
	I0116 03:11:01.257296  491150 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0116 03:11:01.257317  491150 main.go:141] libmachine: (multinode-405494-m03) Calling .DriverName
	I0116 03:11:01.257675  491150 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0116 03:11:01.257716  491150 main.go:141] libmachine: (multinode-405494-m03) Calling .GetSSHHostname
	I0116 03:11:01.260696  491150 main.go:141] libmachine: (multinode-405494-m03) DBG | domain multinode-405494-m03 has defined MAC address 52:54:00:d2:81:36 in network mk-multinode-405494
	I0116 03:11:01.261096  491150 main.go:141] libmachine: (multinode-405494-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:81:36", ip: ""} in network mk-multinode-405494: {Iface:virbr1 ExpiryTime:2024-01-16 03:59:26 +0000 UTC Type:0 Mac:52:54:00:d2:81:36 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:multinode-405494-m03 Clientid:01:52:54:00:d2:81:36}
	I0116 03:11:01.261133  491150 main.go:141] libmachine: (multinode-405494-m03) DBG | domain multinode-405494-m03 has defined IP address 192.168.39.182 and MAC address 52:54:00:d2:81:36 in network mk-multinode-405494
	I0116 03:11:01.261308  491150 main.go:141] libmachine: (multinode-405494-m03) Calling .GetSSHPort
	I0116 03:11:01.261526  491150 main.go:141] libmachine: (multinode-405494-m03) Calling .GetSSHKeyPath
	I0116 03:11:01.261708  491150 main.go:141] libmachine: (multinode-405494-m03) Calling .GetSSHUsername
	I0116 03:11:01.261898  491150 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/multinode-405494-m03/id_rsa Username:docker}
	I0116 03:11:01.354448  491150 ssh_runner.go:195] Run: cat /etc/os-release
	I0116 03:11:01.359136  491150 command_runner.go:130] > NAME=Buildroot
	I0116 03:11:01.359157  491150 command_runner.go:130] > VERSION=2021.02.12-1-g19d536a-dirty
	I0116 03:11:01.359161  491150 command_runner.go:130] > ID=buildroot
	I0116 03:11:01.359166  491150 command_runner.go:130] > VERSION_ID=2021.02.12
	I0116 03:11:01.359170  491150 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0116 03:11:01.359197  491150 info.go:137] Remote host: Buildroot 2021.02.12
	I0116 03:11:01.359208  491150 filesync.go:126] Scanning /home/jenkins/minikube-integration/17965-468241/.minikube/addons for local assets ...
	I0116 03:11:01.359293  491150 filesync.go:126] Scanning /home/jenkins/minikube-integration/17965-468241/.minikube/files for local assets ...
	I0116 03:11:01.359362  491150 filesync.go:149] local asset: /home/jenkins/minikube-integration/17965-468241/.minikube/files/etc/ssl/certs/4754782.pem -> 4754782.pem in /etc/ssl/certs
	I0116 03:11:01.359375  491150 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-468241/.minikube/files/etc/ssl/certs/4754782.pem -> /etc/ssl/certs/4754782.pem
	I0116 03:11:01.359455  491150 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0116 03:11:01.368224  491150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/files/etc/ssl/certs/4754782.pem --> /etc/ssl/certs/4754782.pem (1708 bytes)
	I0116 03:11:01.392450  491150 start.go:303] post-start completed in 135.150605ms
	I0116 03:11:01.392483  491150 fix.go:56] fixHost completed within 1m31.351473809s
	I0116 03:11:01.392521  491150 main.go:141] libmachine: (multinode-405494-m03) Calling .GetSSHHostname
	I0116 03:11:01.395562  491150 main.go:141] libmachine: (multinode-405494-m03) DBG | domain multinode-405494-m03 has defined MAC address 52:54:00:d2:81:36 in network mk-multinode-405494
	I0116 03:11:01.395891  491150 main.go:141] libmachine: (multinode-405494-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:81:36", ip: ""} in network mk-multinode-405494: {Iface:virbr1 ExpiryTime:2024-01-16 03:59:26 +0000 UTC Type:0 Mac:52:54:00:d2:81:36 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:multinode-405494-m03 Clientid:01:52:54:00:d2:81:36}
	I0116 03:11:01.395928  491150 main.go:141] libmachine: (multinode-405494-m03) DBG | domain multinode-405494-m03 has defined IP address 192.168.39.182 and MAC address 52:54:00:d2:81:36 in network mk-multinode-405494
	I0116 03:11:01.396042  491150 main.go:141] libmachine: (multinode-405494-m03) Calling .GetSSHPort
	I0116 03:11:01.396273  491150 main.go:141] libmachine: (multinode-405494-m03) Calling .GetSSHKeyPath
	I0116 03:11:01.396438  491150 main.go:141] libmachine: (multinode-405494-m03) Calling .GetSSHKeyPath
	I0116 03:11:01.396552  491150 main.go:141] libmachine: (multinode-405494-m03) Calling .GetSSHUsername
	I0116 03:11:01.396721  491150 main.go:141] libmachine: Using SSH client type: native
	I0116 03:11:01.397029  491150 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.182 22 <nil> <nil>}
	I0116 03:11:01.397039  491150 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0116 03:11:01.517516  491150 main.go:141] libmachine: SSH cmd err, output: <nil>: 1705374661.511710563
	
	I0116 03:11:01.517543  491150 fix.go:206] guest clock: 1705374661.511710563
	I0116 03:11:01.517551  491150 fix.go:219] Guest: 2024-01-16 03:11:01.511710563 +0000 UTC Remote: 2024-01-16 03:11:01.392487253 +0000 UTC m=+557.797637628 (delta=119.22331ms)
	I0116 03:11:01.517568  491150 fix.go:190] guest clock delta is within tolerance: 119.22331ms
	I0116 03:11:01.517573  491150 start.go:83] releasing machines lock for "multinode-405494-m03", held for 1m31.476584415s
	I0116 03:11:01.517595  491150 main.go:141] libmachine: (multinode-405494-m03) Calling .DriverName
	I0116 03:11:01.517937  491150 main.go:141] libmachine: (multinode-405494-m03) Calling .GetIP
	I0116 03:11:01.520981  491150 main.go:141] libmachine: (multinode-405494-m03) DBG | domain multinode-405494-m03 has defined MAC address 52:54:00:d2:81:36 in network mk-multinode-405494
	I0116 03:11:01.521472  491150 main.go:141] libmachine: (multinode-405494-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:81:36", ip: ""} in network mk-multinode-405494: {Iface:virbr1 ExpiryTime:2024-01-16 03:59:26 +0000 UTC Type:0 Mac:52:54:00:d2:81:36 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:multinode-405494-m03 Clientid:01:52:54:00:d2:81:36}
	I0116 03:11:01.521514  491150 main.go:141] libmachine: (multinode-405494-m03) DBG | domain multinode-405494-m03 has defined IP address 192.168.39.182 and MAC address 52:54:00:d2:81:36 in network mk-multinode-405494
	I0116 03:11:01.523584  491150 out.go:177] * Found network options:
	I0116 03:11:01.525546  491150 out.go:177]   - NO_PROXY=192.168.39.70,192.168.39.32
	W0116 03:11:01.527156  491150 proxy.go:119] fail to check proxy env: Error ip not in block
	W0116 03:11:01.527181  491150 proxy.go:119] fail to check proxy env: Error ip not in block
	I0116 03:11:01.527203  491150 main.go:141] libmachine: (multinode-405494-m03) Calling .DriverName
	I0116 03:11:01.528018  491150 main.go:141] libmachine: (multinode-405494-m03) Calling .DriverName
	I0116 03:11:01.528240  491150 main.go:141] libmachine: (multinode-405494-m03) Calling .DriverName
	I0116 03:11:01.528367  491150 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	W0116 03:11:01.528388  491150 proxy.go:119] fail to check proxy env: Error ip not in block
	W0116 03:11:01.528412  491150 proxy.go:119] fail to check proxy env: Error ip not in block
	I0116 03:11:01.528419  491150 main.go:141] libmachine: (multinode-405494-m03) Calling .GetSSHHostname
	I0116 03:11:01.528482  491150 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0116 03:11:01.528526  491150 main.go:141] libmachine: (multinode-405494-m03) Calling .GetSSHHostname
	I0116 03:11:01.531418  491150 main.go:141] libmachine: (multinode-405494-m03) DBG | domain multinode-405494-m03 has defined MAC address 52:54:00:d2:81:36 in network mk-multinode-405494
	I0116 03:11:01.531566  491150 main.go:141] libmachine: (multinode-405494-m03) DBG | domain multinode-405494-m03 has defined MAC address 52:54:00:d2:81:36 in network mk-multinode-405494
	I0116 03:11:01.531954  491150 main.go:141] libmachine: (multinode-405494-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:81:36", ip: ""} in network mk-multinode-405494: {Iface:virbr1 ExpiryTime:2024-01-16 03:59:26 +0000 UTC Type:0 Mac:52:54:00:d2:81:36 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:multinode-405494-m03 Clientid:01:52:54:00:d2:81:36}
	I0116 03:11:01.531986  491150 main.go:141] libmachine: (multinode-405494-m03) DBG | domain multinode-405494-m03 has defined IP address 192.168.39.182 and MAC address 52:54:00:d2:81:36 in network mk-multinode-405494
	I0116 03:11:01.532067  491150 main.go:141] libmachine: (multinode-405494-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:81:36", ip: ""} in network mk-multinode-405494: {Iface:virbr1 ExpiryTime:2024-01-16 03:59:26 +0000 UTC Type:0 Mac:52:54:00:d2:81:36 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:multinode-405494-m03 Clientid:01:52:54:00:d2:81:36}
	I0116 03:11:01.532113  491150 main.go:141] libmachine: (multinode-405494-m03) DBG | domain multinode-405494-m03 has defined IP address 192.168.39.182 and MAC address 52:54:00:d2:81:36 in network mk-multinode-405494
	I0116 03:11:01.532121  491150 main.go:141] libmachine: (multinode-405494-m03) Calling .GetSSHPort
	I0116 03:11:01.532347  491150 main.go:141] libmachine: (multinode-405494-m03) Calling .GetSSHKeyPath
	I0116 03:11:01.532369  491150 main.go:141] libmachine: (multinode-405494-m03) Calling .GetSSHPort
	I0116 03:11:01.532542  491150 main.go:141] libmachine: (multinode-405494-m03) Calling .GetSSHUsername
	I0116 03:11:01.532617  491150 main.go:141] libmachine: (multinode-405494-m03) Calling .GetSSHKeyPath
	I0116 03:11:01.532777  491150 main.go:141] libmachine: (multinode-405494-m03) Calling .GetSSHUsername
	I0116 03:11:01.532802  491150 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/multinode-405494-m03/id_rsa Username:docker}
	I0116 03:11:01.532927  491150 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/multinode-405494-m03/id_rsa Username:docker}
	I0116 03:11:01.768676  491150 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0116 03:11:01.768676  491150 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0116 03:11:01.775824  491150 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0116 03:11:01.776183  491150 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0116 03:11:01.776246  491150 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0116 03:11:01.784852  491150 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0116 03:11:01.784877  491150 start.go:475] detecting cgroup driver to use...
	I0116 03:11:01.784959  491150 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0116 03:11:01.799791  491150 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0116 03:11:01.812494  491150 docker.go:217] disabling cri-docker service (if available) ...
	I0116 03:11:01.812569  491150 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0116 03:11:01.828992  491150 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0116 03:11:01.843417  491150 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0116 03:11:01.995900  491150 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0116 03:11:02.135655  491150 docker.go:233] disabling docker service ...
	I0116 03:11:02.135735  491150 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0116 03:11:02.152839  491150 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0116 03:11:02.166687  491150 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0116 03:11:02.305113  491150 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0116 03:11:02.442300  491150 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0116 03:11:02.456675  491150 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0116 03:11:02.476605  491150 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0116 03:11:02.477218  491150 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0116 03:11:02.477287  491150 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 03:11:02.491421  491150 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0116 03:11:02.491495  491150 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 03:11:02.502322  491150 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 03:11:02.529469  491150 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 03:11:02.542150  491150 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0116 03:11:02.554676  491150 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0116 03:11:02.565550  491150 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0116 03:11:02.565667  491150 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0116 03:11:02.575415  491150 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0116 03:11:02.709576  491150 ssh_runner.go:195] Run: sudo systemctl restart crio
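The sed edits above pin the pause image to registry.k8s.io/pause:3.9, set the cgroup manager to cgroupfs, and re-add conmon_cgroup = "pod" in /etc/crio/crio.conf.d/02-crio.conf before CRI-O is restarted. Assuming CRI-O's standard config layout (the TOML section headers are not shown in the log), the relevant part of that drop-in would end up looking roughly like:

[crio.image]
pause_image = "registry.k8s.io/pause:3.9"

[crio.runtime]
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"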
	I0116 03:11:02.954102  491150 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0116 03:11:02.954181  491150 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0116 03:11:02.959612  491150 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0116 03:11:02.959642  491150 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0116 03:11:02.959653  491150 command_runner.go:130] > Device: 16h/22d	Inode: 1218        Links: 1
	I0116 03:11:02.959665  491150 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0116 03:11:02.959673  491150 command_runner.go:130] > Access: 2024-01-16 03:11:02.882362901 +0000
	I0116 03:11:02.959684  491150 command_runner.go:130] > Modify: 2024-01-16 03:11:02.867361987 +0000
	I0116 03:11:02.959691  491150 command_runner.go:130] > Change: 2024-01-16 03:11:02.867361987 +0000
	I0116 03:11:02.959701  491150 command_runner.go:130] >  Birth: -
	I0116 03:11:02.959826  491150 start.go:543] Will wait 60s for crictl version
	I0116 03:11:02.959885  491150 ssh_runner.go:195] Run: which crictl
	I0116 03:11:02.963954  491150 command_runner.go:130] > /usr/bin/crictl
	I0116 03:11:02.964170  491150 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0116 03:11:03.006611  491150 command_runner.go:130] > Version:  0.1.0
	I0116 03:11:03.006647  491150 command_runner.go:130] > RuntimeName:  cri-o
	I0116 03:11:03.006654  491150 command_runner.go:130] > RuntimeVersion:  1.24.1
	I0116 03:11:03.006848  491150 command_runner.go:130] > RuntimeApiVersion:  v1
	I0116 03:11:03.008289  491150 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0116 03:11:03.008379  491150 ssh_runner.go:195] Run: crio --version
	I0116 03:11:03.063416  491150 command_runner.go:130] > crio version 1.24.1
	I0116 03:11:03.063438  491150 command_runner.go:130] > Version:          1.24.1
	I0116 03:11:03.063445  491150 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0116 03:11:03.063449  491150 command_runner.go:130] > GitTreeState:     dirty
	I0116 03:11:03.063455  491150 command_runner.go:130] > BuildDate:        2023-12-28T22:46:29Z
	I0116 03:11:03.063460  491150 command_runner.go:130] > GoVersion:        go1.19.9
	I0116 03:11:03.063464  491150 command_runner.go:130] > Compiler:         gc
	I0116 03:11:03.063468  491150 command_runner.go:130] > Platform:         linux/amd64
	I0116 03:11:03.063474  491150 command_runner.go:130] > Linkmode:         dynamic
	I0116 03:11:03.063481  491150 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0116 03:11:03.063485  491150 command_runner.go:130] > SeccompEnabled:   true
	I0116 03:11:03.063489  491150 command_runner.go:130] > AppArmorEnabled:  false
	I0116 03:11:03.063728  491150 ssh_runner.go:195] Run: crio --version
	I0116 03:11:03.112608  491150 command_runner.go:130] > crio version 1.24.1
	I0116 03:11:03.112647  491150 command_runner.go:130] > Version:          1.24.1
	I0116 03:11:03.112659  491150 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0116 03:11:03.112666  491150 command_runner.go:130] > GitTreeState:     dirty
	I0116 03:11:03.112676  491150 command_runner.go:130] > BuildDate:        2023-12-28T22:46:29Z
	I0116 03:11:03.112683  491150 command_runner.go:130] > GoVersion:        go1.19.9
	I0116 03:11:03.112690  491150 command_runner.go:130] > Compiler:         gc
	I0116 03:11:03.112697  491150 command_runner.go:130] > Platform:         linux/amd64
	I0116 03:11:03.112706  491150 command_runner.go:130] > Linkmode:         dynamic
	I0116 03:11:03.112721  491150 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0116 03:11:03.112731  491150 command_runner.go:130] > SeccompEnabled:   true
	I0116 03:11:03.112738  491150 command_runner.go:130] > AppArmorEnabled:  false
	I0116 03:11:03.115099  491150 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0116 03:11:03.116679  491150 out.go:177]   - env NO_PROXY=192.168.39.70
	I0116 03:11:03.118455  491150 out.go:177]   - env NO_PROXY=192.168.39.70,192.168.39.32
	I0116 03:11:03.119908  491150 main.go:141] libmachine: (multinode-405494-m03) Calling .GetIP
	I0116 03:11:03.122902  491150 main.go:141] libmachine: (multinode-405494-m03) DBG | domain multinode-405494-m03 has defined MAC address 52:54:00:d2:81:36 in network mk-multinode-405494
	I0116 03:11:03.123280  491150 main.go:141] libmachine: (multinode-405494-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:81:36", ip: ""} in network mk-multinode-405494: {Iface:virbr1 ExpiryTime:2024-01-16 03:59:26 +0000 UTC Type:0 Mac:52:54:00:d2:81:36 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:multinode-405494-m03 Clientid:01:52:54:00:d2:81:36}
	I0116 03:11:03.123309  491150 main.go:141] libmachine: (multinode-405494-m03) DBG | domain multinode-405494-m03 has defined IP address 192.168.39.182 and MAC address 52:54:00:d2:81:36 in network mk-multinode-405494
	I0116 03:11:03.123590  491150 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0116 03:11:03.128227  491150 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0116 03:11:03.128486  491150 certs.go:56] Setting up /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/multinode-405494 for IP: 192.168.39.182
	I0116 03:11:03.128533  491150 certs.go:190] acquiring lock for shared ca certs: {Name:mkab5aeea3e0ed69f3592dc8f418e07c6075b130 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:11:03.128694  491150 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17965-468241/.minikube/ca.key
	I0116 03:11:03.128732  491150 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17965-468241/.minikube/proxy-client-ca.key
	I0116 03:11:03.128746  491150 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-468241/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0116 03:11:03.128761  491150 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-468241/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0116 03:11:03.128773  491150 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-468241/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0116 03:11:03.128784  491150 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-468241/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0116 03:11:03.128839  491150 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/475478.pem (1338 bytes)
	W0116 03:11:03.128866  491150 certs.go:433] ignoring /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/475478_empty.pem, impossibly tiny 0 bytes
	I0116 03:11:03.128877  491150 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca-key.pem (1679 bytes)
	I0116 03:11:03.128903  491150 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem (1078 bytes)
	I0116 03:11:03.128928  491150 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/cert.pem (1123 bytes)
	I0116 03:11:03.128950  491150 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/key.pem (1679 bytes)
	I0116 03:11:03.128990  491150 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17965-468241/.minikube/files/etc/ssl/certs/4754782.pem (1708 bytes)
	I0116 03:11:03.129024  491150 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-468241/.minikube/files/etc/ssl/certs/4754782.pem -> /usr/share/ca-certificates/4754782.pem
	I0116 03:11:03.129038  491150 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-468241/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0116 03:11:03.129049  491150 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/475478.pem -> /usr/share/ca-certificates/475478.pem
	I0116 03:11:03.129397  491150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0116 03:11:03.156306  491150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0116 03:11:03.181238  491150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0116 03:11:03.206051  491150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0116 03:11:03.230438  491150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/files/etc/ssl/certs/4754782.pem --> /usr/share/ca-certificates/4754782.pem (1708 bytes)
	I0116 03:11:03.254169  491150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0116 03:11:03.279979  491150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/certs/475478.pem --> /usr/share/ca-certificates/475478.pem (1338 bytes)
	I0116 03:11:03.307242  491150 ssh_runner.go:195] Run: openssl version
	I0116 03:11:03.313271  491150 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0116 03:11:03.313361  491150 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4754782.pem && ln -fs /usr/share/ca-certificates/4754782.pem /etc/ssl/certs/4754782.pem"
	I0116 03:11:03.325216  491150 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4754782.pem
	I0116 03:11:03.330230  491150 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jan 16 02:44 /usr/share/ca-certificates/4754782.pem
	I0116 03:11:03.330503  491150 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 16 02:44 /usr/share/ca-certificates/4754782.pem
	I0116 03:11:03.330559  491150 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4754782.pem
	I0116 03:11:03.336270  491150 command_runner.go:130] > 3ec20f2e
	I0116 03:11:03.336607  491150 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4754782.pem /etc/ssl/certs/3ec20f2e.0"
	I0116 03:11:03.346330  491150 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0116 03:11:03.356257  491150 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0116 03:11:03.360941  491150 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jan 16 02:35 /usr/share/ca-certificates/minikubeCA.pem
	I0116 03:11:03.361168  491150 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 16 02:35 /usr/share/ca-certificates/minikubeCA.pem
	I0116 03:11:03.361229  491150 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0116 03:11:03.366493  491150 command_runner.go:130] > b5213941
	I0116 03:11:03.366807  491150 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0116 03:11:03.376419  491150 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/475478.pem && ln -fs /usr/share/ca-certificates/475478.pem /etc/ssl/certs/475478.pem"
	I0116 03:11:03.386609  491150 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/475478.pem
	I0116 03:11:03.391052  491150 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jan 16 02:44 /usr/share/ca-certificates/475478.pem
	I0116 03:11:03.391204  491150 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 16 02:44 /usr/share/ca-certificates/475478.pem
	I0116 03:11:03.391273  491150 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/475478.pem
	I0116 03:11:03.397827  491150 command_runner.go:130] > 51391683
	I0116 03:11:03.397924  491150 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/475478.pem /etc/ssl/certs/51391683.0"
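Note: the block above is the standard OpenSSL trust-store setup. Each CA PEM copied to the node is hashed with "openssl x509 -hash -noout -in <pem>", and a symlink /etc/ssl/certs/<hash>.0 is pointed at the PEM so the default certificate lookup path can find it. The Go sketch below only mirrors those two shell commands for illustration; it is not minikube's code, and the paths in main() are placeholders taken from the log.

	// install_ca.go - minimal sketch (not minikube's code) of the hash/symlink
	// steps shown in the log: compute the OpenSSL subject hash of a CA PEM and
	// expose it under /etc/ssl/certs/<hash>.0 so default trust lookups find it.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	func installCA(pemPath, certsDir string) error {
		// "openssl x509 -hash -noout -in <pem>" prints the subject-name hash
		// OpenSSL uses for c_rehash-style symlink names (e.g. b5213941).
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return fmt.Errorf("hashing %s: %w", pemPath, err)
		}
		hash := strings.TrimSpace(string(out))

		// Equivalent of: ln -fs <pem> /etc/ssl/certs/<hash>.0
		link := filepath.Join(certsDir, hash+".0")
		_ = os.Remove(link) // -f: replace any existing link
		return os.Symlink(pemPath, link)
	}

	func main() {
		// Placeholder paths; the log installs /usr/share/ca-certificates/*.pem.
		if err := installCA("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}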
	I0116 03:11:03.409580  491150 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0116 03:11:03.414012  491150 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0116 03:11:03.414343  491150 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0116 03:11:03.414458  491150 ssh_runner.go:195] Run: crio config
	I0116 03:11:03.483202  491150 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0116 03:11:03.483228  491150 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0116 03:11:03.483235  491150 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0116 03:11:03.483238  491150 command_runner.go:130] > #
	I0116 03:11:03.483245  491150 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0116 03:11:03.483252  491150 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0116 03:11:03.483260  491150 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0116 03:11:03.483269  491150 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0116 03:11:03.483272  491150 command_runner.go:130] > # reload'.
	I0116 03:11:03.483279  491150 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0116 03:11:03.483284  491150 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0116 03:11:03.483290  491150 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0116 03:11:03.483296  491150 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0116 03:11:03.483300  491150 command_runner.go:130] > [crio]
	I0116 03:11:03.483306  491150 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0116 03:11:03.483311  491150 command_runner.go:130] > # containers images, in this directory.
	I0116 03:11:03.483319  491150 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0116 03:11:03.483329  491150 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0116 03:11:03.483337  491150 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0116 03:11:03.483342  491150 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0116 03:11:03.483351  491150 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0116 03:11:03.483356  491150 command_runner.go:130] > storage_driver = "overlay"
	I0116 03:11:03.483369  491150 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0116 03:11:03.483382  491150 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0116 03:11:03.483392  491150 command_runner.go:130] > storage_option = [
	I0116 03:11:03.483400  491150 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0116 03:11:03.483407  491150 command_runner.go:130] > ]
	I0116 03:11:03.483416  491150 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0116 03:11:03.483428  491150 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0116 03:11:03.483435  491150 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0116 03:11:03.483445  491150 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0116 03:11:03.483453  491150 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0116 03:11:03.483464  491150 command_runner.go:130] > # always happen on a node reboot
	I0116 03:11:03.483472  491150 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0116 03:11:03.483483  491150 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0116 03:11:03.483493  491150 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0116 03:11:03.483505  491150 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0116 03:11:03.483512  491150 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0116 03:11:03.483520  491150 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0116 03:11:03.483529  491150 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0116 03:11:03.483535  491150 command_runner.go:130] > # internal_wipe = true
	I0116 03:11:03.483547  491150 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0116 03:11:03.483565  491150 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0116 03:11:03.483578  491150 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0116 03:11:03.483590  491150 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0116 03:11:03.483598  491150 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0116 03:11:03.483601  491150 command_runner.go:130] > [crio.api]
	I0116 03:11:03.483615  491150 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0116 03:11:03.483622  491150 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0116 03:11:03.483628  491150 command_runner.go:130] > # IP address on which the stream server will listen.
	I0116 03:11:03.483633  491150 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0116 03:11:03.483642  491150 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0116 03:11:03.483650  491150 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0116 03:11:03.483654  491150 command_runner.go:130] > # stream_port = "0"
	I0116 03:11:03.483662  491150 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0116 03:11:03.483666  491150 command_runner.go:130] > # stream_enable_tls = false
	I0116 03:11:03.483674  491150 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0116 03:11:03.483679  491150 command_runner.go:130] > # stream_idle_timeout = ""
	I0116 03:11:03.483686  491150 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0116 03:11:03.483694  491150 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0116 03:11:03.483701  491150 command_runner.go:130] > # minutes.
	I0116 03:11:03.483705  491150 command_runner.go:130] > # stream_tls_cert = ""
	I0116 03:11:03.483714  491150 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0116 03:11:03.483723  491150 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0116 03:11:03.483727  491150 command_runner.go:130] > # stream_tls_key = ""
	I0116 03:11:03.483734  491150 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0116 03:11:03.483749  491150 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0116 03:11:03.483772  491150 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0116 03:11:03.483803  491150 command_runner.go:130] > # stream_tls_ca = ""
	I0116 03:11:03.483815  491150 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0116 03:11:03.483823  491150 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0116 03:11:03.483842  491150 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0116 03:11:03.483853  491150 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0116 03:11:03.483875  491150 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0116 03:11:03.483888  491150 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0116 03:11:03.483898  491150 command_runner.go:130] > [crio.runtime]
	I0116 03:11:03.483911  491150 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0116 03:11:03.483924  491150 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0116 03:11:03.483936  491150 command_runner.go:130] > # "nofile=1024:2048"
	I0116 03:11:03.483949  491150 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0116 03:11:03.483960  491150 command_runner.go:130] > # default_ulimits = [
	I0116 03:11:03.483969  491150 command_runner.go:130] > # ]
	I0116 03:11:03.483981  491150 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0116 03:11:03.483988  491150 command_runner.go:130] > # no_pivot = false
	I0116 03:11:03.483994  491150 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0116 03:11:03.484004  491150 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0116 03:11:03.484017  491150 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0116 03:11:03.484030  491150 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0116 03:11:03.484062  491150 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0116 03:11:03.484073  491150 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0116 03:11:03.484081  491150 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0116 03:11:03.484090  491150 command_runner.go:130] > # Cgroup setting for conmon
	I0116 03:11:03.484102  491150 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0116 03:11:03.484109  491150 command_runner.go:130] > conmon_cgroup = "pod"
	I0116 03:11:03.484119  491150 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0116 03:11:03.484132  491150 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0116 03:11:03.484144  491150 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0116 03:11:03.484155  491150 command_runner.go:130] > conmon_env = [
	I0116 03:11:03.484165  491150 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0116 03:11:03.484173  491150 command_runner.go:130] > ]
	I0116 03:11:03.484183  491150 command_runner.go:130] > # Additional environment variables to set for all the
	I0116 03:11:03.484194  491150 command_runner.go:130] > # containers. These are overridden if set in the
	I0116 03:11:03.484202  491150 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0116 03:11:03.484209  491150 command_runner.go:130] > # default_env = [
	I0116 03:11:03.484216  491150 command_runner.go:130] > # ]
	I0116 03:11:03.484229  491150 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0116 03:11:03.484236  491150 command_runner.go:130] > # selinux = false
	I0116 03:11:03.484250  491150 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0116 03:11:03.484264  491150 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0116 03:11:03.484276  491150 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0116 03:11:03.484286  491150 command_runner.go:130] > # seccomp_profile = ""
	I0116 03:11:03.484297  491150 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0116 03:11:03.484307  491150 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0116 03:11:03.484321  491150 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0116 03:11:03.484347  491150 command_runner.go:130] > # which might increase security.
	I0116 03:11:03.484358  491150 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0116 03:11:03.484371  491150 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0116 03:11:03.484385  491150 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0116 03:11:03.484398  491150 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0116 03:11:03.484407  491150 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0116 03:11:03.484415  491150 command_runner.go:130] > # This option supports live configuration reload.
	I0116 03:11:03.484427  491150 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0116 03:11:03.484440  491150 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0116 03:11:03.484451  491150 command_runner.go:130] > # the cgroup blockio controller.
	I0116 03:11:03.484461  491150 command_runner.go:130] > # blockio_config_file = ""
	I0116 03:11:03.484472  491150 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0116 03:11:03.484483  491150 command_runner.go:130] > # irqbalance daemon.
	I0116 03:11:03.484490  491150 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0116 03:11:03.484500  491150 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0116 03:11:03.484512  491150 command_runner.go:130] > # This option supports live configuration reload.
	I0116 03:11:03.484523  491150 command_runner.go:130] > # rdt_config_file = ""
	I0116 03:11:03.484533  491150 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0116 03:11:03.484544  491150 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0116 03:11:03.484554  491150 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0116 03:11:03.484562  491150 command_runner.go:130] > # separate_pull_cgroup = ""
	I0116 03:11:03.484575  491150 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0116 03:11:03.484588  491150 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0116 03:11:03.484595  491150 command_runner.go:130] > # will be added.
	I0116 03:11:03.484600  491150 command_runner.go:130] > # default_capabilities = [
	I0116 03:11:03.484615  491150 command_runner.go:130] > # 	"CHOWN",
	I0116 03:11:03.484626  491150 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0116 03:11:03.484633  491150 command_runner.go:130] > # 	"FSETID",
	I0116 03:11:03.484643  491150 command_runner.go:130] > # 	"FOWNER",
	I0116 03:11:03.484653  491150 command_runner.go:130] > # 	"SETGID",
	I0116 03:11:03.484660  491150 command_runner.go:130] > # 	"SETUID",
	I0116 03:11:03.484670  491150 command_runner.go:130] > # 	"SETPCAP",
	I0116 03:11:03.484677  491150 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0116 03:11:03.484687  491150 command_runner.go:130] > # 	"KILL",
	I0116 03:11:03.484692  491150 command_runner.go:130] > # ]
	I0116 03:11:03.484702  491150 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0116 03:11:03.484713  491150 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0116 03:11:03.484724  491150 command_runner.go:130] > # default_sysctls = [
	I0116 03:11:03.484730  491150 command_runner.go:130] > # ]
	I0116 03:11:03.484742  491150 command_runner.go:130] > # List of devices on the host that a
	I0116 03:11:03.484753  491150 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0116 03:11:03.484763  491150 command_runner.go:130] > # allowed_devices = [
	I0116 03:11:03.484770  491150 command_runner.go:130] > # 	"/dev/fuse",
	I0116 03:11:03.484781  491150 command_runner.go:130] > # ]
	I0116 03:11:03.484789  491150 command_runner.go:130] > # List of additional devices. specified as
	I0116 03:11:03.484806  491150 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0116 03:11:03.484818  491150 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0116 03:11:03.484846  491150 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0116 03:11:03.484857  491150 command_runner.go:130] > # additional_devices = [
	I0116 03:11:03.484863  491150 command_runner.go:130] > # ]
	I0116 03:11:03.484874  491150 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0116 03:11:03.484882  491150 command_runner.go:130] > # cdi_spec_dirs = [
	I0116 03:11:03.484887  491150 command_runner.go:130] > # 	"/etc/cdi",
	I0116 03:11:03.484896  491150 command_runner.go:130] > # 	"/var/run/cdi",
	I0116 03:11:03.484903  491150 command_runner.go:130] > # ]
	I0116 03:11:03.484917  491150 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0116 03:11:03.484931  491150 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0116 03:11:03.484941  491150 command_runner.go:130] > # Defaults to false.
	I0116 03:11:03.484950  491150 command_runner.go:130] > # device_ownership_from_security_context = false
	I0116 03:11:03.484964  491150 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0116 03:11:03.484977  491150 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0116 03:11:03.484985  491150 command_runner.go:130] > # hooks_dir = [
	I0116 03:11:03.484990  491150 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0116 03:11:03.484995  491150 command_runner.go:130] > # ]
	I0116 03:11:03.485008  491150 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0116 03:11:03.485022  491150 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0116 03:11:03.485032  491150 command_runner.go:130] > # its default mounts from the following two files:
	I0116 03:11:03.485041  491150 command_runner.go:130] > #
	I0116 03:11:03.485051  491150 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0116 03:11:03.485065  491150 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0116 03:11:03.485078  491150 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0116 03:11:03.485086  491150 command_runner.go:130] > #
	I0116 03:11:03.485093  491150 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0116 03:11:03.485106  491150 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0116 03:11:03.485120  491150 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0116 03:11:03.485134  491150 command_runner.go:130] > #      only add mounts it finds in this file.
	I0116 03:11:03.485140  491150 command_runner.go:130] > #
	I0116 03:11:03.485151  491150 command_runner.go:130] > # default_mounts_file = ""
	I0116 03:11:03.485163  491150 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0116 03:11:03.485177  491150 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0116 03:11:03.485187  491150 command_runner.go:130] > pids_limit = 1024
	I0116 03:11:03.485197  491150 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0116 03:11:03.485206  491150 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0116 03:11:03.485215  491150 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0116 03:11:03.485231  491150 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0116 03:11:03.485242  491150 command_runner.go:130] > # log_size_max = -1
	I0116 03:11:03.485253  491150 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kuberentes log file
	I0116 03:11:03.485263  491150 command_runner.go:130] > # log_to_journald = false
	I0116 03:11:03.485276  491150 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0116 03:11:03.485287  491150 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0116 03:11:03.485295  491150 command_runner.go:130] > # Path to directory for container attach sockets.
	I0116 03:11:03.485307  491150 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0116 03:11:03.485319  491150 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0116 03:11:03.485326  491150 command_runner.go:130] > # bind_mount_prefix = ""
	I0116 03:11:03.485336  491150 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0116 03:11:03.485346  491150 command_runner.go:130] > # read_only = false
	I0116 03:11:03.485356  491150 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0116 03:11:03.485367  491150 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0116 03:11:03.485374  491150 command_runner.go:130] > # live configuration reload.
	I0116 03:11:03.485384  491150 command_runner.go:130] > # log_level = "info"
	I0116 03:11:03.485391  491150 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0116 03:11:03.485402  491150 command_runner.go:130] > # This option supports live configuration reload.
	I0116 03:11:03.485409  491150 command_runner.go:130] > # log_filter = ""
	I0116 03:11:03.485424  491150 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0116 03:11:03.485433  491150 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0116 03:11:03.485444  491150 command_runner.go:130] > # separated by comma.
	I0116 03:11:03.485451  491150 command_runner.go:130] > # uid_mappings = ""
	I0116 03:11:03.485461  491150 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0116 03:11:03.485473  491150 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0116 03:11:03.485484  491150 command_runner.go:130] > # separated by comma.
	I0116 03:11:03.485491  491150 command_runner.go:130] > # gid_mappings = ""
	I0116 03:11:03.485505  491150 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0116 03:11:03.485516  491150 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0116 03:11:03.485525  491150 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0116 03:11:03.485532  491150 command_runner.go:130] > # minimum_mappable_uid = -1
	I0116 03:11:03.485543  491150 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0116 03:11:03.485557  491150 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0116 03:11:03.485569  491150 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0116 03:11:03.485579  491150 command_runner.go:130] > # minimum_mappable_gid = -1
	I0116 03:11:03.485592  491150 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0116 03:11:03.485610  491150 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0116 03:11:03.485623  491150 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0116 03:11:03.485628  491150 command_runner.go:130] > # ctr_stop_timeout = 30
	I0116 03:11:03.485635  491150 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0116 03:11:03.485645  491150 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0116 03:11:03.485659  491150 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0116 03:11:03.485668  491150 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0116 03:11:03.485681  491150 command_runner.go:130] > drop_infra_ctr = false
	I0116 03:11:03.485695  491150 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0116 03:11:03.485707  491150 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0116 03:11:03.485722  491150 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0116 03:11:03.485729  491150 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0116 03:11:03.485737  491150 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0116 03:11:03.485749  491150 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0116 03:11:03.485758  491150 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0116 03:11:03.485773  491150 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0116 03:11:03.485784  491150 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0116 03:11:03.485797  491150 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0116 03:11:03.485810  491150 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0116 03:11:03.485819  491150 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0116 03:11:03.485829  491150 command_runner.go:130] > # default_runtime = "runc"
	I0116 03:11:03.485842  491150 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0116 03:11:03.485858  491150 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0116 03:11:03.485875  491150 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jepordize the health of the node, and whose
	I0116 03:11:03.485890  491150 command_runner.go:130] > # creation as a file is not desired either.
	I0116 03:11:03.485906  491150 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0116 03:11:03.485914  491150 command_runner.go:130] > # the hostname is being managed dynamically.
	I0116 03:11:03.485924  491150 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0116 03:11:03.485933  491150 command_runner.go:130] > # ]
	I0116 03:11:03.485945  491150 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0116 03:11:03.485960  491150 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0116 03:11:03.485973  491150 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0116 03:11:03.485987  491150 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0116 03:11:03.485996  491150 command_runner.go:130] > #
	I0116 03:11:03.486004  491150 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0116 03:11:03.486009  491150 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0116 03:11:03.486016  491150 command_runner.go:130] > #  runtime_type = "oci"
	I0116 03:11:03.486027  491150 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0116 03:11:03.486038  491150 command_runner.go:130] > #  privileged_without_host_devices = false
	I0116 03:11:03.486049  491150 command_runner.go:130] > #  allowed_annotations = []
	I0116 03:11:03.486058  491150 command_runner.go:130] > # Where:
	I0116 03:11:03.486070  491150 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0116 03:11:03.486083  491150 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0116 03:11:03.486093  491150 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0116 03:11:03.486106  491150 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0116 03:11:03.486116  491150 command_runner.go:130] > #   in $PATH.
	I0116 03:11:03.486130  491150 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0116 03:11:03.486142  491150 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0116 03:11:03.486155  491150 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0116 03:11:03.486164  491150 command_runner.go:130] > #   state.
	I0116 03:11:03.486177  491150 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0116 03:11:03.486186  491150 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0116 03:11:03.486199  491150 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0116 03:11:03.486213  491150 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0116 03:11:03.486227  491150 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0116 03:11:03.486242  491150 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0116 03:11:03.486253  491150 command_runner.go:130] > #   The currently recognized values are:
	I0116 03:11:03.486267  491150 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0116 03:11:03.486279  491150 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0116 03:11:03.486292  491150 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0116 03:11:03.486307  491150 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0116 03:11:03.486323  491150 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0116 03:11:03.486338  491150 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0116 03:11:03.486351  491150 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0116 03:11:03.486364  491150 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0116 03:11:03.486371  491150 command_runner.go:130] > #   should be moved to the container's cgroup
	I0116 03:11:03.486384  491150 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0116 03:11:03.486396  491150 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0116 03:11:03.486407  491150 command_runner.go:130] > runtime_type = "oci"
	I0116 03:11:03.486418  491150 command_runner.go:130] > runtime_root = "/run/runc"
	I0116 03:11:03.486428  491150 command_runner.go:130] > runtime_config_path = ""
	I0116 03:11:03.486438  491150 command_runner.go:130] > monitor_path = ""
	I0116 03:11:03.486448  491150 command_runner.go:130] > monitor_cgroup = ""
	I0116 03:11:03.486458  491150 command_runner.go:130] > monitor_exec_cgroup = ""
	I0116 03:11:03.486468  491150 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0116 03:11:03.486477  491150 command_runner.go:130] > # running containers
	I0116 03:11:03.486489  491150 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0116 03:11:03.486503  491150 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0116 03:11:03.486580  491150 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0116 03:11:03.486598  491150 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I0116 03:11:03.486612  491150 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0116 03:11:03.486621  491150 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0116 03:11:03.486635  491150 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0116 03:11:03.486643  491150 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0116 03:11:03.486651  491150 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0116 03:11:03.486662  491150 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I0116 03:11:03.486675  491150 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0116 03:11:03.486687  491150 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0116 03:11:03.486701  491150 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0116 03:11:03.486721  491150 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0116 03:11:03.486734  491150 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0116 03:11:03.486744  491150 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0116 03:11:03.486760  491150 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0116 03:11:03.486771  491150 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0116 03:11:03.486783  491150 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0116 03:11:03.486798  491150 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0116 03:11:03.486811  491150 command_runner.go:130] > # Example:
	I0116 03:11:03.486823  491150 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0116 03:11:03.486834  491150 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0116 03:11:03.486846  491150 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0116 03:11:03.486858  491150 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0116 03:11:03.486865  491150 command_runner.go:130] > # cpuset = 0
	I0116 03:11:03.486871  491150 command_runner.go:130] > # cpushares = "0-1"
	I0116 03:11:03.486880  491150 command_runner.go:130] > # Where:
	I0116 03:11:03.486892  491150 command_runner.go:130] > # The workload name is workload-type.
	I0116 03:11:03.486904  491150 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0116 03:11:03.486917  491150 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0116 03:11:03.486929  491150 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0116 03:11:03.486947  491150 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0116 03:11:03.486957  491150 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0116 03:11:03.486963  491150 command_runner.go:130] > # 
	I0116 03:11:03.486971  491150 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0116 03:11:03.486981  491150 command_runner.go:130] > #
	I0116 03:11:03.486994  491150 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0116 03:11:03.487009  491150 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0116 03:11:03.487022  491150 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0116 03:11:03.487036  491150 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0116 03:11:03.487049  491150 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0116 03:11:03.487058  491150 command_runner.go:130] > [crio.image]
	I0116 03:11:03.487067  491150 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0116 03:11:03.487077  491150 command_runner.go:130] > # default_transport = "docker://"
	I0116 03:11:03.487091  491150 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0116 03:11:03.487106  491150 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0116 03:11:03.487116  491150 command_runner.go:130] > # global_auth_file = ""
	I0116 03:11:03.487128  491150 command_runner.go:130] > # The image used to instantiate infra containers.
	I0116 03:11:03.487139  491150 command_runner.go:130] > # This option supports live configuration reload.
	I0116 03:11:03.487150  491150 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0116 03:11:03.487160  491150 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0116 03:11:03.487172  491150 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0116 03:11:03.487184  491150 command_runner.go:130] > # This option supports live configuration reload.
	I0116 03:11:03.487195  491150 command_runner.go:130] > # pause_image_auth_file = ""
	I0116 03:11:03.487208  491150 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0116 03:11:03.487222  491150 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0116 03:11:03.487235  491150 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0116 03:11:03.487247  491150 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0116 03:11:03.487255  491150 command_runner.go:130] > # pause_command = "/pause"
	I0116 03:11:03.487265  491150 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0116 03:11:03.487279  491150 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0116 03:11:03.487296  491150 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0116 03:11:03.487310  491150 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0116 03:11:03.487321  491150 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0116 03:11:03.487332  491150 command_runner.go:130] > # signature_policy = ""
	I0116 03:11:03.487344  491150 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0116 03:11:03.487353  491150 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0116 03:11:03.487363  491150 command_runner.go:130] > # changing them here.
	I0116 03:11:03.487374  491150 command_runner.go:130] > # insecure_registries = [
	I0116 03:11:03.487383  491150 command_runner.go:130] > # ]
	I0116 03:11:03.487400  491150 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0116 03:11:03.487412  491150 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0116 03:11:03.487422  491150 command_runner.go:130] > # image_volumes = "mkdir"
	I0116 03:11:03.487433  491150 command_runner.go:130] > # Temporary directory to use for storing big files
	I0116 03:11:03.487440  491150 command_runner.go:130] > # big_files_temporary_dir = ""
	I0116 03:11:03.487450  491150 command_runner.go:130] > # The crio.network table containers settings pertaining to the management of
	I0116 03:11:03.487461  491150 command_runner.go:130] > # CNI plugins.
	I0116 03:11:03.487468  491150 command_runner.go:130] > [crio.network]
	I0116 03:11:03.487481  491150 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0116 03:11:03.487494  491150 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0116 03:11:03.487504  491150 command_runner.go:130] > # cni_default_network = ""
	I0116 03:11:03.487517  491150 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0116 03:11:03.487527  491150 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0116 03:11:03.487533  491150 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0116 03:11:03.487537  491150 command_runner.go:130] > # plugin_dirs = [
	I0116 03:11:03.487544  491150 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0116 03:11:03.487554  491150 command_runner.go:130] > # ]
	I0116 03:11:03.487564  491150 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0116 03:11:03.487574  491150 command_runner.go:130] > [crio.metrics]
	I0116 03:11:03.487584  491150 command_runner.go:130] > # Globally enable or disable metrics support.
	I0116 03:11:03.487593  491150 command_runner.go:130] > enable_metrics = true
	I0116 03:11:03.487610  491150 command_runner.go:130] > # Specify enabled metrics collectors.
	I0116 03:11:03.487621  491150 command_runner.go:130] > # Per default all metrics are enabled.
	I0116 03:11:03.487630  491150 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0116 03:11:03.487643  491150 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0116 03:11:03.487656  491150 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0116 03:11:03.487667  491150 command_runner.go:130] > # metrics_collectors = [
	I0116 03:11:03.487677  491150 command_runner.go:130] > # 	"operations",
	I0116 03:11:03.487689  491150 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0116 03:11:03.487700  491150 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0116 03:11:03.487710  491150 command_runner.go:130] > # 	"operations_errors",
	I0116 03:11:03.487720  491150 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0116 03:11:03.487727  491150 command_runner.go:130] > # 	"image_pulls_by_name",
	I0116 03:11:03.487734  491150 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0116 03:11:03.487744  491150 command_runner.go:130] > # 	"image_pulls_failures",
	I0116 03:11:03.487752  491150 command_runner.go:130] > # 	"image_pulls_successes",
	I0116 03:11:03.487762  491150 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0116 03:11:03.487770  491150 command_runner.go:130] > # 	"image_layer_reuse",
	I0116 03:11:03.487781  491150 command_runner.go:130] > # 	"containers_oom_total",
	I0116 03:11:03.487788  491150 command_runner.go:130] > # 	"containers_oom",
	I0116 03:11:03.487798  491150 command_runner.go:130] > # 	"processes_defunct",
	I0116 03:11:03.487805  491150 command_runner.go:130] > # 	"operations_total",
	I0116 03:11:03.487813  491150 command_runner.go:130] > # 	"operations_latency_seconds",
	I0116 03:11:03.487819  491150 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0116 03:11:03.487830  491150 command_runner.go:130] > # 	"operations_errors_total",
	I0116 03:11:03.487838  491150 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0116 03:11:03.487849  491150 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0116 03:11:03.487857  491150 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0116 03:11:03.487867  491150 command_runner.go:130] > # 	"image_pulls_success_total",
	I0116 03:11:03.487875  491150 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0116 03:11:03.487885  491150 command_runner.go:130] > # 	"containers_oom_count_total",
	I0116 03:11:03.487891  491150 command_runner.go:130] > # ]
	I0116 03:11:03.487900  491150 command_runner.go:130] > # The port on which the metrics server will listen.
	I0116 03:11:03.487905  491150 command_runner.go:130] > # metrics_port = 9090
	I0116 03:11:03.487917  491150 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0116 03:11:03.487925  491150 command_runner.go:130] > # metrics_socket = ""
	I0116 03:11:03.487936  491150 command_runner.go:130] > # The certificate for the secure metrics server.
	I0116 03:11:03.487951  491150 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0116 03:11:03.487965  491150 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0116 03:11:03.487976  491150 command_runner.go:130] > # certificate on any modification event.
	I0116 03:11:03.487983  491150 command_runner.go:130] > # metrics_cert = ""
	I0116 03:11:03.487989  491150 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0116 03:11:03.488001  491150 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0116 03:11:03.488011  491150 command_runner.go:130] > # metrics_key = ""
	I0116 03:11:03.488021  491150 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0116 03:11:03.488031  491150 command_runner.go:130] > [crio.tracing]
	I0116 03:11:03.488051  491150 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0116 03:11:03.488058  491150 command_runner.go:130] > # enable_tracing = false
	I0116 03:11:03.488071  491150 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0116 03:11:03.488079  491150 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0116 03:11:03.488093  491150 command_runner.go:130] > # Number of samples to collect per million spans.
	I0116 03:11:03.488104  491150 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0116 03:11:03.488115  491150 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0116 03:11:03.488125  491150 command_runner.go:130] > [crio.stats]
	I0116 03:11:03.488138  491150 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0116 03:11:03.488150  491150 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0116 03:11:03.488161  491150 command_runner.go:130] > # stats_collection_period = 0
	I0116 03:11:03.488204  491150 command_runner.go:130] ! time="2024-01-16 03:11:03.473591259Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I0116 03:11:03.488226  491150 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
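Note: the "crio config" dump above is how minikube reads back the runtime's effective settings; values such as cgroup_manager = "cgroupfs" and pause_image = "registry.k8s.io/pause:3.9" reappear a few lines below in the kubeadm options (CgroupDriver:cgroupfs). As a rough illustration only, and not minikube's implementation, a standard-library Go sketch that pulls one of those keys out of the TOML text:

	// parse_crio.go - illustrative only: extract cgroup_manager from the
	// output of "crio config" with a regexp, mirroring how the dumped value
	// ("cgroupfs") ends up as CgroupDriver in the kubeadm options below.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"regexp"
	)

	var cgroupMgrRe = regexp.MustCompile(`(?m)^\s*cgroup_manager\s*=\s*"([^"]+)"`)

	func main() {
		out, err := exec.Command("crio", "config").Output()
		if err != nil {
			fmt.Fprintln(os.Stderr, "crio config:", err)
			os.Exit(1)
		}
		if m := cgroupMgrRe.FindSubmatch(out); m != nil {
			fmt.Println("cgroup manager:", string(m[1])) // expected here: cgroupfs
		} else {
			fmt.Println("cgroup_manager not set; CRI-O default applies")
		}
	}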
	I0116 03:11:03.488312  491150 cni.go:84] Creating CNI manager for ""
	I0116 03:11:03.488327  491150 cni.go:136] 3 nodes found, recommending kindnet
	I0116 03:11:03.488341  491150 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0116 03:11:03.488371  491150 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.182 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-405494 NodeName:multinode-405494-m03 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.70"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.182 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0116 03:11:03.488526  491150 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.182
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-405494-m03"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.182
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.70"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
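Note: the kubeadm config above (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration) is rendered by minikube from the options struct logged at kubeadm.go:176; the nodeRegistration values (name, node-ip, criSocket) come straight from that struct. The Go sketch below only illustrates the templating idea for the nodeRegistration stanza, using text/template and values copied from the log; it is not minikube's actual template.

	// render_kubeadm.go - sketch of templating the InitConfiguration
	// nodeRegistration block from fields visible in the logged options
	// (NodeName, NodeIP, CRISocket); not minikube's real template.
	package main

	import (
		"os"
		"text/template"
	)

	type nodeOpts struct {
		NodeName  string
		NodeIP    string
		CRISocket string
	}

	const tmpl = `nodeRegistration:
	  criSocket: unix://{{.CRISocket}}
	  name: "{{.NodeName}}"
	  kubeletExtraArgs:
	    node-ip: {{.NodeIP}}
	  taints: []
	`

	func main() {
		opts := nodeOpts{
			NodeName:  "multinode-405494-m03",    // from the log above
			NodeIP:    "192.168.39.182",          // from the log above
			CRISocket: "/var/run/crio/crio.sock", // from the log above
		}
		t := template.Must(template.New("nodeRegistration").Parse(tmpl))
		if err := t.Execute(os.Stdout, opts); err != nil {
			os.Exit(1)
		}
	}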
	
	I0116 03:11:03.488604  491150 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-405494-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.182
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-405494 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0116 03:11:03.488676  491150 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0116 03:11:03.499331  491150 command_runner.go:130] > kubeadm
	I0116 03:11:03.499354  491150 command_runner.go:130] > kubectl
	I0116 03:11:03.499360  491150 command_runner.go:130] > kubelet
	I0116 03:11:03.499383  491150 binaries.go:44] Found k8s binaries, skipping transfer
	I0116 03:11:03.499450  491150 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0116 03:11:03.509013  491150 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0116 03:11:03.525846  491150 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0116 03:11:03.545202  491150 ssh_runner.go:195] Run: grep 192.168.39.70	control-plane.minikube.internal$ /etc/hosts
	I0116 03:11:03.549196  491150 command_runner.go:130] > 192.168.39.70	control-plane.minikube.internal
	I0116 03:11:03.549292  491150 host.go:66] Checking if "multinode-405494" exists ...
	I0116 03:11:03.549573  491150 config.go:182] Loaded profile config "multinode-405494": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 03:11:03.549642  491150 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:11:03.549699  491150 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:11:03.565111  491150 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40881
	I0116 03:11:03.565595  491150 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:11:03.566097  491150 main.go:141] libmachine: Using API Version  1
	I0116 03:11:03.566122  491150 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:11:03.566443  491150 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:11:03.566670  491150 main.go:141] libmachine: (multinode-405494) Calling .DriverName
	I0116 03:11:03.566818  491150 start.go:304] JoinCluster: &{Name:multinode-405494 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.28.4 ClusterName:multinode-405494 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.70 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.32 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.182 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingre
ss-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOp
timizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 03:11:03.566949  491150 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0116 03:11:03.566974  491150 main.go:141] libmachine: (multinode-405494) Calling .GetSSHHostname
	I0116 03:11:03.569775  491150 main.go:141] libmachine: (multinode-405494) DBG | domain multinode-405494 has defined MAC address 52:54:00:b0:49:7b in network mk-multinode-405494
	I0116 03:11:03.570199  491150 main.go:141] libmachine: (multinode-405494) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:49:7b", ip: ""} in network mk-multinode-405494: {Iface:virbr1 ExpiryTime:2024-01-16 04:06:53 +0000 UTC Type:0 Mac:52:54:00:b0:49:7b Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:multinode-405494 Clientid:01:52:54:00:b0:49:7b}
	I0116 03:11:03.570235  491150 main.go:141] libmachine: (multinode-405494) DBG | domain multinode-405494 has defined IP address 192.168.39.70 and MAC address 52:54:00:b0:49:7b in network mk-multinode-405494
	I0116 03:11:03.570385  491150 main.go:141] libmachine: (multinode-405494) Calling .GetSSHPort
	I0116 03:11:03.570563  491150 main.go:141] libmachine: (multinode-405494) Calling .GetSSHKeyPath
	I0116 03:11:03.570720  491150 main.go:141] libmachine: (multinode-405494) Calling .GetSSHUsername
	I0116 03:11:03.570839  491150 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/multinode-405494/id_rsa Username:docker}
	I0116 03:11:03.753862  491150 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 4qhq9n.060ob8bj879imwy0 --discovery-token-ca-cert-hash sha256:ab75761e1119a1709a216b682cd7b1dcce0641115df5f236b54b16e4f66aa044 
	I0116 03:11:03.753941  491150 start.go:317] removing existing worker node "m03" before attempting to rejoin cluster: &{Name:m03 IP:192.168.39.182 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}
	I0116 03:11:03.753989  491150 host.go:66] Checking if "multinode-405494" exists ...
	I0116 03:11:03.754423  491150 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:11:03.754483  491150 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:11:03.773115  491150 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41447
	I0116 03:11:03.773620  491150 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:11:03.774068  491150 main.go:141] libmachine: Using API Version  1
	I0116 03:11:03.774090  491150 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:11:03.774577  491150 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:11:03.774832  491150 main.go:141] libmachine: (multinode-405494) Calling .DriverName
	I0116 03:11:03.775054  491150 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl drain multinode-405494-m03 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data
	I0116 03:11:03.775086  491150 main.go:141] libmachine: (multinode-405494) Calling .GetSSHHostname
	I0116 03:11:03.778882  491150 main.go:141] libmachine: (multinode-405494) DBG | domain multinode-405494 has defined MAC address 52:54:00:b0:49:7b in network mk-multinode-405494
	I0116 03:11:03.779229  491150 main.go:141] libmachine: (multinode-405494) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:49:7b", ip: ""} in network mk-multinode-405494: {Iface:virbr1 ExpiryTime:2024-01-16 04:06:53 +0000 UTC Type:0 Mac:52:54:00:b0:49:7b Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:multinode-405494 Clientid:01:52:54:00:b0:49:7b}
	I0116 03:11:03.779261  491150 main.go:141] libmachine: (multinode-405494) DBG | domain multinode-405494 has defined IP address 192.168.39.70 and MAC address 52:54:00:b0:49:7b in network mk-multinode-405494
	I0116 03:11:03.779384  491150 main.go:141] libmachine: (multinode-405494) Calling .GetSSHPort
	I0116 03:11:03.779595  491150 main.go:141] libmachine: (multinode-405494) Calling .GetSSHKeyPath
	I0116 03:11:03.779768  491150 main.go:141] libmachine: (multinode-405494) Calling .GetSSHUsername
	I0116 03:11:03.779923  491150 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/multinode-405494/id_rsa Username:docker}
	I0116 03:11:03.940255  491150 command_runner.go:130] ! Flag --delete-local-data has been deprecated, This option is deprecated and will be deleted. Use --delete-emptydir-data.
	I0116 03:11:03.998570  491150 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-6zhtt, kube-system/kube-proxy-ghscp
	I0116 03:11:06.019949  491150 command_runner.go:130] > node/multinode-405494-m03 cordoned
	I0116 03:11:06.019985  491150 command_runner.go:130] > pod "busybox-5bc68d56bd-ltn29" has DeletionTimestamp older than 1 seconds, skipping
	I0116 03:11:06.019992  491150 command_runner.go:130] > node/multinode-405494-m03 drained
	I0116 03:11:06.020020  491150 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl drain multinode-405494-m03 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data: (2.244938635s)
	I0116 03:11:06.020055  491150 node.go:108] successfully drained node "m03"
	I0116 03:11:06.020542  491150 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17965-468241/kubeconfig
	I0116 03:11:06.020896  491150 kapi.go:59] client config for multinode-405494: &rest.Config{Host:"https://192.168.39.70:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17965-468241/.minikube/profiles/multinode-405494/client.crt", KeyFile:"/home/jenkins/minikube-integration/17965-468241/.minikube/profiles/multinode-405494/client.key", CAFile:"/home/jenkins/minikube-integration/17965-468241/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), N
extProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c19be0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0116 03:11:06.021328  491150 request.go:1212] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I0116 03:11:06.021386  491150 round_trippers.go:463] DELETE https://192.168.39.70:8443/api/v1/nodes/multinode-405494-m03
	I0116 03:11:06.021395  491150 round_trippers.go:469] Request Headers:
	I0116 03:11:06.021403  491150 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:11:06.021409  491150 round_trippers.go:473]     Content-Type: application/json
	I0116 03:11:06.021417  491150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 03:11:06.034662  491150 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0116 03:11:06.034689  491150 round_trippers.go:577] Response Headers:
	I0116 03:11:06.034700  491150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 03:11:06.034709  491150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 03:11:06.034720  491150 round_trippers.go:580]     Content-Length: 171
	I0116 03:11:06.034730  491150 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:11:06 GMT
	I0116 03:11:06.034739  491150 round_trippers.go:580]     Audit-Id: f8bbc837-ddaa-4519-9c47-ddc33ce25087
	I0116 03:11:06.034747  491150 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:11:06.034761  491150 round_trippers.go:580]     Content-Type: application/json
	I0116 03:11:06.034895  491150 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-405494-m03","kind":"nodes","uid":"f017bb37-2198-45f8-8920-a0a10585c3e0"}}
	I0116 03:11:06.034952  491150 node.go:124] successfully deleted node "m03"
	I0116 03:11:06.034968  491150 start.go:321] successfully removed existing worker node "m03" from cluster: &{Name:m03 IP:192.168.39.182 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}
	I0116 03:11:06.034999  491150 start.go:325] trying to join worker node "m03" to cluster: &{Name:m03 IP:192.168.39.182 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}
	I0116 03:11:06.035025  491150 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 4qhq9n.060ob8bj879imwy0 --discovery-token-ca-cert-hash sha256:ab75761e1119a1709a216b682cd7b1dcce0641115df5f236b54b16e4f66aa044 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-405494-m03"
	I0116 03:11:06.104244  491150 command_runner.go:130] ! W0116 03:11:06.098609    2388 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I0116 03:11:06.104317  491150 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I0116 03:11:06.280978  491150 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I0116 03:11:06.281026  491150 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I0116 03:11:07.091567  491150 command_runner.go:130] > [preflight] Running pre-flight checks
	I0116 03:11:07.091604  491150 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0116 03:11:07.091619  491150 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0116 03:11:07.091631  491150 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0116 03:11:07.091642  491150 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0116 03:11:07.091650  491150 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0116 03:11:07.091661  491150 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0116 03:11:07.091677  491150 command_runner.go:130] > This node has joined the cluster:
	I0116 03:11:07.091687  491150 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0116 03:11:07.091701  491150 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0116 03:11:07.091714  491150 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0116 03:11:07.091743  491150 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 4qhq9n.060ob8bj879imwy0 --discovery-token-ca-cert-hash sha256:ab75761e1119a1709a216b682cd7b1dcce0641115df5f236b54b16e4f66aa044 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-405494-m03": (1.056699579s)
	I0116 03:11:07.091810  491150 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0116 03:11:07.379278  491150 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=6e8fa5f64d0e7272be43ff25ed3826261f0a2578 minikube.k8s.io/name=multinode-405494 minikube.k8s.io/updated_at=2024_01_16T03_11_07_0700 minikube.k8s.io/primary=false "-l minikube.k8s.io/primary!=true" --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:11:07.500127  491150 command_runner.go:130] > node/multinode-405494-m02 labeled
	I0116 03:11:07.500172  491150 command_runner.go:130] > node/multinode-405494-m03 labeled
	I0116 03:11:07.500199  491150 start.go:306] JoinCluster complete in 3.933381523s
	I0116 03:11:07.500216  491150 cni.go:84] Creating CNI manager for ""
	I0116 03:11:07.500224  491150 cni.go:136] 3 nodes found, recommending kindnet
	I0116 03:11:07.500287  491150 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0116 03:11:07.508787  491150 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0116 03:11:07.508827  491150 command_runner.go:130] >   Size: 2685752   	Blocks: 5248       IO Block: 4096   regular file
	I0116 03:11:07.508843  491150 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I0116 03:11:07.508853  491150 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0116 03:11:07.508861  491150 command_runner.go:130] > Access: 2024-01-16 03:06:53.963563216 +0000
	I0116 03:11:07.508869  491150 command_runner.go:130] > Modify: 2023-12-28 22:53:36.000000000 +0000
	I0116 03:11:07.508878  491150 command_runner.go:130] > Change: 2024-01-16 03:06:52.021563216 +0000
	I0116 03:11:07.508894  491150 command_runner.go:130] >  Birth: -
	I0116 03:11:07.508990  491150 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0116 03:11:07.509011  491150 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0116 03:11:07.528944  491150 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0116 03:11:07.885568  491150 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0116 03:11:07.890684  491150 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0116 03:11:07.893254  491150 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0116 03:11:07.904251  491150 command_runner.go:130] > daemonset.apps/kindnet configured
	I0116 03:11:07.907625  491150 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17965-468241/kubeconfig
	I0116 03:11:07.907878  491150 kapi.go:59] client config for multinode-405494: &rest.Config{Host:"https://192.168.39.70:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17965-468241/.minikube/profiles/multinode-405494/client.crt", KeyFile:"/home/jenkins/minikube-integration/17965-468241/.minikube/profiles/multinode-405494/client.key", CAFile:"/home/jenkins/minikube-integration/17965-468241/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), N
extProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c19be0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0116 03:11:07.908282  491150 round_trippers.go:463] GET https://192.168.39.70:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0116 03:11:07.908301  491150 round_trippers.go:469] Request Headers:
	I0116 03:11:07.908312  491150 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:11:07.908320  491150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 03:11:07.911096  491150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:11:07.911119  491150 round_trippers.go:577] Response Headers:
	I0116 03:11:07.911128  491150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 03:11:07.911139  491150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 03:11:07.911147  491150 round_trippers.go:580]     Content-Length: 291
	I0116 03:11:07.911155  491150 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:11:07 GMT
	I0116 03:11:07.911165  491150 round_trippers.go:580]     Audit-Id: dc5c62b3-fc54-432d-9d52-444b03ea92fc
	I0116 03:11:07.911174  491150 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:11:07.911186  491150 round_trippers.go:580]     Content-Type: application/json
	I0116 03:11:07.911214  491150 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"dd77c785-c90f-4789-97cb-f593b7a7a7e2","resourceVersion":"896","creationTimestamp":"2024-01-16T02:57:11Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0116 03:11:07.911328  491150 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-405494" context rescaled to 1 replicas
	I0116 03:11:07.911364  491150 start.go:223] Will wait 6m0s for node &{Name:m03 IP:192.168.39.182 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}
	I0116 03:11:07.913683  491150 out.go:177] * Verifying Kubernetes components...
	I0116 03:11:07.915302  491150 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 03:11:07.930767  491150 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17965-468241/kubeconfig
	I0116 03:11:07.931114  491150 kapi.go:59] client config for multinode-405494: &rest.Config{Host:"https://192.168.39.70:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17965-468241/.minikube/profiles/multinode-405494/client.crt", KeyFile:"/home/jenkins/minikube-integration/17965-468241/.minikube/profiles/multinode-405494/client.key", CAFile:"/home/jenkins/minikube-integration/17965-468241/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), N
extProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c19be0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0116 03:11:07.931390  491150 node_ready.go:35] waiting up to 6m0s for node "multinode-405494-m03" to be "Ready" ...
	I0116 03:11:07.931477  491150 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/multinode-405494-m03
	I0116 03:11:07.931486  491150 round_trippers.go:469] Request Headers:
	I0116 03:11:07.931494  491150 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:11:07.931503  491150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 03:11:07.934475  491150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:11:07.934504  491150 round_trippers.go:577] Response Headers:
	I0116 03:11:07.934513  491150 round_trippers.go:580]     Audit-Id: fab2a849-c0e1-4bae-9452-b4c36a2db68d
	I0116 03:11:07.934521  491150 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:11:07.934528  491150 round_trippers.go:580]     Content-Type: application/json
	I0116 03:11:07.934536  491150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 03:11:07.934547  491150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 03:11:07.934560  491150 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:11:07 GMT
	I0116 03:11:07.934769  491150 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-405494-m03","uid":"1f238593-3a8c-44ce-8bab-97495d17a848","resourceVersion":"1227","creationTimestamp":"2024-01-16T03:11:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-405494-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-405494","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T03_11_07_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:11:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta
data":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec": [truncated 4223 chars]
	I0116 03:11:07.935044  491150 node_ready.go:49] node "multinode-405494-m03" has status "Ready":"True"
	I0116 03:11:07.935060  491150 node_ready.go:38] duration metric: took 3.653197ms waiting for node "multinode-405494-m03" to be "Ready" ...
	I0116 03:11:07.935068  491150 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 03:11:07.935135  491150 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods
	I0116 03:11:07.935142  491150 round_trippers.go:469] Request Headers:
	I0116 03:11:07.935149  491150 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:11:07.935157  491150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 03:11:07.941498  491150 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0116 03:11:07.941522  491150 round_trippers.go:577] Response Headers:
	I0116 03:11:07.941530  491150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 03:11:07.941540  491150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 03:11:07.941545  491150 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:11:07 GMT
	I0116 03:11:07.941551  491150 round_trippers.go:580]     Audit-Id: 0a804abf-2a82-4553-8769-07245f9a1bed
	I0116 03:11:07.941557  491150 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:11:07.941584  491150 round_trippers.go:580]     Content-Type: application/json
	I0116 03:11:07.942412  491150 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1234"},"items":[{"metadata":{"name":"coredns-5dd5756b68-vwqvk","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"096151e2-c59c-4dcf-bd29-2029901902c9","resourceVersion":"892","creationTimestamp":"2024-01-16T02:57:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c3967b3a-de7c-4ccd-af3a-dd2a9e8b71f8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:57:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c3967b3a-de7c-4ccd-af3a-dd2a9e8b71f8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"
f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers": [truncated 82039 chars]
	I0116 03:11:07.944984  491150 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-vwqvk" in "kube-system" namespace to be "Ready" ...
	I0116 03:11:07.945078  491150 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-vwqvk
	I0116 03:11:07.945089  491150 round_trippers.go:469] Request Headers:
	I0116 03:11:07.945096  491150 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:11:07.945105  491150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 03:11:07.948070  491150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:11:07.948090  491150 round_trippers.go:577] Response Headers:
	I0116 03:11:07.948097  491150 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:11:07.948103  491150 round_trippers.go:580]     Content-Type: application/json
	I0116 03:11:07.948108  491150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 03:11:07.948114  491150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 03:11:07.948120  491150 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:11:07 GMT
	I0116 03:11:07.948127  491150 round_trippers.go:580]     Audit-Id: d2747912-aed4-4e93-854f-fb864aae878d
	I0116 03:11:07.948458  491150 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-vwqvk","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"096151e2-c59c-4dcf-bd29-2029901902c9","resourceVersion":"892","creationTimestamp":"2024-01-16T02:57:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c3967b3a-de7c-4ccd-af3a-dd2a9e8b71f8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:57:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c3967b3a-de7c-4ccd-af3a-dd2a9e8b71f8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6264 chars]
	I0116 03:11:07.948961  491150 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/multinode-405494
	I0116 03:11:07.948979  491150 round_trippers.go:469] Request Headers:
	I0116 03:11:07.948990  491150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 03:11:07.949003  491150 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:11:07.951399  491150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:11:07.951415  491150 round_trippers.go:577] Response Headers:
	I0116 03:11:07.951421  491150 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:11:07.951426  491150 round_trippers.go:580]     Content-Type: application/json
	I0116 03:11:07.951431  491150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 03:11:07.951442  491150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 03:11:07.951454  491150 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:11:07 GMT
	I0116 03:11:07.951467  491150 round_trippers.go:580]     Audit-Id: f1f06aeb-2708-4acb-8be6-9404ac5539c1
	I0116 03:11:07.951622  491150 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-405494","uid":"396eb6bf-1dc3-46c5-8016-dbf8af754fa2","resourceVersion":"915","creationTimestamp":"2024-01-16T02:57:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-405494","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-405494","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_57_12_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:57:07Z","fieldsType":"FieldsV1","fiel [truncated 6212 chars]
	I0116 03:11:07.951954  491150 pod_ready.go:92] pod "coredns-5dd5756b68-vwqvk" in "kube-system" namespace has status "Ready":"True"
	I0116 03:11:07.951970  491150 pod_ready.go:81] duration metric: took 6.964018ms waiting for pod "coredns-5dd5756b68-vwqvk" in "kube-system" namespace to be "Ready" ...
	I0116 03:11:07.951980  491150 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-405494" in "kube-system" namespace to be "Ready" ...
	I0116 03:11:07.952048  491150 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-405494
	I0116 03:11:07.952056  491150 round_trippers.go:469] Request Headers:
	I0116 03:11:07.952067  491150 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:11:07.952084  491150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 03:11:07.954548  491150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:11:07.954565  491150 round_trippers.go:577] Response Headers:
	I0116 03:11:07.954575  491150 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:11:07 GMT
	I0116 03:11:07.954583  491150 round_trippers.go:580]     Audit-Id: ce43956c-5740-476f-bd29-9389c82fb037
	I0116 03:11:07.954589  491150 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:11:07.954596  491150 round_trippers.go:580]     Content-Type: application/json
	I0116 03:11:07.954604  491150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 03:11:07.954612  491150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 03:11:07.954760  491150 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-405494","namespace":"kube-system","uid":"3f839da7-c0c0-4546-8848-1557cbf50722","resourceVersion":"866","creationTimestamp":"2024-01-16T02:57:12Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.70:2379","kubernetes.io/config.hash":"3a9e67d0e87fe64d9531234ab850034d","kubernetes.io/config.mirror":"3a9e67d0e87fe64d9531234ab850034d","kubernetes.io/config.seen":"2024-01-16T02:57:11.711592151Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-405494","uid":"396eb6bf-1dc3-46c5-8016-dbf8af754fa2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:57:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 5843 chars]
	I0116 03:11:07.955084  491150 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/multinode-405494
	I0116 03:11:07.955094  491150 round_trippers.go:469] Request Headers:
	I0116 03:11:07.955101  491150 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:11:07.955107  491150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 03:11:07.959186  491150 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0116 03:11:07.959205  491150 round_trippers.go:577] Response Headers:
	I0116 03:11:07.959212  491150 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:11:07 GMT
	I0116 03:11:07.959218  491150 round_trippers.go:580]     Audit-Id: 9c8c485e-1227-423f-88c6-78d9e22b007f
	I0116 03:11:07.959223  491150 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:11:07.959228  491150 round_trippers.go:580]     Content-Type: application/json
	I0116 03:11:07.959233  491150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 03:11:07.959242  491150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 03:11:07.959819  491150 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-405494","uid":"396eb6bf-1dc3-46c5-8016-dbf8af754fa2","resourceVersion":"915","creationTimestamp":"2024-01-16T02:57:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-405494","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-405494","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_57_12_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:57:07Z","fieldsType":"FieldsV1","fiel [truncated 6212 chars]
	I0116 03:11:07.960189  491150 pod_ready.go:92] pod "etcd-multinode-405494" in "kube-system" namespace has status "Ready":"True"
	I0116 03:11:07.960210  491150 pod_ready.go:81] duration metric: took 8.220247ms waiting for pod "etcd-multinode-405494" in "kube-system" namespace to be "Ready" ...
	I0116 03:11:07.960236  491150 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-405494" in "kube-system" namespace to be "Ready" ...
	I0116 03:11:07.960313  491150 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-405494
	I0116 03:11:07.960322  491150 round_trippers.go:469] Request Headers:
	I0116 03:11:07.960333  491150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 03:11:07.960347  491150 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:11:07.962916  491150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:11:07.962930  491150 round_trippers.go:577] Response Headers:
	I0116 03:11:07.962936  491150 round_trippers.go:580]     Audit-Id: 6d5903c4-8f8d-484a-9d49-8ed444870c4c
	I0116 03:11:07.962942  491150 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:11:07.962947  491150 round_trippers.go:580]     Content-Type: application/json
	I0116 03:11:07.962952  491150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 03:11:07.962957  491150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 03:11:07.962963  491150 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:11:07 GMT
	I0116 03:11:07.963257  491150 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-405494","namespace":"kube-system","uid":"e242d3cf-6cf7-4b47-8d3e-a83e484554a1","resourceVersion":"882","creationTimestamp":"2024-01-16T02:57:09Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.70:8443","kubernetes.io/config.hash":"04bffd1a6d3ee0aae068c41e37830c9b","kubernetes.io/config.mirror":"04bffd1a6d3ee0aae068c41e37830c9b","kubernetes.io/config.seen":"2024-01-16T02:57:02.078602539Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-405494","uid":"396eb6bf-1dc3-46c5-8016-dbf8af754fa2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:57:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7380 chars]
	I0116 03:11:07.963645  491150 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/multinode-405494
	I0116 03:11:07.963660  491150 round_trippers.go:469] Request Headers:
	I0116 03:11:07.963669  491150 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:11:07.963678  491150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 03:11:07.966426  491150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:11:07.966447  491150 round_trippers.go:577] Response Headers:
	I0116 03:11:07.966458  491150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 03:11:07.966466  491150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 03:11:07.966473  491150 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:11:07 GMT
	I0116 03:11:07.966480  491150 round_trippers.go:580]     Audit-Id: e11fc6b2-cdb0-407d-a834-d7536f0fed75
	I0116 03:11:07.966487  491150 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:11:07.966494  491150 round_trippers.go:580]     Content-Type: application/json
	I0116 03:11:07.966598  491150 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-405494","uid":"396eb6bf-1dc3-46c5-8016-dbf8af754fa2","resourceVersion":"915","creationTimestamp":"2024-01-16T02:57:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-405494","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-405494","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_57_12_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:57:07Z","fieldsType":"FieldsV1","fiel [truncated 6212 chars]
	I0116 03:11:07.966998  491150 pod_ready.go:92] pod "kube-apiserver-multinode-405494" in "kube-system" namespace has status "Ready":"True"
	I0116 03:11:07.967025  491150 pod_ready.go:81] duration metric: took 6.77624ms waiting for pod "kube-apiserver-multinode-405494" in "kube-system" namespace to be "Ready" ...
	I0116 03:11:07.967039  491150 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-405494" in "kube-system" namespace to be "Ready" ...
	I0116 03:11:07.967127  491150 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-405494
	I0116 03:11:07.967138  491150 round_trippers.go:469] Request Headers:
	I0116 03:11:07.967147  491150 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:11:07.967156  491150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 03:11:07.971788  491150 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0116 03:11:07.971809  491150 round_trippers.go:577] Response Headers:
	I0116 03:11:07.971819  491150 round_trippers.go:580]     Audit-Id: 762afce7-9d82-4646-a6ff-e66eeb9ac0d8
	I0116 03:11:07.971827  491150 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:11:07.971834  491150 round_trippers.go:580]     Content-Type: application/json
	I0116 03:11:07.971841  491150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 03:11:07.971848  491150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 03:11:07.971856  491150 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:11:07 GMT
	I0116 03:11:07.971989  491150 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-405494","namespace":"kube-system","uid":"0833b412-8909-4660-8e16-19701683358e","resourceVersion":"880","creationTimestamp":"2024-01-16T02:57:12Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"9eb78063d6e219f3cc5940494bdab4b2","kubernetes.io/config.mirror":"9eb78063d6e219f3cc5940494bdab4b2","kubernetes.io/config.seen":"2024-01-16T02:57:11.711589408Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-405494","uid":"396eb6bf-1dc3-46c5-8016-dbf8af754fa2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:57:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6950 chars]
	I0116 03:11:07.972602  491150 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/multinode-405494
	I0116 03:11:07.972621  491150 round_trippers.go:469] Request Headers:
	I0116 03:11:07.972632  491150 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:11:07.972640  491150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 03:11:07.975812  491150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 03:11:07.975840  491150 round_trippers.go:577] Response Headers:
	I0116 03:11:07.975851  491150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 03:11:07.975860  491150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 03:11:07.975867  491150 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:11:07 GMT
	I0116 03:11:07.975874  491150 round_trippers.go:580]     Audit-Id: 4b92252d-9fe8-43e5-bead-ed20e9a02fcf
	I0116 03:11:07.975882  491150 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:11:07.975890  491150 round_trippers.go:580]     Content-Type: application/json
	I0116 03:11:07.976264  491150 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-405494","uid":"396eb6bf-1dc3-46c5-8016-dbf8af754fa2","resourceVersion":"915","creationTimestamp":"2024-01-16T02:57:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-405494","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-405494","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_57_12_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:57:07Z","fieldsType":"FieldsV1","fiel [truncated 6212 chars]
	I0116 03:11:07.976693  491150 pod_ready.go:92] pod "kube-controller-manager-multinode-405494" in "kube-system" namespace has status "Ready":"True"
	I0116 03:11:07.976718  491150 pod_ready.go:81] duration metric: took 9.665051ms waiting for pod "kube-controller-manager-multinode-405494" in "kube-system" namespace to be "Ready" ...
	I0116 03:11:07.976732  491150 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-gg8kv" in "kube-system" namespace to be "Ready" ...
	I0116 03:11:08.132178  491150 request.go:629] Waited for 155.338629ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gg8kv
	I0116 03:11:08.132271  491150 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gg8kv
	I0116 03:11:08.132279  491150 round_trippers.go:469] Request Headers:
	I0116 03:11:08.132290  491150 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:11:08.132300  491150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 03:11:08.135635  491150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 03:11:08.135656  491150 round_trippers.go:577] Response Headers:
	I0116 03:11:08.135664  491150 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:11:08.135669  491150 round_trippers.go:580]     Content-Type: application/json
	I0116 03:11:08.135674  491150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 03:11:08.135679  491150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 03:11:08.135685  491150 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:11:08 GMT
	I0116 03:11:08.135690  491150 round_trippers.go:580]     Audit-Id: d82f76f8-46d5-47b5-a700-58135a2162e8
	I0116 03:11:08.136153  491150 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-gg8kv","generateName":"kube-proxy-","namespace":"kube-system","uid":"32841b88-1b06-46ed-b4ce-f73301ec0a85","resourceVersion":"838","creationTimestamp":"2024-01-16T02:57:23Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"fdad84fd-2af6-4360-a899-0f70f257935c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:57:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fdad84fd-2af6-4360-a899-0f70f257935c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5514 chars]
	I0116 03:11:08.332128  491150 request.go:629] Waited for 195.517067ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.70:8443/api/v1/nodes/multinode-405494
	I0116 03:11:08.332227  491150 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/multinode-405494
	I0116 03:11:08.332257  491150 round_trippers.go:469] Request Headers:
	I0116 03:11:08.332269  491150 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:11:08.332279  491150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 03:11:08.335966  491150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 03:11:08.335992  491150 round_trippers.go:577] Response Headers:
	I0116 03:11:08.336003  491150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 03:11:08.336011  491150 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:11:08 GMT
	I0116 03:11:08.336019  491150 round_trippers.go:580]     Audit-Id: 04306df2-bfb1-406a-8c1c-c0e34afff9a0
	I0116 03:11:08.336057  491150 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:11:08.336066  491150 round_trippers.go:580]     Content-Type: application/json
	I0116 03:11:08.336074  491150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 03:11:08.337005  491150 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-405494","uid":"396eb6bf-1dc3-46c5-8016-dbf8af754fa2","resourceVersion":"915","creationTimestamp":"2024-01-16T02:57:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-405494","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-405494","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_57_12_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:57:07Z","fieldsType":"FieldsV1","fiel [truncated 6212 chars]
	I0116 03:11:08.337458  491150 pod_ready.go:92] pod "kube-proxy-gg8kv" in "kube-system" namespace has status "Ready":"True"
	I0116 03:11:08.337480  491150 pod_ready.go:81] duration metric: took 360.739673ms waiting for pod "kube-proxy-gg8kv" in "kube-system" namespace to be "Ready" ...
	I0116 03:11:08.337493  491150 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-ghscp" in "kube-system" namespace to be "Ready" ...
	I0116 03:11:08.532464  491150 request.go:629] Waited for 194.89321ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ghscp
	I0116 03:11:08.532547  491150 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ghscp
	I0116 03:11:08.532561  491150 round_trippers.go:469] Request Headers:
	I0116 03:11:08.532573  491150 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:11:08.532588  491150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 03:11:08.535468  491150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:11:08.535505  491150 round_trippers.go:577] Response Headers:
	I0116 03:11:08.535516  491150 round_trippers.go:580]     Audit-Id: d6eda465-29aa-4def-b9f0-2657c0b99890
	I0116 03:11:08.535525  491150 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:11:08.535533  491150 round_trippers.go:580]     Content-Type: application/json
	I0116 03:11:08.535541  491150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 03:11:08.535549  491150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 03:11:08.535558  491150 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:11:08 GMT
	I0116 03:11:08.535772  491150 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-ghscp","generateName":"kube-proxy-","namespace":"kube-system","uid":"62b6191a-df8d-444d-9176-3f265fd2084d","resourceVersion":"1231","creationTimestamp":"2024-01-16T02:58:49Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"fdad84fd-2af6-4360-a899-0f70f257935c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:58:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fdad84fd-2af6-4360-a899-0f70f257935c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5887 chars]
	I0116 03:11:08.732173  491150 request.go:629] Waited for 195.869674ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.70:8443/api/v1/nodes/multinode-405494-m03
	I0116 03:11:08.732258  491150 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/multinode-405494-m03
	I0116 03:11:08.732265  491150 round_trippers.go:469] Request Headers:
	I0116 03:11:08.732275  491150 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:11:08.732288  491150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 03:11:08.735817  491150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 03:11:08.735847  491150 round_trippers.go:577] Response Headers:
	I0116 03:11:08.735859  491150 round_trippers.go:580]     Audit-Id: 9f748497-6a17-41db-a9cc-7e07e673c26a
	I0116 03:11:08.735867  491150 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:11:08.735875  491150 round_trippers.go:580]     Content-Type: application/json
	I0116 03:11:08.735883  491150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 03:11:08.735891  491150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 03:11:08.735899  491150 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:11:08 GMT
	I0116 03:11:08.736071  491150 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-405494-m03","uid":"1f238593-3a8c-44ce-8bab-97495d17a848","resourceVersion":"1227","creationTimestamp":"2024-01-16T03:11:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-405494-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-405494","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T03_11_07_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:11:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta
data":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec": [truncated 4223 chars]
	I0116 03:11:08.932536  491150 request.go:629] Waited for 93.819937ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ghscp
	I0116 03:11:08.932613  491150 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ghscp
	I0116 03:11:08.932618  491150 round_trippers.go:469] Request Headers:
	I0116 03:11:08.932627  491150 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:11:08.932636  491150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 03:11:08.935992  491150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 03:11:08.936020  491150 round_trippers.go:577] Response Headers:
	I0116 03:11:08.936030  491150 round_trippers.go:580]     Audit-Id: 07c15a57-9e4b-4d90-b9d3-43da9984b5c4
	I0116 03:11:08.936057  491150 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:11:08.936069  491150 round_trippers.go:580]     Content-Type: application/json
	I0116 03:11:08.936076  491150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 03:11:08.936091  491150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 03:11:08.936098  491150 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:11:08 GMT
	I0116 03:11:08.936709  491150 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-ghscp","generateName":"kube-proxy-","namespace":"kube-system","uid":"62b6191a-df8d-444d-9176-3f265fd2084d","resourceVersion":"1245","creationTimestamp":"2024-01-16T02:58:49Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"fdad84fd-2af6-4360-a899-0f70f257935c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:58:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fdad84fd-2af6-4360-a899-0f70f257935c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5731 chars]
	I0116 03:11:09.132558  491150 request.go:629] Waited for 195.404413ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.70:8443/api/v1/nodes/multinode-405494-m03
	I0116 03:11:09.132645  491150 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/multinode-405494-m03
	I0116 03:11:09.132650  491150 round_trippers.go:469] Request Headers:
	I0116 03:11:09.132659  491150 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:11:09.132666  491150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 03:11:09.135564  491150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:11:09.135594  491150 round_trippers.go:577] Response Headers:
	I0116 03:11:09.135605  491150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 03:11:09.135615  491150 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:11:09 GMT
	I0116 03:11:09.135630  491150 round_trippers.go:580]     Audit-Id: 9ce43c6b-a0f9-48c9-afb0-86d09521bf2e
	I0116 03:11:09.135641  491150 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:11:09.135646  491150 round_trippers.go:580]     Content-Type: application/json
	I0116 03:11:09.135652  491150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 03:11:09.135948  491150 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-405494-m03","uid":"1f238593-3a8c-44ce-8bab-97495d17a848","resourceVersion":"1227","creationTimestamp":"2024-01-16T03:11:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-405494-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-405494","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T03_11_07_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:11:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta
data":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec": [truncated 4223 chars]
	I0116 03:11:09.136330  491150 pod_ready.go:92] pod "kube-proxy-ghscp" in "kube-system" namespace has status "Ready":"True"
	I0116 03:11:09.136352  491150 pod_ready.go:81] duration metric: took 798.850908ms waiting for pod "kube-proxy-ghscp" in "kube-system" namespace to be "Ready" ...
	I0116 03:11:09.136363  491150 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-m46rb" in "kube-system" namespace to be "Ready" ...
	I0116 03:11:09.331770  491150 request.go:629] Waited for 195.315865ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/kube-proxy-m46rb
	I0116 03:11:09.331854  491150 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/kube-proxy-m46rb
	I0116 03:11:09.331862  491150 round_trippers.go:469] Request Headers:
	I0116 03:11:09.331872  491150 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:11:09.331882  491150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 03:11:09.335070  491150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 03:11:09.335103  491150 round_trippers.go:577] Response Headers:
	I0116 03:11:09.335114  491150 round_trippers.go:580]     Content-Type: application/json
	I0116 03:11:09.335122  491150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 03:11:09.335130  491150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 03:11:09.335138  491150 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:11:09 GMT
	I0116 03:11:09.335146  491150 round_trippers.go:580]     Audit-Id: faeb35aa-82f1-417a-916b-7ebf6fa0e871
	I0116 03:11:09.335155  491150 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:11:09.335316  491150 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-m46rb","generateName":"kube-proxy-","namespace":"kube-system","uid":"960fb4d4-836f-42c5-9d56-03daae9f5a12","resourceVersion":"1071","creationTimestamp":"2024-01-16T02:58:02Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"fdad84fd-2af6-4360-a899-0f70f257935c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:58:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fdad84fd-2af6-4360-a899-0f70f257935c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5727 chars]
	I0116 03:11:09.532030  491150 request.go:629] Waited for 196.189578ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.70:8443/api/v1/nodes/multinode-405494-m02
	I0116 03:11:09.532156  491150 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/multinode-405494-m02
	I0116 03:11:09.532166  491150 round_trippers.go:469] Request Headers:
	I0116 03:11:09.532174  491150 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:11:09.532183  491150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 03:11:09.535333  491150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 03:11:09.535359  491150 round_trippers.go:577] Response Headers:
	I0116 03:11:09.535370  491150 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:11:09 GMT
	I0116 03:11:09.535377  491150 round_trippers.go:580]     Audit-Id: 0e6c78ac-def4-40cd-9514-463adce87704
	I0116 03:11:09.535384  491150 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:11:09.535391  491150 round_trippers.go:580]     Content-Type: application/json
	I0116 03:11:09.535398  491150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 03:11:09.535405  491150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 03:11:09.535599  491150 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-405494-m02","uid":"90a1608a-dfc1-4cf0-9a8d-7faa9ad91c37","resourceVersion":"1226","creationTimestamp":"2024-01-16T03:09:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-405494-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-405494","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T03_11_07_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:09:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta
data":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec": [truncated 3993 chars]
	I0116 03:11:09.535903  491150 pod_ready.go:92] pod "kube-proxy-m46rb" in "kube-system" namespace has status "Ready":"True"
	I0116 03:11:09.535923  491150 pod_ready.go:81] duration metric: took 399.549097ms waiting for pod "kube-proxy-m46rb" in "kube-system" namespace to be "Ready" ...
	I0116 03:11:09.535944  491150 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-405494" in "kube-system" namespace to be "Ready" ...
	I0116 03:11:09.732117  491150 request.go:629] Waited for 196.084978ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-405494
	I0116 03:11:09.732196  491150 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-405494
	I0116 03:11:09.732201  491150 round_trippers.go:469] Request Headers:
	I0116 03:11:09.732209  491150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 03:11:09.732216  491150 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:11:09.735087  491150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:11:09.735111  491150 round_trippers.go:577] Response Headers:
	I0116 03:11:09.735122  491150 round_trippers.go:580]     Audit-Id: 9d67a25e-5495-429c-9112-5d4df0221227
	I0116 03:11:09.735130  491150 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:11:09.735137  491150 round_trippers.go:580]     Content-Type: application/json
	I0116 03:11:09.735145  491150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 03:11:09.735153  491150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 03:11:09.735161  491150 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:11:09 GMT
	I0116 03:11:09.735505  491150 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-405494","namespace":"kube-system","uid":"70c980cb-4ff9-45f5-960f-d8afa355229c","resourceVersion":"884","creationTimestamp":"2024-01-16T02:57:11Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"65069d20830c0b10a3d28746871e48c2","kubernetes.io/config.mirror":"65069d20830c0b10a3d28746871e48c2","kubernetes.io/config.seen":"2024-01-16T02:57:02.078604553Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-405494","uid":"396eb6bf-1dc3-46c5-8016-dbf8af754fa2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:57:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4680 chars]
	I0116 03:11:09.932326  491150 request.go:629] Waited for 196.362493ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.70:8443/api/v1/nodes/multinode-405494
	I0116 03:11:09.932409  491150 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/multinode-405494
	I0116 03:11:09.932424  491150 round_trippers.go:469] Request Headers:
	I0116 03:11:09.932434  491150 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:11:09.932444  491150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 03:11:09.935208  491150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:11:09.935232  491150 round_trippers.go:577] Response Headers:
	I0116 03:11:09.935243  491150 round_trippers.go:580]     Audit-Id: 9cb75642-a6f3-436d-bda7-3d43503ac24d
	I0116 03:11:09.935251  491150 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:11:09.935260  491150 round_trippers.go:580]     Content-Type: application/json
	I0116 03:11:09.935268  491150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 03:11:09.935275  491150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 03:11:09.935284  491150 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:11:09 GMT
	I0116 03:11:09.935844  491150 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-405494","uid":"396eb6bf-1dc3-46c5-8016-dbf8af754fa2","resourceVersion":"915","creationTimestamp":"2024-01-16T02:57:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-405494","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-405494","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_57_12_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T02:57:07Z","fieldsType":"FieldsV1","fiel [truncated 6212 chars]
	I0116 03:11:09.936256  491150 pod_ready.go:92] pod "kube-scheduler-multinode-405494" in "kube-system" namespace has status "Ready":"True"
	I0116 03:11:09.936278  491150 pod_ready.go:81] duration metric: took 400.320686ms waiting for pod "kube-scheduler-multinode-405494" in "kube-system" namespace to be "Ready" ...
	I0116 03:11:09.936292  491150 pod_ready.go:38] duration metric: took 2.00120211s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 03:11:09.936312  491150 system_svc.go:44] waiting for kubelet service to be running ....
	I0116 03:11:09.936373  491150 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 03:11:09.949831  491150 system_svc.go:56] duration metric: took 13.508776ms WaitForService to wait for kubelet.
	I0116 03:11:09.949861  491150 kubeadm.go:581] duration metric: took 2.038463856s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0116 03:11:09.949886  491150 node_conditions.go:102] verifying NodePressure condition ...
	I0116 03:11:10.131638  491150 request.go:629] Waited for 181.635533ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.70:8443/api/v1/nodes
	I0116 03:11:10.131747  491150 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes
	I0116 03:11:10.131761  491150 round_trippers.go:469] Request Headers:
	I0116 03:11:10.131773  491150 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:11:10.131786  491150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0116 03:11:10.134862  491150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 03:11:10.134886  491150 round_trippers.go:577] Response Headers:
	I0116 03:11:10.134893  491150 round_trippers.go:580]     Audit-Id: 098d08c5-bf8d-44f1-930a-58e2bdff4e04
	I0116 03:11:10.134899  491150 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:11:10.134912  491150 round_trippers.go:580]     Content-Type: application/json
	I0116 03:11:10.134917  491150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2eb3a74d-95cc-441b-9581-edd9c8559d9a
	I0116 03:11:10.134922  491150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8cb5fba-ee3f-4ff3-9cc2-317cb6983ab2
	I0116 03:11:10.134927  491150 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:11:10 GMT
	I0116 03:11:10.135205  491150 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1251"},"items":[{"metadata":{"name":"multinode-405494","uid":"396eb6bf-1dc3-46c5-8016-dbf8af754fa2","resourceVersion":"915","creationTimestamp":"2024-01-16T02:57:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-405494","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-405494","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_57_12_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedField
s":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time": [truncated 16466 chars]
	I0116 03:11:10.135818  491150 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 03:11:10.135839  491150 node_conditions.go:123] node cpu capacity is 2
	I0116 03:11:10.135850  491150 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 03:11:10.135855  491150 node_conditions.go:123] node cpu capacity is 2
	I0116 03:11:10.135859  491150 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 03:11:10.135865  491150 node_conditions.go:123] node cpu capacity is 2
	I0116 03:11:10.135873  491150 node_conditions.go:105] duration metric: took 185.979894ms to run NodePressure ...
	I0116 03:11:10.135894  491150 start.go:228] waiting for startup goroutines ...
	I0116 03:11:10.135912  491150 start.go:242] writing updated cluster config ...
	I0116 03:11:10.136235  491150 ssh_runner.go:195] Run: rm -f paused
	I0116 03:11:10.187687  491150 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I0116 03:11:10.191336  491150 out.go:177] * Done! kubectl is now configured to use "multinode-405494" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	-- Journal begins at Tue 2024-01-16 03:06:52 UTC, ends at Tue 2024-01-16 03:11:11 UTC. --
	Jan 16 03:11:11 multinode-405494 crio[710]: time="2024-01-16 03:11:11.388882605Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705374671388861849,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125543,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=8a50dde1-2c77-4d31-9ebe-d4a841a17575 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 03:11:11 multinode-405494 crio[710]: time="2024-01-16 03:11:11.389392723Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=f279dfe7-6973-4a7e-8a8c-ed54023e93d2 name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 03:11:11 multinode-405494 crio[710]: time="2024-01-16 03:11:11.389462031Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=f279dfe7-6973-4a7e-8a8c-ed54023e93d2 name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 03:11:11 multinode-405494 crio[710]: time="2024-01-16 03:11:11.389741989Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cb602bdaabc6f336866ad5429a7339fbea2f26f418c3a4904367a18acd93cf34,PodSandboxId:24aad1a2c4795e5188b8db19a4319465b881ee7f3e198e78ae426eca07a67beb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705374468888471536,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6f12cfa-46b3-4840-a7e2-258c063a19c2,},Annotations:map[string]string{io.kubernetes.container.hash: 760f1fb4,io.kubernetes.container.restartCount: 3,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40ee165656d25b93e509d0b0c85022d250d63d93a328370e69accdaf5be5ee99,PodSandboxId:cd6825c0e7a113a26de38ec90e74ba6fd53888c18b6d456d6d879579271dd5d7,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1705374467797640914,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-r9bv6,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 73a7a6a1-28ed-452e-8073-025f2e1289be,},Annotations:map[string]string{io.kubernetes.container.hash: 4950379d,io.kubernetes.container.restartCount: 1,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b86a2b278e45d7d530ca5a68263a3b0a6a1146901dad729f67379acd63497dfa,PodSandboxId:ece80990e433e1154c1c64201c137e49aa9051c84faf02bd0c082852ab5dd37c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1705374466260462012,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-vwqvk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 096151e2-c59c-4dcf-bd29-2029901902c9,},Annotations:map[string]string{io.kubernetes.container.hash: 7c9940b4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca11c0a28ef1e201a42e7668714d90cf27698f74c492e680ea24ea2ad438728b,PodSandboxId:fdc81bb550126e9189ff1fd5a5457abe2de8690971c4fab0f3029c4146cfb831,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1705374453628028362,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8t86n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 4d421823-26dd-467d-94d4-28387c8e3793,},Annotations:map[string]string{io.kubernetes.container.hash: 2e973eea,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78fcf57887e8c06f78c935ee9136e99528c9ab92defe9cdf0b9d36b3bd4cf12c,PodSandboxId:24aad1a2c4795e5188b8db19a4319465b881ee7f3e198e78ae426eca07a67beb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1705374451946180985,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: c6f12cfa-46b3-4840-a7e2-258c063a19c2,},Annotations:map[string]string{io.kubernetes.container.hash: 760f1fb4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:734ada0e6e80ae5663b15aa948979e5f04c4d893a8d18393c3981d65b1422fa3,PodSandboxId:52a0effefec3d097e425b7ea1036ea047360ed0c70d12ec5deddb8d998516057,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1705374451368904482,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gg8kv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32841b88-1b06-46ed-b4ce-f73301ec
0a85,},Annotations:map[string]string{io.kubernetes.container.hash: 3089e760,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:373beef48651fc180258aafdd447b85336f1af385531b8b6a6bf6c44e02d1222,PodSandboxId:cbc6b778d94a9e0c44367fcdfb285ac93c4dd98103e60e46ee250d373b675abc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1705374444455913764,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-405494,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65069d20830c0b10a3d28746871e48c2,},Annot
ations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:154192bc6973eee8b37ad510934b4a2d49209234dbf4bc0d79089517b8d264b1,PodSandboxId:63c559246e1b79c4035275cd4ddf26c01ab3f42c7f1288acc1a4fe637a5bbb6c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1705374444197994103,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-405494,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a9e67d0e87fe64d9531234ab850034d,},Annotations:map[string]string{io.kubernetes.container.has
h: 56e72d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf296b7da0081d50085828050c9dbf1aefffc157009c84b0baa48f7e2c1ffda9,PodSandboxId:f74469c88bdb8d9ae18f86d4a4921cb92c8308b155af95fa7eef7d3a3f3acefa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1705374444279698495,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-405494,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9eb78063d6e219f3cc5940494bdab4b2,},Annotations:map[string]string{io.ku
bernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:394700e51984706e2299da18df128406493acd293e75039969f294812deca71c,PodSandboxId:1cf8aab81c71ce9863f23ee6f1fb70e59fc19fc7f896b7f3243b069939940452,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1705374443896385317,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-405494,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04bffd1a6d3ee0aae068c41e37830c9b,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: aaf37b8e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=f279dfe7-6973-4a7e-8a8c-ed54023e93d2 name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 03:11:11 multinode-405494 crio[710]: time="2024-01-16 03:11:11.432030617Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=76b6114d-1b6b-4cff-a91b-dd649d043bcd name=/runtime.v1.RuntimeService/Version
	Jan 16 03:11:11 multinode-405494 crio[710]: time="2024-01-16 03:11:11.432138931Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=76b6114d-1b6b-4cff-a91b-dd649d043bcd name=/runtime.v1.RuntimeService/Version
	Jan 16 03:11:11 multinode-405494 crio[710]: time="2024-01-16 03:11:11.433697094Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=f4d1366d-2af2-4a88-9b6c-e4c768ff9548 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 03:11:11 multinode-405494 crio[710]: time="2024-01-16 03:11:11.434097979Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705374671434084041,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125543,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=f4d1366d-2af2-4a88-9b6c-e4c768ff9548 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 03:11:11 multinode-405494 crio[710]: time="2024-01-16 03:11:11.435066508Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=d86bf5af-c8da-490b-9649-095f0524da9b name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 03:11:11 multinode-405494 crio[710]: time="2024-01-16 03:11:11.435121039Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=d86bf5af-c8da-490b-9649-095f0524da9b name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 03:11:11 multinode-405494 crio[710]: time="2024-01-16 03:11:11.435351205Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cb602bdaabc6f336866ad5429a7339fbea2f26f418c3a4904367a18acd93cf34,PodSandboxId:24aad1a2c4795e5188b8db19a4319465b881ee7f3e198e78ae426eca07a67beb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705374468888471536,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6f12cfa-46b3-4840-a7e2-258c063a19c2,},Annotations:map[string]string{io.kubernetes.container.hash: 760f1fb4,io.kubernetes.container.restartCount: 3,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40ee165656d25b93e509d0b0c85022d250d63d93a328370e69accdaf5be5ee99,PodSandboxId:cd6825c0e7a113a26de38ec90e74ba6fd53888c18b6d456d6d879579271dd5d7,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1705374467797640914,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-r9bv6,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 73a7a6a1-28ed-452e-8073-025f2e1289be,},Annotations:map[string]string{io.kubernetes.container.hash: 4950379d,io.kubernetes.container.restartCount: 1,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b86a2b278e45d7d530ca5a68263a3b0a6a1146901dad729f67379acd63497dfa,PodSandboxId:ece80990e433e1154c1c64201c137e49aa9051c84faf02bd0c082852ab5dd37c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1705374466260462012,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-vwqvk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 096151e2-c59c-4dcf-bd29-2029901902c9,},Annotations:map[string]string{io.kubernetes.container.hash: 7c9940b4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca11c0a28ef1e201a42e7668714d90cf27698f74c492e680ea24ea2ad438728b,PodSandboxId:fdc81bb550126e9189ff1fd5a5457abe2de8690971c4fab0f3029c4146cfb831,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1705374453628028362,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8t86n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 4d421823-26dd-467d-94d4-28387c8e3793,},Annotations:map[string]string{io.kubernetes.container.hash: 2e973eea,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78fcf57887e8c06f78c935ee9136e99528c9ab92defe9cdf0b9d36b3bd4cf12c,PodSandboxId:24aad1a2c4795e5188b8db19a4319465b881ee7f3e198e78ae426eca07a67beb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1705374451946180985,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: c6f12cfa-46b3-4840-a7e2-258c063a19c2,},Annotations:map[string]string{io.kubernetes.container.hash: 760f1fb4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:734ada0e6e80ae5663b15aa948979e5f04c4d893a8d18393c3981d65b1422fa3,PodSandboxId:52a0effefec3d097e425b7ea1036ea047360ed0c70d12ec5deddb8d998516057,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1705374451368904482,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gg8kv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32841b88-1b06-46ed-b4ce-f73301ec
0a85,},Annotations:map[string]string{io.kubernetes.container.hash: 3089e760,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:373beef48651fc180258aafdd447b85336f1af385531b8b6a6bf6c44e02d1222,PodSandboxId:cbc6b778d94a9e0c44367fcdfb285ac93c4dd98103e60e46ee250d373b675abc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1705374444455913764,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-405494,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65069d20830c0b10a3d28746871e48c2,},Annot
ations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:154192bc6973eee8b37ad510934b4a2d49209234dbf4bc0d79089517b8d264b1,PodSandboxId:63c559246e1b79c4035275cd4ddf26c01ab3f42c7f1288acc1a4fe637a5bbb6c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1705374444197994103,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-405494,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a9e67d0e87fe64d9531234ab850034d,},Annotations:map[string]string{io.kubernetes.container.has
h: 56e72d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf296b7da0081d50085828050c9dbf1aefffc157009c84b0baa48f7e2c1ffda9,PodSandboxId:f74469c88bdb8d9ae18f86d4a4921cb92c8308b155af95fa7eef7d3a3f3acefa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1705374444279698495,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-405494,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9eb78063d6e219f3cc5940494bdab4b2,},Annotations:map[string]string{io.ku
bernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:394700e51984706e2299da18df128406493acd293e75039969f294812deca71c,PodSandboxId:1cf8aab81c71ce9863f23ee6f1fb70e59fc19fc7f896b7f3243b069939940452,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1705374443896385317,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-405494,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04bffd1a6d3ee0aae068c41e37830c9b,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: aaf37b8e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=d86bf5af-c8da-490b-9649-095f0524da9b name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 03:11:11 multinode-405494 crio[710]: time="2024-01-16 03:11:11.476933826Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=acdbb080-071e-428c-a03f-bbf45b0c61c8 name=/runtime.v1.RuntimeService/Version
	Jan 16 03:11:11 multinode-405494 crio[710]: time="2024-01-16 03:11:11.477020482Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=acdbb080-071e-428c-a03f-bbf45b0c61c8 name=/runtime.v1.RuntimeService/Version
	Jan 16 03:11:11 multinode-405494 crio[710]: time="2024-01-16 03:11:11.479117262Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=17dab5ff-c591-415d-9cdb-21ef2a0e2641 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 03:11:11 multinode-405494 crio[710]: time="2024-01-16 03:11:11.479494112Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705374671479478828,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125543,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=17dab5ff-c591-415d-9cdb-21ef2a0e2641 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 03:11:11 multinode-405494 crio[710]: time="2024-01-16 03:11:11.480217843Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=e71b3648-0c71-4a85-bfd5-d79c06146afc name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 03:11:11 multinode-405494 crio[710]: time="2024-01-16 03:11:11.480292124Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=e71b3648-0c71-4a85-bfd5-d79c06146afc name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 03:11:11 multinode-405494 crio[710]: time="2024-01-16 03:11:11.480511817Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cb602bdaabc6f336866ad5429a7339fbea2f26f418c3a4904367a18acd93cf34,PodSandboxId:24aad1a2c4795e5188b8db19a4319465b881ee7f3e198e78ae426eca07a67beb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705374468888471536,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6f12cfa-46b3-4840-a7e2-258c063a19c2,},Annotations:map[string]string{io.kubernetes.container.hash: 760f1fb4,io.kubernetes.container.restartCount: 3,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40ee165656d25b93e509d0b0c85022d250d63d93a328370e69accdaf5be5ee99,PodSandboxId:cd6825c0e7a113a26de38ec90e74ba6fd53888c18b6d456d6d879579271dd5d7,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1705374467797640914,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-r9bv6,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 73a7a6a1-28ed-452e-8073-025f2e1289be,},Annotations:map[string]string{io.kubernetes.container.hash: 4950379d,io.kubernetes.container.restartCount: 1,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b86a2b278e45d7d530ca5a68263a3b0a6a1146901dad729f67379acd63497dfa,PodSandboxId:ece80990e433e1154c1c64201c137e49aa9051c84faf02bd0c082852ab5dd37c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1705374466260462012,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-vwqvk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 096151e2-c59c-4dcf-bd29-2029901902c9,},Annotations:map[string]string{io.kubernetes.container.hash: 7c9940b4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca11c0a28ef1e201a42e7668714d90cf27698f74c492e680ea24ea2ad438728b,PodSandboxId:fdc81bb550126e9189ff1fd5a5457abe2de8690971c4fab0f3029c4146cfb831,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1705374453628028362,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8t86n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 4d421823-26dd-467d-94d4-28387c8e3793,},Annotations:map[string]string{io.kubernetes.container.hash: 2e973eea,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78fcf57887e8c06f78c935ee9136e99528c9ab92defe9cdf0b9d36b3bd4cf12c,PodSandboxId:24aad1a2c4795e5188b8db19a4319465b881ee7f3e198e78ae426eca07a67beb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1705374451946180985,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: c6f12cfa-46b3-4840-a7e2-258c063a19c2,},Annotations:map[string]string{io.kubernetes.container.hash: 760f1fb4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:734ada0e6e80ae5663b15aa948979e5f04c4d893a8d18393c3981d65b1422fa3,PodSandboxId:52a0effefec3d097e425b7ea1036ea047360ed0c70d12ec5deddb8d998516057,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1705374451368904482,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gg8kv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32841b88-1b06-46ed-b4ce-f73301ec
0a85,},Annotations:map[string]string{io.kubernetes.container.hash: 3089e760,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:373beef48651fc180258aafdd447b85336f1af385531b8b6a6bf6c44e02d1222,PodSandboxId:cbc6b778d94a9e0c44367fcdfb285ac93c4dd98103e60e46ee250d373b675abc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1705374444455913764,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-405494,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65069d20830c0b10a3d28746871e48c2,},Annot
ations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:154192bc6973eee8b37ad510934b4a2d49209234dbf4bc0d79089517b8d264b1,PodSandboxId:63c559246e1b79c4035275cd4ddf26c01ab3f42c7f1288acc1a4fe637a5bbb6c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1705374444197994103,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-405494,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a9e67d0e87fe64d9531234ab850034d,},Annotations:map[string]string{io.kubernetes.container.has
h: 56e72d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf296b7da0081d50085828050c9dbf1aefffc157009c84b0baa48f7e2c1ffda9,PodSandboxId:f74469c88bdb8d9ae18f86d4a4921cb92c8308b155af95fa7eef7d3a3f3acefa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1705374444279698495,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-405494,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9eb78063d6e219f3cc5940494bdab4b2,},Annotations:map[string]string{io.ku
bernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:394700e51984706e2299da18df128406493acd293e75039969f294812deca71c,PodSandboxId:1cf8aab81c71ce9863f23ee6f1fb70e59fc19fc7f896b7f3243b069939940452,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1705374443896385317,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-405494,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04bffd1a6d3ee0aae068c41e37830c9b,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: aaf37b8e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=e71b3648-0c71-4a85-bfd5-d79c06146afc name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 03:11:11 multinode-405494 crio[710]: time="2024-01-16 03:11:11.525063011Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=b87ea5fb-a1ff-4d40-a9ce-7f69a534857c name=/runtime.v1.RuntimeService/Version
	Jan 16 03:11:11 multinode-405494 crio[710]: time="2024-01-16 03:11:11.525139038Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=b87ea5fb-a1ff-4d40-a9ce-7f69a534857c name=/runtime.v1.RuntimeService/Version
	Jan 16 03:11:11 multinode-405494 crio[710]: time="2024-01-16 03:11:11.526247835Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=677c61ca-a92f-452e-bd27-81b32370172b name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 03:11:11 multinode-405494 crio[710]: time="2024-01-16 03:11:11.526732245Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705374671526717269,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125543,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=677c61ca-a92f-452e-bd27-81b32370172b name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 03:11:11 multinode-405494 crio[710]: time="2024-01-16 03:11:11.527299307Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=d44f87ff-8f1e-4596-a26d-40e7e054af5d name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 03:11:11 multinode-405494 crio[710]: time="2024-01-16 03:11:11.527427637Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=d44f87ff-8f1e-4596-a26d-40e7e054af5d name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 03:11:11 multinode-405494 crio[710]: time="2024-01-16 03:11:11.527747047Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cb602bdaabc6f336866ad5429a7339fbea2f26f418c3a4904367a18acd93cf34,PodSandboxId:24aad1a2c4795e5188b8db19a4319465b881ee7f3e198e78ae426eca07a67beb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705374468888471536,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6f12cfa-46b3-4840-a7e2-258c063a19c2,},Annotations:map[string]string{io.kubernetes.container.hash: 760f1fb4,io.kubernetes.container.restartCount: 3,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40ee165656d25b93e509d0b0c85022d250d63d93a328370e69accdaf5be5ee99,PodSandboxId:cd6825c0e7a113a26de38ec90e74ba6fd53888c18b6d456d6d879579271dd5d7,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1705374467797640914,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-r9bv6,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 73a7a6a1-28ed-452e-8073-025f2e1289be,},Annotations:map[string]string{io.kubernetes.container.hash: 4950379d,io.kubernetes.container.restartCount: 1,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b86a2b278e45d7d530ca5a68263a3b0a6a1146901dad729f67379acd63497dfa,PodSandboxId:ece80990e433e1154c1c64201c137e49aa9051c84faf02bd0c082852ab5dd37c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1705374466260462012,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-vwqvk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 096151e2-c59c-4dcf-bd29-2029901902c9,},Annotations:map[string]string{io.kubernetes.container.hash: 7c9940b4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca11c0a28ef1e201a42e7668714d90cf27698f74c492e680ea24ea2ad438728b,PodSandboxId:fdc81bb550126e9189ff1fd5a5457abe2de8690971c4fab0f3029c4146cfb831,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1705374453628028362,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8t86n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 4d421823-26dd-467d-94d4-28387c8e3793,},Annotations:map[string]string{io.kubernetes.container.hash: 2e973eea,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78fcf57887e8c06f78c935ee9136e99528c9ab92defe9cdf0b9d36b3bd4cf12c,PodSandboxId:24aad1a2c4795e5188b8db19a4319465b881ee7f3e198e78ae426eca07a67beb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1705374451946180985,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: c6f12cfa-46b3-4840-a7e2-258c063a19c2,},Annotations:map[string]string{io.kubernetes.container.hash: 760f1fb4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:734ada0e6e80ae5663b15aa948979e5f04c4d893a8d18393c3981d65b1422fa3,PodSandboxId:52a0effefec3d097e425b7ea1036ea047360ed0c70d12ec5deddb8d998516057,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1705374451368904482,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gg8kv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32841b88-1b06-46ed-b4ce-f73301ec
0a85,},Annotations:map[string]string{io.kubernetes.container.hash: 3089e760,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:373beef48651fc180258aafdd447b85336f1af385531b8b6a6bf6c44e02d1222,PodSandboxId:cbc6b778d94a9e0c44367fcdfb285ac93c4dd98103e60e46ee250d373b675abc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1705374444455913764,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-405494,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65069d20830c0b10a3d28746871e48c2,},Annot
ations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:154192bc6973eee8b37ad510934b4a2d49209234dbf4bc0d79089517b8d264b1,PodSandboxId:63c559246e1b79c4035275cd4ddf26c01ab3f42c7f1288acc1a4fe637a5bbb6c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1705374444197994103,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-405494,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a9e67d0e87fe64d9531234ab850034d,},Annotations:map[string]string{io.kubernetes.container.has
h: 56e72d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf296b7da0081d50085828050c9dbf1aefffc157009c84b0baa48f7e2c1ffda9,PodSandboxId:f74469c88bdb8d9ae18f86d4a4921cb92c8308b155af95fa7eef7d3a3f3acefa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1705374444279698495,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-405494,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9eb78063d6e219f3cc5940494bdab4b2,},Annotations:map[string]string{io.ku
bernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:394700e51984706e2299da18df128406493acd293e75039969f294812deca71c,PodSandboxId:1cf8aab81c71ce9863f23ee6f1fb70e59fc19fc7f896b7f3243b069939940452,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1705374443896385317,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-405494,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04bffd1a6d3ee0aae068c41e37830c9b,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: aaf37b8e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=d44f87ff-8f1e-4596-a26d-40e7e054af5d name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	cb602bdaabc6f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Running             storage-provisioner       3                   24aad1a2c4795       storage-provisioner
	40ee165656d25       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   1                   cd6825c0e7a11       busybox-5bc68d56bd-r9bv6
	b86a2b278e45d       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      3 minutes ago       Running             coredns                   1                   ece80990e433e       coredns-5dd5756b68-vwqvk
	ca11c0a28ef1e       c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc                                      3 minutes ago       Running             kindnet-cni               1                   fdc81bb550126       kindnet-8t86n
	78fcf57887e8c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Exited              storage-provisioner       2                   24aad1a2c4795       storage-provisioner
	734ada0e6e80a       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      3 minutes ago       Running             kube-proxy                1                   52a0effefec3d       kube-proxy-gg8kv
	373beef48651f       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      3 minutes ago       Running             kube-scheduler            1                   cbc6b778d94a9       kube-scheduler-multinode-405494
	bf296b7da0081       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      3 minutes ago       Running             kube-controller-manager   1                   f74469c88bdb8       kube-controller-manager-multinode-405494
	154192bc6973e       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      3 minutes ago       Running             etcd                      1                   63c559246e1b7       etcd-multinode-405494
	394700e519847       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      3 minutes ago       Running             kube-apiserver            1                   1cf8aab81c71c       kube-apiserver-multinode-405494
	
	
	==> coredns [b86a2b278e45d7d530ca5a68263a3b0a6a1146901dad729f67379acd63497dfa] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:59781 - 31064 "HINFO IN 4000599600653471247.2119427901353248597. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013505868s
	
	
	==> describe nodes <==
	Name:               multinode-405494
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-405494
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6e8fa5f64d0e7272be43ff25ed3826261f0a2578
	                    minikube.k8s.io/name=multinode-405494
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_16T02_57_12_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Jan 2024 02:57:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-405494
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Jan 2024 03:11:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Jan 2024 03:07:59 +0000   Tue, 16 Jan 2024 02:57:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Jan 2024 03:07:59 +0000   Tue, 16 Jan 2024 02:57:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Jan 2024 03:07:59 +0000   Tue, 16 Jan 2024 02:57:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Jan 2024 03:07:59 +0000   Tue, 16 Jan 2024 03:07:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.70
	  Hostname:    multinode-405494
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 5f5f8b6b5e6a46f19cf1b016b5a8fabf
	  System UUID:                5f5f8b6b-5e6a-46f1-9cf1-b016b5a8fabf
	  Boot ID:                    f951b8b3-70e4-4cf0-9c6d-8641113e89fb
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-r9bv6                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 coredns-5dd5756b68-vwqvk                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-multinode-405494                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kindnet-8t86n                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-multinode-405494             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-multinode-405494    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-gg8kv                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-multinode-405494             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 13m                    kube-proxy       
	  Normal  Starting                 3m39s                  kube-proxy       
	  Normal  Starting                 14m                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  14m (x8 over 14m)      kubelet          Node multinode-405494 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m (x8 over 14m)      kubelet          Node multinode-405494 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m (x7 over 14m)      kubelet          Node multinode-405494 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  14m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     14m                    kubelet          Node multinode-405494 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  14m                    kubelet          Node multinode-405494 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m                    kubelet          Node multinode-405494 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  14m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 14m                    kubelet          Starting kubelet.
	  Normal  RegisteredNode           13m                    node-controller  Node multinode-405494 event: Registered Node multinode-405494 in Controller
	  Normal  NodeReady                13m                    kubelet          Node multinode-405494 status is now: NodeReady
	  Normal  Starting                 3m49s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m49s (x8 over 3m49s)  kubelet          Node multinode-405494 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m49s (x8 over 3m49s)  kubelet          Node multinode-405494 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m49s (x7 over 3m49s)  kubelet          Node multinode-405494 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m49s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m30s                  node-controller  Node multinode-405494 event: Registered Node multinode-405494 in Controller
	
	
	Name:               multinode-405494-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-405494-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6e8fa5f64d0e7272be43ff25ed3826261f0a2578
	                    minikube.k8s.io/name=multinode-405494
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_01_16T03_11_07_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Jan 2024 03:09:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-405494-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Jan 2024 03:11:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Jan 2024 03:09:27 +0000   Tue, 16 Jan 2024 03:09:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Jan 2024 03:09:27 +0000   Tue, 16 Jan 2024 03:09:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Jan 2024 03:09:27 +0000   Tue, 16 Jan 2024 03:09:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Jan 2024 03:09:27 +0000   Tue, 16 Jan 2024 03:09:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.32
	  Hostname:    multinode-405494-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 c7373742aa884a86a0dc787cc32f209c
	  System UUID:                c7373742-aa88-4a86-a0dc-787cc32f209c
	  Boot ID:                    45dcb7fd-8b98-4f9a-94df-8cc9fd5728df
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-px5sw    0 (0%)        0 (0%)      0 (0%)           0 (0%)         7s
	  kube-system                 kindnet-ddd2h               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-proxy-m46rb            0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From        Message
	  ----     ------                   ----                   ----        -------
	  Normal   Starting                 13m                    kube-proxy  
	  Normal   Starting                 102s                   kube-proxy  
	  Normal   NodeHasSufficientMemory  13m (x5 over 13m)      kubelet     Node multinode-405494-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m (x5 over 13m)      kubelet     Node multinode-405494-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13m (x5 over 13m)      kubelet     Node multinode-405494-m02 status is now: NodeHasSufficientPID
	  Normal   NodeReady                13m                    kubelet     Node multinode-405494-m02 status is now: NodeReady
	  Normal   NodeNotReady             2m45s                  kubelet     Node multinode-405494-m02 status is now: NodeNotReady
	  Warning  ContainerGCFailed        2m11s (x2 over 3m11s)  kubelet     rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   Starting                 105s                   kubelet     Starting kubelet.
	  Normal   NodeHasSufficientMemory  104s (x2 over 104s)    kubelet     Node multinode-405494-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    104s (x2 over 104s)    kubelet     Node multinode-405494-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     104s (x2 over 104s)    kubelet     Node multinode-405494-m02 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  104s                   kubelet     Updated Node Allocatable limit across pods
	  Normal   NodeReady                104s                   kubelet     Node multinode-405494-m02 status is now: NodeReady
	
	
	Name:               multinode-405494-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-405494-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6e8fa5f64d0e7272be43ff25ed3826261f0a2578
	                    minikube.k8s.io/name=multinode-405494
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_01_16T03_11_07_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Jan 2024 03:11:06 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:              Failed to get lease: leases.coordination.k8s.io "multinode-405494-m03" not found
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Jan 2024 03:11:07 +0000   Tue, 16 Jan 2024 03:11:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Jan 2024 03:11:07 +0000   Tue, 16 Jan 2024 03:11:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Jan 2024 03:11:07 +0000   Tue, 16 Jan 2024 03:11:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Jan 2024 03:11:07 +0000   Tue, 16 Jan 2024 03:11:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.182
	  Hostname:    multinode-405494-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 4f43038d439c4a9783a34df09b2736bd
	  System UUID:                4f43038d-439c-4a97-83a3-4df09b2736bd
	  Boot ID:                    59f1b2dc-4f25-4eac-85b8-82e27ec9e48f
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-ltn29    0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 kindnet-6zhtt               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-proxy-ghscp            0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                 From             Message
	  ----     ------                   ----                ----             -------
	  Normal   Starting                 11m                 kube-proxy       
	  Normal   Starting                 12m                 kube-proxy       
	  Normal   Starting                 3s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  12m (x5 over 12m)   kubelet          Node multinode-405494-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m (x5 over 12m)   kubelet          Node multinode-405494-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x5 over 12m)   kubelet          Node multinode-405494-m03 status is now: NodeHasSufficientPID
	  Normal   NodeReady                12m                 kubelet          Node multinode-405494-m03 status is now: NodeReady
	  Normal   NodeAllocatableEnforced  11m                 kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasNoDiskPressure    11m (x2 over 11m)   kubelet          Node multinode-405494-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x2 over 11m)   kubelet          Node multinode-405494-m03 status is now: NodeHasSufficientPID
	  Normal   Starting                 11m                 kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  11m (x2 over 11m)   kubelet          Node multinode-405494-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeReady                11m                 kubelet          Node multinode-405494-m03 status is now: NodeReady
	  Normal   NodeNotReady             74s                 kubelet          Node multinode-405494-m03 status is now: NodeNotReady
	  Warning  ContainerGCFailed        40s (x2 over 100s)  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   Starting                 5s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  5s (x2 over 5s)     kubelet          Node multinode-405494-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5s (x2 over 5s)     kubelet          Node multinode-405494-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5s (x2 over 5s)     kubelet          Node multinode-405494-m03 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  5s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeReady                4s                  kubelet          Node multinode-405494-m03 status is now: NodeReady
	  Normal   RegisteredNode           0s                  node-controller  Node multinode-405494-m03 event: Registered Node multinode-405494-m03 in Controller
	
	
	==> dmesg <==
	[Jan16 03:06] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.069646] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.461541] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.441970] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.153705] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000001] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.499247] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000046] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[Jan16 03:07] systemd-fstab-generator[635]: Ignoring "noauto" for root device
	[  +0.114806] systemd-fstab-generator[646]: Ignoring "noauto" for root device
	[  +0.161235] systemd-fstab-generator[659]: Ignoring "noauto" for root device
	[  +0.124463] systemd-fstab-generator[670]: Ignoring "noauto" for root device
	[  +0.246727] systemd-fstab-generator[695]: Ignoring "noauto" for root device
	[ +17.204296] systemd-fstab-generator[910]: Ignoring "noauto" for root device
	[ +18.795090] kauditd_printk_skb: 18 callbacks suppressed
	
	
	==> etcd [154192bc6973eee8b37ad510934b4a2d49209234dbf4bc0d79089517b8d264b1] <==
	{"level":"info","ts":"2024-01-16T03:07:25.915782Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-01-16T03:07:25.91579Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-01-16T03:07:25.921436Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-01-16T03:07:25.924875Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"d9e0442f914d2c09","initial-advertise-peer-urls":["https://192.168.39.70:2380"],"listen-peer-urls":["https://192.168.39.70:2380"],"advertise-client-urls":["https://192.168.39.70:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.70:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-01-16T03:07:25.924946Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-01-16T03:07:25.921778Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d9e0442f914d2c09 switched to configuration voters=(15699623272105454601)"}
	{"level":"info","ts":"2024-01-16T03:07:25.925079Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"b9ca18127a3e3182","local-member-id":"d9e0442f914d2c09","added-peer-id":"d9e0442f914d2c09","added-peer-peer-urls":["https://192.168.39.70:2380"]}
	{"level":"info","ts":"2024-01-16T03:07:25.925182Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"b9ca18127a3e3182","local-member-id":"d9e0442f914d2c09","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-16T03:07:25.925221Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-16T03:07:25.921909Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.70:2380"}
	{"level":"info","ts":"2024-01-16T03:07:25.932778Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.70:2380"}
	{"level":"info","ts":"2024-01-16T03:07:27.336733Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d9e0442f914d2c09 is starting a new election at term 2"}
	{"level":"info","ts":"2024-01-16T03:07:27.336787Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d9e0442f914d2c09 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-01-16T03:07:27.33683Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d9e0442f914d2c09 received MsgPreVoteResp from d9e0442f914d2c09 at term 2"}
	{"level":"info","ts":"2024-01-16T03:07:27.336857Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d9e0442f914d2c09 became candidate at term 3"}
	{"level":"info","ts":"2024-01-16T03:07:27.336862Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d9e0442f914d2c09 received MsgVoteResp from d9e0442f914d2c09 at term 3"}
	{"level":"info","ts":"2024-01-16T03:07:27.336874Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d9e0442f914d2c09 became leader at term 3"}
	{"level":"info","ts":"2024-01-16T03:07:27.336916Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: d9e0442f914d2c09 elected leader d9e0442f914d2c09 at term 3"}
	{"level":"info","ts":"2024-01-16T03:07:27.340363Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-16T03:07:27.341277Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"d9e0442f914d2c09","local-member-attributes":"{Name:multinode-405494 ClientURLs:[https://192.168.39.70:2379]}","request-path":"/0/members/d9e0442f914d2c09/attributes","cluster-id":"b9ca18127a3e3182","publish-timeout":"7s"}
	{"level":"info","ts":"2024-01-16T03:07:27.341496Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-16T03:07:27.342447Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.70:2379"}
	{"level":"info","ts":"2024-01-16T03:07:27.343044Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-01-16T03:07:27.343231Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-01-16T03:07:27.343274Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 03:11:11 up 4 min,  0 users,  load average: 0.20, 0.19, 0.09
	Linux multinode-405494 5.10.57 #1 SMP Thu Dec 28 22:04:21 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kindnet [ca11c0a28ef1e201a42e7668714d90cf27698f74c492e680ea24ea2ad438728b] <==
	I0116 03:10:25.269971       1 main.go:250] Node multinode-405494-m03 has CIDR [10.244.3.0/24] 
	I0116 03:10:35.281362       1 main.go:223] Handling node with IPs: map[192.168.39.70:{}]
	I0116 03:10:35.281419       1 main.go:227] handling current node
	I0116 03:10:35.281443       1 main.go:223] Handling node with IPs: map[192.168.39.32:{}]
	I0116 03:10:35.281449       1 main.go:250] Node multinode-405494-m02 has CIDR [10.244.1.0/24] 
	I0116 03:10:35.281627       1 main.go:223] Handling node with IPs: map[192.168.39.182:{}]
	I0116 03:10:35.281664       1 main.go:250] Node multinode-405494-m03 has CIDR [10.244.3.0/24] 
	I0116 03:10:45.287245       1 main.go:223] Handling node with IPs: map[192.168.39.70:{}]
	I0116 03:10:45.287348       1 main.go:227] handling current node
	I0116 03:10:45.287377       1 main.go:223] Handling node with IPs: map[192.168.39.32:{}]
	I0116 03:10:45.287416       1 main.go:250] Node multinode-405494-m02 has CIDR [10.244.1.0/24] 
	I0116 03:10:45.287528       1 main.go:223] Handling node with IPs: map[192.168.39.182:{}]
	I0116 03:10:45.287549       1 main.go:250] Node multinode-405494-m03 has CIDR [10.244.3.0/24] 
	I0116 03:10:55.293242       1 main.go:223] Handling node with IPs: map[192.168.39.70:{}]
	I0116 03:10:55.293376       1 main.go:227] handling current node
	I0116 03:10:55.293407       1 main.go:223] Handling node with IPs: map[192.168.39.32:{}]
	I0116 03:10:55.293427       1 main.go:250] Node multinode-405494-m02 has CIDR [10.244.1.0/24] 
	I0116 03:10:55.293651       1 main.go:223] Handling node with IPs: map[192.168.39.182:{}]
	I0116 03:10:55.293695       1 main.go:250] Node multinode-405494-m03 has CIDR [10.244.3.0/24] 
	I0116 03:11:05.300966       1 main.go:223] Handling node with IPs: map[192.168.39.70:{}]
	I0116 03:11:05.301052       1 main.go:227] handling current node
	I0116 03:11:05.301075       1 main.go:223] Handling node with IPs: map[192.168.39.32:{}]
	I0116 03:11:05.301092       1 main.go:250] Node multinode-405494-m02 has CIDR [10.244.1.0/24] 
	I0116 03:11:05.301206       1 main.go:223] Handling node with IPs: map[192.168.39.182:{}]
	I0116 03:11:05.301226       1 main.go:250] Node multinode-405494-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [394700e51984706e2299da18df128406493acd293e75039969f294812deca71c] <==
	I0116 03:07:28.732133       1 controller.go:116] Starting legacy_token_tracking_controller
	I0116 03:07:28.743034       1 shared_informer.go:311] Waiting for caches to sync for configmaps
	I0116 03:07:28.727764       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0116 03:07:28.731001       1 dynamic_serving_content.go:132] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0116 03:07:28.885061       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0116 03:07:28.933517       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0116 03:07:28.933697       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0116 03:07:28.933791       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0116 03:07:28.934186       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0116 03:07:28.934314       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0116 03:07:28.935649       1 aggregator.go:166] initial CRD sync complete...
	I0116 03:07:28.935789       1 autoregister_controller.go:141] Starting autoregister controller
	I0116 03:07:28.935816       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0116 03:07:28.935840       1 cache.go:39] Caches are synced for autoregister controller
	I0116 03:07:28.940441       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0116 03:07:28.943211       1 shared_informer.go:318] Caches are synced for configmaps
	I0116 03:07:28.976782       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0116 03:07:29.738326       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0116 03:07:31.529340       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0116 03:07:31.814310       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0116 03:07:31.827515       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0116 03:07:31.979772       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0116 03:07:31.997416       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0116 03:07:41.248184       1 controller.go:624] quota admission added evaluator for: endpoints
	I0116 03:07:41.298752       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [bf296b7da0081d50085828050c9dbf1aefffc157009c84b0baa48f7e2c1ffda9] <==
	I0116 03:09:27.073155       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-405494-m02\" does not exist"
	I0116 03:09:27.074182       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd-pkhcp" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5bc68d56bd-pkhcp"
	I0116 03:09:27.094409       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-405494-m02" podCIDRs=["10.244.1.0/24"]
	I0116 03:09:27.220554       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-405494-m02"
	I0116 03:09:27.999287       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="97.56µs"
	I0116 03:09:41.268100       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="85.426µs"
	I0116 03:09:41.828676       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="160.965µs"
	I0116 03:09:41.837899       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="117.928µs"
	I0116 03:09:57.739434       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-405494-m02"
	I0116 03:11:04.021803       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-px5sw"
	I0116 03:11:04.045870       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="39.409406ms"
	I0116 03:11:04.072083       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="26.05105ms"
	I0116 03:11:04.072261       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="62.521µs"
	I0116 03:11:04.072882       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="110.65µs"
	I0116 03:11:06.031091       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-405494-m02"
	I0116 03:11:06.117165       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="11.053804ms"
	I0116 03:11:06.118343       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="284.045µs"
	I0116 03:11:06.268360       1 event.go:307] "Event occurred" object="multinode-405494-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RemovingNode" message="Node multinode-405494-m03 event: Removing Node multinode-405494-m03 from Controller"
	I0116 03:11:06.784848       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-405494-m02"
	I0116 03:11:06.787295       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-405494-m03\" does not exist"
	I0116 03:11:06.787758       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd-ltn29" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5bc68d56bd-ltn29"
	I0116 03:11:06.809739       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-405494-m03" podCIDRs=["10.244.2.0/24"]
	I0116 03:11:07.139755       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-405494-m02"
	I0116 03:11:07.756764       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="53.86µs"
	I0116 03:11:11.269238       1 event.go:307] "Event occurred" object="multinode-405494-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-405494-m03 event: Registered Node multinode-405494-m03 in Controller"
	
	
	==> kube-proxy [734ada0e6e80ae5663b15aa948979e5f04c4d893a8d18393c3981d65b1422fa3] <==
	I0116 03:07:31.893309       1 server_others.go:69] "Using iptables proxy"
	I0116 03:07:31.943123       1 node.go:141] Successfully retrieved node IP: 192.168.39.70
	I0116 03:07:32.114071       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0116 03:07:32.114175       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0116 03:07:32.120060       1 server_others.go:152] "Using iptables Proxier"
	I0116 03:07:32.120221       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0116 03:07:32.120758       1 server.go:846] "Version info" version="v1.28.4"
	I0116 03:07:32.120860       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0116 03:07:32.125999       1 config.go:188] "Starting service config controller"
	I0116 03:07:32.126054       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0116 03:07:32.126095       1 config.go:97] "Starting endpoint slice config controller"
	I0116 03:07:32.126126       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0116 03:07:32.127979       1 config.go:315] "Starting node config controller"
	I0116 03:07:32.128023       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0116 03:07:32.226439       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0116 03:07:32.226467       1 shared_informer.go:318] Caches are synced for service config
	I0116 03:07:32.228101       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [373beef48651fc180258aafdd447b85336f1af385531b8b6a6bf6c44e02d1222] <==
	I0116 03:07:26.486214       1 serving.go:348] Generated self-signed cert in-memory
	W0116 03:07:28.829139       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0116 03:07:28.829187       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0116 03:07:28.829199       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0116 03:07:28.829205       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0116 03:07:28.891495       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I0116 03:07:28.891705       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0116 03:07:28.893190       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0116 03:07:28.893240       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0116 03:07:28.894054       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0116 03:07:28.894117       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0116 03:07:28.994653       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Tue 2024-01-16 03:06:52 UTC, ends at Tue 2024-01-16 03:11:12 UTC. --
	Jan 16 03:07:33 multinode-405494 kubelet[916]: E0116 03:07:33.495376     916 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/73a7a6a1-28ed-452e-8073-025f2e1289be-kube-api-access-8nvjc podName:73a7a6a1-28ed-452e-8073-025f2e1289be nodeName:}" failed. No retries permitted until 2024-01-16 03:07:37.495361906 +0000 UTC m=+14.920256175 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-8nvjc" (UniqueName: "kubernetes.io/projected/73a7a6a1-28ed-452e-8073-025f2e1289be-kube-api-access-8nvjc") pod "busybox-5bc68d56bd-r9bv6" (UID: "73a7a6a1-28ed-452e-8073-025f2e1289be") : object "default"/"kube-root-ca.crt" not registered
	Jan 16 03:07:33 multinode-405494 kubelet[916]: E0116 03:07:33.862541     916 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-5dd5756b68-vwqvk" podUID="096151e2-c59c-4dcf-bd29-2029901902c9"
	Jan 16 03:07:33 multinode-405494 kubelet[916]: E0116 03:07:33.862790     916 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="default/busybox-5bc68d56bd-r9bv6" podUID="73a7a6a1-28ed-452e-8073-025f2e1289be"
	Jan 16 03:07:35 multinode-405494 kubelet[916]: E0116 03:07:35.862887     916 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="default/busybox-5bc68d56bd-r9bv6" podUID="73a7a6a1-28ed-452e-8073-025f2e1289be"
	Jan 16 03:07:35 multinode-405494 kubelet[916]: E0116 03:07:35.862985     916 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-5dd5756b68-vwqvk" podUID="096151e2-c59c-4dcf-bd29-2029901902c9"
	Jan 16 03:07:37 multinode-405494 kubelet[916]: E0116 03:07:37.428389     916 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jan 16 03:07:37 multinode-405494 kubelet[916]: E0116 03:07:37.428503     916 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/096151e2-c59c-4dcf-bd29-2029901902c9-config-volume podName:096151e2-c59c-4dcf-bd29-2029901902c9 nodeName:}" failed. No retries permitted until 2024-01-16 03:07:45.428484954 +0000 UTC m=+22.853379210 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/096151e2-c59c-4dcf-bd29-2029901902c9-config-volume") pod "coredns-5dd5756b68-vwqvk" (UID: "096151e2-c59c-4dcf-bd29-2029901902c9") : object "kube-system"/"coredns" not registered
	Jan 16 03:07:37 multinode-405494 kubelet[916]: E0116 03:07:37.528999     916 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	Jan 16 03:07:37 multinode-405494 kubelet[916]: E0116 03:07:37.529068     916 projected.go:198] Error preparing data for projected volume kube-api-access-8nvjc for pod default/busybox-5bc68d56bd-r9bv6: object "default"/"kube-root-ca.crt" not registered
	Jan 16 03:07:37 multinode-405494 kubelet[916]: E0116 03:07:37.529146     916 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/73a7a6a1-28ed-452e-8073-025f2e1289be-kube-api-access-8nvjc podName:73a7a6a1-28ed-452e-8073-025f2e1289be nodeName:}" failed. No retries permitted until 2024-01-16 03:07:45.529130202 +0000 UTC m=+22.954024458 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-8nvjc" (UniqueName: "kubernetes.io/projected/73a7a6a1-28ed-452e-8073-025f2e1289be-kube-api-access-8nvjc") pod "busybox-5bc68d56bd-r9bv6" (UID: "73a7a6a1-28ed-452e-8073-025f2e1289be") : object "default"/"kube-root-ca.crt" not registered
	Jan 16 03:07:37 multinode-405494 kubelet[916]: E0116 03:07:37.861997     916 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="default/busybox-5bc68d56bd-r9bv6" podUID="73a7a6a1-28ed-452e-8073-025f2e1289be"
	Jan 16 03:07:37 multinode-405494 kubelet[916]: E0116 03:07:37.862071     916 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-5dd5756b68-vwqvk" podUID="096151e2-c59c-4dcf-bd29-2029901902c9"
	Jan 16 03:07:48 multinode-405494 kubelet[916]: I0116 03:07:48.862472     916 scope.go:117] "RemoveContainer" containerID="78fcf57887e8c06f78c935ee9136e99528c9ab92defe9cdf0b9d36b3bd4cf12c"
	Jan 16 03:08:22 multinode-405494 kubelet[916]: E0116 03:08:22.894791     916 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 16 03:08:22 multinode-405494 kubelet[916]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 16 03:08:22 multinode-405494 kubelet[916]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 16 03:08:22 multinode-405494 kubelet[916]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 16 03:09:22 multinode-405494 kubelet[916]: E0116 03:09:22.897271     916 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 16 03:09:22 multinode-405494 kubelet[916]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 16 03:09:22 multinode-405494 kubelet[916]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 16 03:09:22 multinode-405494 kubelet[916]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 16 03:10:22 multinode-405494 kubelet[916]: E0116 03:10:22.895914     916 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 16 03:10:22 multinode-405494 kubelet[916]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 16 03:10:22 multinode-405494 kubelet[916]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 16 03:10:22 multinode-405494 kubelet[916]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-405494 -n multinode-405494
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-405494 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (690.76s)
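Separately from the restart failure itself, the kubelet journal above logs an ip6tables canary error every minute because the guest kernel exposes no ip6tables nat table. A minimal sketch of how one might confirm that from the host, reusing the ssh form this report already uses (the module name ip6table_nat is the standard one and is an assumption here, not taken from the log):

	# Should reproduce "can't initialize ip6tables table `nat'" if the table really is missing.
	out/minikube-linux-amd64 -p multinode-405494 ssh "sudo ip6tables -t nat -L -n"

	# List loaded kernel modules in the guest; ip6table_nat is expected to be absent.
	out/minikube-linux-amd64 -p multinode-405494 ssh "lsmod"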

                                                
                                    
TestMultiNode/serial/StopMultiNode (143.73s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:342: (dbg) Run:  out/minikube-linux-amd64 -p multinode-405494 stop
E0116 03:11:49.160850  475478 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/functional-193417/client.crt: no such file or directory
E0116 03:12:19.246396  475478 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/addons-690916/client.crt: no such file or directory
multinode_test.go:342: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-405494 stop: exit status 82 (2m1.71737651s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-405494"  ...
	* Stopping node "multinode-405494"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:344: node stop returned an error. args "out/minikube-linux-amd64 -p multinode-405494 stop": exit status 82
multinode_test.go:348: (dbg) Run:  out/minikube-linux-amd64 -p multinode-405494 status
multinode_test.go:348: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-405494 status: exit status 3 (18.81877638s)

                                                
                                                
-- stdout --
	multinode-405494
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	multinode-405494-m02
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0116 03:13:35.132387  493382 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.70:22: connect: no route to host
	E0116 03:13:35.132431  493382 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.70:22: connect: no route to host

                                                
                                                
** /stderr **
multinode_test.go:351: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-405494 status" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-405494 -n multinode-405494
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p multinode-405494 -n multinode-405494: exit status 3 (3.190884244s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0116 03:13:38.492469  493483 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.70:22: connect: no route to host
	E0116 03:13:38.492492  493483 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.70:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "multinode-405494" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiNode/serial/StopMultiNode (143.73s)
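Exit status 82 (GUEST_STOP_TIMEOUT) means the kvm2 driver gave up while libvirt still reported the VM as "Running". A rough diagnostic sketch using virsh directly, on the assumption that the libvirt domains carry the profile/node names (the DBG lines in the TestPreload log below show domains named after the profile); `virsh destroy` is a hard power-off and only a last resort:

	# State libvirt reports for the control-plane and worker domains.
	virsh domstate multinode-405494
	virsh domstate multinode-405494-m02

	# All domains known to libvirt, to spot anything left behind.
	virsh list --all

	# Last resort: force the stuck domain off (equivalent to pulling the power).
	virsh destroy multinode-405494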

                                                
                                    
TestPreload (200.07s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-281181 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E0116 03:22:19.246763  475478 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/addons-690916/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-281181 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (1m53.887571583s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-281181 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-281181 image pull gcr.io/k8s-minikube/busybox: (1.162834881s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-281181
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-281181: (7.115865013s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-281181 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
E0116 03:24:18.183143  475478 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/ingress-addon-legacy-873808/client.crt: no such file or directory
E0116 03:24:52.210309  475478 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/functional-193417/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-281181 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (1m14.654232001s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-281181 image list
preload_test.go:76: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.7
	registry.k8s.io/kube-scheduler:v1.24.4
	registry.k8s.io/kube-proxy:v1.24.4
	registry.k8s.io/kube-controller-manager:v1.24.4
	registry.k8s.io/kube-apiserver:v1.24.4
	registry.k8s.io/etcd:3.5.3-0
	registry.k8s.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/kube-scheduler:v1.24.4
	k8s.gcr.io/kube-proxy:v1.24.4
	k8s.gcr.io/kube-controller-manager:v1.24.4
	k8s.gcr.io/kube-apiserver:v1.24.4
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20220726-ed811e41

                                                
                                                
-- /stdout --
panic.go:523: *** TestPreload FAILED at 2024-01-16 03:25:17.760583708 +0000 UTC m=+3061.698806329
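The assertion fails because gcr.io/k8s-minikube/busybox, pulled before the stop, is no longer listed after the cluster is restarted with the v1.24.4 preload. A minimal by-hand sketch of the same sequence, built only from the commands and flags recorded above (the profile name is the one from this run):

	# 1. Start a cluster without a preload for an older Kubernetes version.
	out/minikube-linux-amd64 start -p test-preload-281181 --memory=2200 --wait=true \
	  --preload=false --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.24.4

	# 2. Pull an extra image, then stop and restart (the restart downloads the v1.24.4 preload).
	out/minikube-linux-amd64 -p test-preload-281181 image pull gcr.io/k8s-minikube/busybox
	out/minikube-linux-amd64 stop -p test-preload-281181
	out/minikube-linux-amd64 start -p test-preload-281181 --memory=2200 --wait=true --driver=kvm2 --container-runtime=crio

	# 3. The test expects the previously pulled image to still appear here.
	out/minikube-linux-amd64 -p test-preload-281181 image list | grep busybox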
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-281181 -n test-preload-281181
helpers_test.go:244: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-281181 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p test-preload-281181 logs -n 25: (1.246521214s)
helpers_test.go:252: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-405494 ssh -n                                                                 | multinode-405494     | jenkins | v1.32.0 | 16 Jan 24 02:59 UTC | 16 Jan 24 02:59 UTC |
	|         | multinode-405494-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-405494 ssh -n multinode-405494 sudo cat                                       | multinode-405494     | jenkins | v1.32.0 | 16 Jan 24 02:59 UTC | 16 Jan 24 02:59 UTC |
	|         | /home/docker/cp-test_multinode-405494-m03_multinode-405494.txt                          |                      |         |         |                     |                     |
	| cp      | multinode-405494 cp multinode-405494-m03:/home/docker/cp-test.txt                       | multinode-405494     | jenkins | v1.32.0 | 16 Jan 24 02:59 UTC | 16 Jan 24 02:59 UTC |
	|         | multinode-405494-m02:/home/docker/cp-test_multinode-405494-m03_multinode-405494-m02.txt |                      |         |         |                     |                     |
	| ssh     | multinode-405494 ssh -n                                                                 | multinode-405494     | jenkins | v1.32.0 | 16 Jan 24 02:59 UTC | 16 Jan 24 02:59 UTC |
	|         | multinode-405494-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-405494 ssh -n multinode-405494-m02 sudo cat                                   | multinode-405494     | jenkins | v1.32.0 | 16 Jan 24 02:59 UTC | 16 Jan 24 02:59 UTC |
	|         | /home/docker/cp-test_multinode-405494-m03_multinode-405494-m02.txt                      |                      |         |         |                     |                     |
	| node    | multinode-405494 node stop m03                                                          | multinode-405494     | jenkins | v1.32.0 | 16 Jan 24 02:59 UTC | 16 Jan 24 02:59 UTC |
	| node    | multinode-405494 node start                                                             | multinode-405494     | jenkins | v1.32.0 | 16 Jan 24 02:59 UTC | 16 Jan 24 02:59 UTC |
	|         | m03 --alsologtostderr                                                                   |                      |         |         |                     |                     |
	| node    | list -p multinode-405494                                                                | multinode-405494     | jenkins | v1.32.0 | 16 Jan 24 02:59 UTC |                     |
	| stop    | -p multinode-405494                                                                     | multinode-405494     | jenkins | v1.32.0 | 16 Jan 24 02:59 UTC |                     |
	| start   | -p multinode-405494                                                                     | multinode-405494     | jenkins | v1.32.0 | 16 Jan 24 03:01 UTC | 16 Jan 24 03:11 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	| node    | list -p multinode-405494                                                                | multinode-405494     | jenkins | v1.32.0 | 16 Jan 24 03:11 UTC |                     |
	| node    | multinode-405494 node delete                                                            | multinode-405494     | jenkins | v1.32.0 | 16 Jan 24 03:11 UTC | 16 Jan 24 03:11 UTC |
	|         | m03                                                                                     |                      |         |         |                     |                     |
	| stop    | multinode-405494 stop                                                                   | multinode-405494     | jenkins | v1.32.0 | 16 Jan 24 03:11 UTC |                     |
	| start   | -p multinode-405494                                                                     | multinode-405494     | jenkins | v1.32.0 | 16 Jan 24 03:13 UTC | 16 Jan 24 03:21 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | list -p multinode-405494                                                                | multinode-405494     | jenkins | v1.32.0 | 16 Jan 24 03:21 UTC |                     |
	| start   | -p multinode-405494-m02                                                                 | multinode-405494-m02 | jenkins | v1.32.0 | 16 Jan 24 03:21 UTC |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| start   | -p multinode-405494-m03                                                                 | multinode-405494-m03 | jenkins | v1.32.0 | 16 Jan 24 03:21 UTC | 16 Jan 24 03:21 UTC |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | add -p multinode-405494                                                                 | multinode-405494     | jenkins | v1.32.0 | 16 Jan 24 03:21 UTC |                     |
	| delete  | -p multinode-405494-m03                                                                 | multinode-405494-m03 | jenkins | v1.32.0 | 16 Jan 24 03:21 UTC | 16 Jan 24 03:21 UTC |
	| delete  | -p multinode-405494                                                                     | multinode-405494     | jenkins | v1.32.0 | 16 Jan 24 03:21 UTC | 16 Jan 24 03:22 UTC |
	| start   | -p test-preload-281181                                                                  | test-preload-281181  | jenkins | v1.32.0 | 16 Jan 24 03:22 UTC | 16 Jan 24 03:23 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                                                           |                      |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                                                           |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                               |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.4                                                            |                      |         |         |                     |                     |
	| image   | test-preload-281181 image pull                                                          | test-preload-281181  | jenkins | v1.32.0 | 16 Jan 24 03:23 UTC | 16 Jan 24 03:23 UTC |
	|         | gcr.io/k8s-minikube/busybox                                                             |                      |         |         |                     |                     |
	| stop    | -p test-preload-281181                                                                  | test-preload-281181  | jenkins | v1.32.0 | 16 Jan 24 03:23 UTC | 16 Jan 24 03:24 UTC |
	| start   | -p test-preload-281181                                                                  | test-preload-281181  | jenkins | v1.32.0 | 16 Jan 24 03:24 UTC | 16 Jan 24 03:25 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                  |                      |         |         |                     |                     |
	|         | --wait=true --driver=kvm2                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| image   | test-preload-281181 image list                                                          | test-preload-281181  | jenkins | v1.32.0 | 16 Jan 24 03:25 UTC | 16 Jan 24 03:25 UTC |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/16 03:24:02
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0116 03:24:02.908018  496516 out.go:296] Setting OutFile to fd 1 ...
	I0116 03:24:02.908235  496516 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 03:24:02.908244  496516 out.go:309] Setting ErrFile to fd 2...
	I0116 03:24:02.908248  496516 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 03:24:02.908421  496516 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17965-468241/.minikube/bin
	I0116 03:24:02.908951  496516 out.go:303] Setting JSON to false
	I0116 03:24:02.909995  496516 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":14795,"bootTime":1705360648,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1048-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0116 03:24:02.910063  496516 start.go:138] virtualization: kvm guest
	I0116 03:24:02.912724  496516 out.go:177] * [test-preload-281181] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0116 03:24:02.914377  496516 notify.go:220] Checking for updates...
	I0116 03:24:02.914399  496516 out.go:177]   - MINIKUBE_LOCATION=17965
	I0116 03:24:02.916172  496516 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0116 03:24:02.917895  496516 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17965-468241/kubeconfig
	I0116 03:24:02.919551  496516 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17965-468241/.minikube
	I0116 03:24:02.921140  496516 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0116 03:24:02.922637  496516 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0116 03:24:02.924483  496516 config.go:182] Loaded profile config "test-preload-281181": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0116 03:24:02.924991  496516 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:24:02.925056  496516 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:24:02.939632  496516 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39603
	I0116 03:24:02.940146  496516 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:24:02.940952  496516 main.go:141] libmachine: Using API Version  1
	I0116 03:24:02.940975  496516 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:24:02.941357  496516 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:24:02.941598  496516 main.go:141] libmachine: (test-preload-281181) Calling .DriverName
	I0116 03:24:02.944014  496516 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0116 03:24:02.945544  496516 driver.go:392] Setting default libvirt URI to qemu:///system
	I0116 03:24:02.945853  496516 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:24:02.945891  496516 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:24:02.960704  496516 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40137
	I0116 03:24:02.961142  496516 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:24:02.961633  496516 main.go:141] libmachine: Using API Version  1
	I0116 03:24:02.961656  496516 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:24:02.962022  496516 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:24:02.962242  496516 main.go:141] libmachine: (test-preload-281181) Calling .DriverName
	I0116 03:24:02.999475  496516 out.go:177] * Using the kvm2 driver based on existing profile
	I0116 03:24:03.001127  496516 start.go:298] selected driver: kvm2
	I0116 03:24:03.001152  496516 start.go:902] validating driver "kvm2" against &{Name:test-preload-281181 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-281181 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.102 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host M
ount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 03:24:03.001262  496516 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0116 03:24:03.002034  496516 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0116 03:24:03.002113  496516 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17965-468241/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0116 03:24:03.017620  496516 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0116 03:24:03.017951  496516 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0116 03:24:03.018015  496516 cni.go:84] Creating CNI manager for ""
	I0116 03:24:03.018029  496516 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 03:24:03.018041  496516 start_flags.go:321] config:
	{Name:test-preload-281181 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-281181 Namespace:defaul
t APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.102 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 03:24:03.018248  496516 iso.go:125] acquiring lock: {Name:mk2f3231f0eeeb23816ac363851489181398d1c6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0116 03:24:03.021689  496516 out.go:177] * Starting control plane node test-preload-281181 in cluster test-preload-281181
	I0116 03:24:03.023378  496516 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0116 03:24:03.042414  496516 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0116 03:24:03.042457  496516 cache.go:56] Caching tarball of preloaded images
	I0116 03:24:03.042650  496516 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0116 03:24:03.044829  496516 out.go:177] * Downloading Kubernetes v1.24.4 preload ...
	I0116 03:24:03.046117  496516 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0116 03:24:03.077486  496516 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b2ee0ab83ed99f9e7ff71cb0cf27e8f9 -> /home/jenkins/minikube-integration/17965-468241/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0116 03:24:06.893935  496516 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0116 03:24:06.894048  496516 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17965-468241/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0116 03:24:07.820322  496516 cache.go:59] Finished verifying existence of preloaded tar for  v1.24.4 on crio
	I0116 03:24:07.820502  496516 profile.go:148] Saving config to /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/test-preload-281181/config.json ...
	I0116 03:24:07.820739  496516 start.go:365] acquiring machines lock for test-preload-281181: {Name:mk901e5fae8c1a578d989b520053c709bdbdcf06 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0116 03:24:07.820817  496516 start.go:369] acquired machines lock for "test-preload-281181" in 52.614µs
	I0116 03:24:07.820835  496516 start.go:96] Skipping create...Using existing machine configuration
	I0116 03:24:07.820853  496516 fix.go:54] fixHost starting: 
	I0116 03:24:07.821127  496516 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:24:07.821171  496516 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:24:07.835891  496516 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43975
	I0116 03:24:07.836400  496516 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:24:07.836950  496516 main.go:141] libmachine: Using API Version  1
	I0116 03:24:07.836981  496516 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:24:07.837340  496516 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:24:07.837590  496516 main.go:141] libmachine: (test-preload-281181) Calling .DriverName
	I0116 03:24:07.837796  496516 main.go:141] libmachine: (test-preload-281181) Calling .GetState
	I0116 03:24:07.839634  496516 fix.go:102] recreateIfNeeded on test-preload-281181: state=Stopped err=<nil>
	I0116 03:24:07.839689  496516 main.go:141] libmachine: (test-preload-281181) Calling .DriverName
	W0116 03:24:07.839883  496516 fix.go:128] unexpected machine state, will restart: <nil>
	I0116 03:24:07.842278  496516 out.go:177] * Restarting existing kvm2 VM for "test-preload-281181" ...
	I0116 03:24:07.844109  496516 main.go:141] libmachine: (test-preload-281181) Calling .Start
	I0116 03:24:07.844384  496516 main.go:141] libmachine: (test-preload-281181) Ensuring networks are active...
	I0116 03:24:07.845281  496516 main.go:141] libmachine: (test-preload-281181) Ensuring network default is active
	I0116 03:24:07.845672  496516 main.go:141] libmachine: (test-preload-281181) Ensuring network mk-test-preload-281181 is active
	I0116 03:24:07.846001  496516 main.go:141] libmachine: (test-preload-281181) Getting domain xml...
	I0116 03:24:07.846896  496516 main.go:141] libmachine: (test-preload-281181) Creating domain...
	I0116 03:24:08.184555  496516 main.go:141] libmachine: (test-preload-281181) Waiting to get IP...
	I0116 03:24:08.185708  496516 main.go:141] libmachine: (test-preload-281181) DBG | domain test-preload-281181 has defined MAC address 52:54:00:66:33:72 in network mk-test-preload-281181
	I0116 03:24:08.186185  496516 main.go:141] libmachine: (test-preload-281181) DBG | unable to find current IP address of domain test-preload-281181 in network mk-test-preload-281181
	I0116 03:24:08.186279  496516 main.go:141] libmachine: (test-preload-281181) DBG | I0116 03:24:08.186174  496563 retry.go:31] will retry after 273.905518ms: waiting for machine to come up
	I0116 03:24:08.462117  496516 main.go:141] libmachine: (test-preload-281181) DBG | domain test-preload-281181 has defined MAC address 52:54:00:66:33:72 in network mk-test-preload-281181
	I0116 03:24:08.462691  496516 main.go:141] libmachine: (test-preload-281181) DBG | unable to find current IP address of domain test-preload-281181 in network mk-test-preload-281181
	I0116 03:24:08.462714  496516 main.go:141] libmachine: (test-preload-281181) DBG | I0116 03:24:08.462662  496563 retry.go:31] will retry after 354.410459ms: waiting for machine to come up
	I0116 03:24:08.818251  496516 main.go:141] libmachine: (test-preload-281181) DBG | domain test-preload-281181 has defined MAC address 52:54:00:66:33:72 in network mk-test-preload-281181
	I0116 03:24:08.818649  496516 main.go:141] libmachine: (test-preload-281181) DBG | unable to find current IP address of domain test-preload-281181 in network mk-test-preload-281181
	I0116 03:24:08.818680  496516 main.go:141] libmachine: (test-preload-281181) DBG | I0116 03:24:08.818614  496563 retry.go:31] will retry after 416.065849ms: waiting for machine to come up
	I0116 03:24:09.236389  496516 main.go:141] libmachine: (test-preload-281181) DBG | domain test-preload-281181 has defined MAC address 52:54:00:66:33:72 in network mk-test-preload-281181
	I0116 03:24:09.236922  496516 main.go:141] libmachine: (test-preload-281181) DBG | unable to find current IP address of domain test-preload-281181 in network mk-test-preload-281181
	I0116 03:24:09.236954  496516 main.go:141] libmachine: (test-preload-281181) DBG | I0116 03:24:09.236865  496563 retry.go:31] will retry after 600.617455ms: waiting for machine to come up
	I0116 03:24:09.838922  496516 main.go:141] libmachine: (test-preload-281181) DBG | domain test-preload-281181 has defined MAC address 52:54:00:66:33:72 in network mk-test-preload-281181
	I0116 03:24:09.839503  496516 main.go:141] libmachine: (test-preload-281181) DBG | unable to find current IP address of domain test-preload-281181 in network mk-test-preload-281181
	I0116 03:24:09.839530  496516 main.go:141] libmachine: (test-preload-281181) DBG | I0116 03:24:09.839448  496563 retry.go:31] will retry after 516.390664ms: waiting for machine to come up
	I0116 03:24:10.357175  496516 main.go:141] libmachine: (test-preload-281181) DBG | domain test-preload-281181 has defined MAC address 52:54:00:66:33:72 in network mk-test-preload-281181
	I0116 03:24:10.357676  496516 main.go:141] libmachine: (test-preload-281181) DBG | unable to find current IP address of domain test-preload-281181 in network mk-test-preload-281181
	I0116 03:24:10.357711  496516 main.go:141] libmachine: (test-preload-281181) DBG | I0116 03:24:10.357614  496563 retry.go:31] will retry after 650.589836ms: waiting for machine to come up
	I0116 03:24:11.009830  496516 main.go:141] libmachine: (test-preload-281181) DBG | domain test-preload-281181 has defined MAC address 52:54:00:66:33:72 in network mk-test-preload-281181
	I0116 03:24:11.010284  496516 main.go:141] libmachine: (test-preload-281181) DBG | unable to find current IP address of domain test-preload-281181 in network mk-test-preload-281181
	I0116 03:24:11.010310  496516 main.go:141] libmachine: (test-preload-281181) DBG | I0116 03:24:11.010238  496563 retry.go:31] will retry after 829.211701ms: waiting for machine to come up
	I0116 03:24:11.841474  496516 main.go:141] libmachine: (test-preload-281181) DBG | domain test-preload-281181 has defined MAC address 52:54:00:66:33:72 in network mk-test-preload-281181
	I0116 03:24:11.841998  496516 main.go:141] libmachine: (test-preload-281181) DBG | unable to find current IP address of domain test-preload-281181 in network mk-test-preload-281181
	I0116 03:24:11.842022  496516 main.go:141] libmachine: (test-preload-281181) DBG | I0116 03:24:11.841973  496563 retry.go:31] will retry after 1.445454318s: waiting for machine to come up
	I0116 03:24:13.289085  496516 main.go:141] libmachine: (test-preload-281181) DBG | domain test-preload-281181 has defined MAC address 52:54:00:66:33:72 in network mk-test-preload-281181
	I0116 03:24:13.289455  496516 main.go:141] libmachine: (test-preload-281181) DBG | unable to find current IP address of domain test-preload-281181 in network mk-test-preload-281181
	I0116 03:24:13.289478  496516 main.go:141] libmachine: (test-preload-281181) DBG | I0116 03:24:13.289413  496563 retry.go:31] will retry after 1.135603219s: waiting for machine to come up
	I0116 03:24:14.426839  496516 main.go:141] libmachine: (test-preload-281181) DBG | domain test-preload-281181 has defined MAC address 52:54:00:66:33:72 in network mk-test-preload-281181
	I0116 03:24:14.427283  496516 main.go:141] libmachine: (test-preload-281181) DBG | unable to find current IP address of domain test-preload-281181 in network mk-test-preload-281181
	I0116 03:24:14.427315  496516 main.go:141] libmachine: (test-preload-281181) DBG | I0116 03:24:14.427220  496563 retry.go:31] will retry after 2.065952059s: waiting for machine to come up
	I0116 03:24:16.495475  496516 main.go:141] libmachine: (test-preload-281181) DBG | domain test-preload-281181 has defined MAC address 52:54:00:66:33:72 in network mk-test-preload-281181
	I0116 03:24:16.495874  496516 main.go:141] libmachine: (test-preload-281181) DBG | unable to find current IP address of domain test-preload-281181 in network mk-test-preload-281181
	I0116 03:24:16.495910  496516 main.go:141] libmachine: (test-preload-281181) DBG | I0116 03:24:16.495802  496563 retry.go:31] will retry after 2.406164182s: waiting for machine to come up
	I0116 03:24:18.904175  496516 main.go:141] libmachine: (test-preload-281181) DBG | domain test-preload-281181 has defined MAC address 52:54:00:66:33:72 in network mk-test-preload-281181
	I0116 03:24:18.904622  496516 main.go:141] libmachine: (test-preload-281181) DBG | unable to find current IP address of domain test-preload-281181 in network mk-test-preload-281181
	I0116 03:24:18.904656  496516 main.go:141] libmachine: (test-preload-281181) DBG | I0116 03:24:18.904572  496563 retry.go:31] will retry after 2.615702187s: waiting for machine to come up
	I0116 03:24:21.522275  496516 main.go:141] libmachine: (test-preload-281181) DBG | domain test-preload-281181 has defined MAC address 52:54:00:66:33:72 in network mk-test-preload-281181
	I0116 03:24:21.522768  496516 main.go:141] libmachine: (test-preload-281181) DBG | unable to find current IP address of domain test-preload-281181 in network mk-test-preload-281181
	I0116 03:24:21.522803  496516 main.go:141] libmachine: (test-preload-281181) DBG | I0116 03:24:21.522706  496563 retry.go:31] will retry after 3.844553596s: waiting for machine to come up
	I0116 03:24:25.369100  496516 main.go:141] libmachine: (test-preload-281181) DBG | domain test-preload-281181 has defined MAC address 52:54:00:66:33:72 in network mk-test-preload-281181
	I0116 03:24:25.369599  496516 main.go:141] libmachine: (test-preload-281181) DBG | domain test-preload-281181 has current primary IP address 192.168.39.102 and MAC address 52:54:00:66:33:72 in network mk-test-preload-281181
	I0116 03:24:25.369662  496516 main.go:141] libmachine: (test-preload-281181) Found IP for machine: 192.168.39.102
	I0116 03:24:25.369698  496516 main.go:141] libmachine: (test-preload-281181) Reserving static IP address...
	I0116 03:24:25.370102  496516 main.go:141] libmachine: (test-preload-281181) DBG | found host DHCP lease matching {name: "test-preload-281181", mac: "52:54:00:66:33:72", ip: "192.168.39.102"} in network mk-test-preload-281181: {Iface:virbr1 ExpiryTime:2024-01-16 04:22:16 +0000 UTC Type:0 Mac:52:54:00:66:33:72 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:test-preload-281181 Clientid:01:52:54:00:66:33:72}
	I0116 03:24:25.370128  496516 main.go:141] libmachine: (test-preload-281181) DBG | skip adding static IP to network mk-test-preload-281181 - found existing host DHCP lease matching {name: "test-preload-281181", mac: "52:54:00:66:33:72", ip: "192.168.39.102"}
	I0116 03:24:25.370146  496516 main.go:141] libmachine: (test-preload-281181) Reserved static IP address: 192.168.39.102
	I0116 03:24:25.370162  496516 main.go:141] libmachine: (test-preload-281181) DBG | Getting to WaitForSSH function...
	I0116 03:24:25.370174  496516 main.go:141] libmachine: (test-preload-281181) Waiting for SSH to be available...
	I0116 03:24:25.372582  496516 main.go:141] libmachine: (test-preload-281181) DBG | domain test-preload-281181 has defined MAC address 52:54:00:66:33:72 in network mk-test-preload-281181
	I0116 03:24:25.372925  496516 main.go:141] libmachine: (test-preload-281181) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:33:72", ip: ""} in network mk-test-preload-281181: {Iface:virbr1 ExpiryTime:2024-01-16 04:22:16 +0000 UTC Type:0 Mac:52:54:00:66:33:72 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:test-preload-281181 Clientid:01:52:54:00:66:33:72}
	I0116 03:24:25.372954  496516 main.go:141] libmachine: (test-preload-281181) DBG | domain test-preload-281181 has defined IP address 192.168.39.102 and MAC address 52:54:00:66:33:72 in network mk-test-preload-281181
	I0116 03:24:25.373165  496516 main.go:141] libmachine: (test-preload-281181) DBG | Using SSH client type: external
	I0116 03:24:25.373185  496516 main.go:141] libmachine: (test-preload-281181) DBG | Using SSH private key: /home/jenkins/minikube-integration/17965-468241/.minikube/machines/test-preload-281181/id_rsa (-rw-------)
	I0116 03:24:25.373228  496516 main.go:141] libmachine: (test-preload-281181) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.102 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17965-468241/.minikube/machines/test-preload-281181/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0116 03:24:25.373273  496516 main.go:141] libmachine: (test-preload-281181) DBG | About to run SSH command:
	I0116 03:24:25.373292  496516 main.go:141] libmachine: (test-preload-281181) DBG | exit 0
	I0116 03:24:25.460215  496516 main.go:141] libmachine: (test-preload-281181) DBG | SSH cmd err, output: <nil>: 
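Note: the lines above show the driver polling the freshly started VM: it waits for a DHCP lease with growing delays, then probes the guest with an external `ssh ... exit 0` until the command succeeds. A minimal Go sketch of that wait-for-SSH loop, assuming a simplified TCP-level probe and illustrative backoff values rather than minikube's actual retry.go behaviour:

package main

import (
	"fmt"
	"net"
	"time"
)

// sshReady is a stand-in probe: the real driver shells out to
// `ssh ... exit 0`; here we only check that the TCP port accepts connections.
func sshReady(addr string) bool {
	conn, err := net.DialTimeout("tcp", addr, 10*time.Second)
	if err != nil {
		return false
	}
	conn.Close()
	return true
}

func waitForSSH(addr string, maxWait time.Duration) error {
	deadline := time.Now().Add(maxWait)
	delay := time.Second // illustrative; the log shows jittered, growing delays
	for time.Now().Before(deadline) {
		if sshReady(addr) {
			return nil
		}
		fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
		time.Sleep(delay)
		delay += delay / 2 // grow the backoff, roughly like the retries above
	}
	return fmt.Errorf("timed out waiting for SSH on %s", addr)
}

func main() {
	if err := waitForSSH("192.168.39.102:22", 3*time.Minute); err != nil {
		fmt.Println(err)
	}
}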
	I0116 03:24:25.460635  496516 main.go:141] libmachine: (test-preload-281181) Calling .GetConfigRaw
	I0116 03:24:25.461483  496516 main.go:141] libmachine: (test-preload-281181) Calling .GetIP
	I0116 03:24:25.463944  496516 main.go:141] libmachine: (test-preload-281181) DBG | domain test-preload-281181 has defined MAC address 52:54:00:66:33:72 in network mk-test-preload-281181
	I0116 03:24:25.464252  496516 main.go:141] libmachine: (test-preload-281181) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:33:72", ip: ""} in network mk-test-preload-281181: {Iface:virbr1 ExpiryTime:2024-01-16 04:22:16 +0000 UTC Type:0 Mac:52:54:00:66:33:72 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:test-preload-281181 Clientid:01:52:54:00:66:33:72}
	I0116 03:24:25.464291  496516 main.go:141] libmachine: (test-preload-281181) DBG | domain test-preload-281181 has defined IP address 192.168.39.102 and MAC address 52:54:00:66:33:72 in network mk-test-preload-281181
	I0116 03:24:25.464554  496516 profile.go:148] Saving config to /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/test-preload-281181/config.json ...
	I0116 03:24:25.464751  496516 machine.go:88] provisioning docker machine ...
	I0116 03:24:25.464772  496516 main.go:141] libmachine: (test-preload-281181) Calling .DriverName
	I0116 03:24:25.465013  496516 main.go:141] libmachine: (test-preload-281181) Calling .GetMachineName
	I0116 03:24:25.465183  496516 buildroot.go:166] provisioning hostname "test-preload-281181"
	I0116 03:24:25.465212  496516 main.go:141] libmachine: (test-preload-281181) Calling .GetMachineName
	I0116 03:24:25.465375  496516 main.go:141] libmachine: (test-preload-281181) Calling .GetSSHHostname
	I0116 03:24:25.467312  496516 main.go:141] libmachine: (test-preload-281181) DBG | domain test-preload-281181 has defined MAC address 52:54:00:66:33:72 in network mk-test-preload-281181
	I0116 03:24:25.467605  496516 main.go:141] libmachine: (test-preload-281181) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:33:72", ip: ""} in network mk-test-preload-281181: {Iface:virbr1 ExpiryTime:2024-01-16 04:22:16 +0000 UTC Type:0 Mac:52:54:00:66:33:72 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:test-preload-281181 Clientid:01:52:54:00:66:33:72}
	I0116 03:24:25.467641  496516 main.go:141] libmachine: (test-preload-281181) DBG | domain test-preload-281181 has defined IP address 192.168.39.102 and MAC address 52:54:00:66:33:72 in network mk-test-preload-281181
	I0116 03:24:25.467860  496516 main.go:141] libmachine: (test-preload-281181) Calling .GetSSHPort
	I0116 03:24:25.468090  496516 main.go:141] libmachine: (test-preload-281181) Calling .GetSSHKeyPath
	I0116 03:24:25.468282  496516 main.go:141] libmachine: (test-preload-281181) Calling .GetSSHKeyPath
	I0116 03:24:25.468456  496516 main.go:141] libmachine: (test-preload-281181) Calling .GetSSHUsername
	I0116 03:24:25.468642  496516 main.go:141] libmachine: Using SSH client type: native
	I0116 03:24:25.469031  496516 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I0116 03:24:25.469049  496516 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-281181 && echo "test-preload-281181" | sudo tee /etc/hostname
	I0116 03:24:25.597955  496516 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-281181
	
	I0116 03:24:25.597990  496516 main.go:141] libmachine: (test-preload-281181) Calling .GetSSHHostname
	I0116 03:24:25.600786  496516 main.go:141] libmachine: (test-preload-281181) DBG | domain test-preload-281181 has defined MAC address 52:54:00:66:33:72 in network mk-test-preload-281181
	I0116 03:24:25.601232  496516 main.go:141] libmachine: (test-preload-281181) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:33:72", ip: ""} in network mk-test-preload-281181: {Iface:virbr1 ExpiryTime:2024-01-16 04:22:16 +0000 UTC Type:0 Mac:52:54:00:66:33:72 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:test-preload-281181 Clientid:01:52:54:00:66:33:72}
	I0116 03:24:25.601277  496516 main.go:141] libmachine: (test-preload-281181) DBG | domain test-preload-281181 has defined IP address 192.168.39.102 and MAC address 52:54:00:66:33:72 in network mk-test-preload-281181
	I0116 03:24:25.601563  496516 main.go:141] libmachine: (test-preload-281181) Calling .GetSSHPort
	I0116 03:24:25.601781  496516 main.go:141] libmachine: (test-preload-281181) Calling .GetSSHKeyPath
	I0116 03:24:25.601945  496516 main.go:141] libmachine: (test-preload-281181) Calling .GetSSHKeyPath
	I0116 03:24:25.602115  496516 main.go:141] libmachine: (test-preload-281181) Calling .GetSSHUsername
	I0116 03:24:25.602337  496516 main.go:141] libmachine: Using SSH client type: native
	I0116 03:24:25.602679  496516 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I0116 03:24:25.602699  496516 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-281181' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-281181/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-281181' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0116 03:24:25.725980  496516 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0116 03:24:25.726021  496516 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17965-468241/.minikube CaCertPath:/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17965-468241/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17965-468241/.minikube}
	I0116 03:24:25.726046  496516 buildroot.go:174] setting up certificates
	I0116 03:24:25.726056  496516 provision.go:83] configureAuth start
	I0116 03:24:25.726066  496516 main.go:141] libmachine: (test-preload-281181) Calling .GetMachineName
	I0116 03:24:25.726421  496516 main.go:141] libmachine: (test-preload-281181) Calling .GetIP
	I0116 03:24:25.729416  496516 main.go:141] libmachine: (test-preload-281181) DBG | domain test-preload-281181 has defined MAC address 52:54:00:66:33:72 in network mk-test-preload-281181
	I0116 03:24:25.729769  496516 main.go:141] libmachine: (test-preload-281181) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:33:72", ip: ""} in network mk-test-preload-281181: {Iface:virbr1 ExpiryTime:2024-01-16 04:22:16 +0000 UTC Type:0 Mac:52:54:00:66:33:72 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:test-preload-281181 Clientid:01:52:54:00:66:33:72}
	I0116 03:24:25.729801  496516 main.go:141] libmachine: (test-preload-281181) DBG | domain test-preload-281181 has defined IP address 192.168.39.102 and MAC address 52:54:00:66:33:72 in network mk-test-preload-281181
	I0116 03:24:25.729970  496516 main.go:141] libmachine: (test-preload-281181) Calling .GetSSHHostname
	I0116 03:24:25.732193  496516 main.go:141] libmachine: (test-preload-281181) DBG | domain test-preload-281181 has defined MAC address 52:54:00:66:33:72 in network mk-test-preload-281181
	I0116 03:24:25.732522  496516 main.go:141] libmachine: (test-preload-281181) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:33:72", ip: ""} in network mk-test-preload-281181: {Iface:virbr1 ExpiryTime:2024-01-16 04:22:16 +0000 UTC Type:0 Mac:52:54:00:66:33:72 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:test-preload-281181 Clientid:01:52:54:00:66:33:72}
	I0116 03:24:25.732554  496516 main.go:141] libmachine: (test-preload-281181) DBG | domain test-preload-281181 has defined IP address 192.168.39.102 and MAC address 52:54:00:66:33:72 in network mk-test-preload-281181
	I0116 03:24:25.732664  496516 provision.go:138] copyHostCerts
	I0116 03:24:25.732741  496516 exec_runner.go:144] found /home/jenkins/minikube-integration/17965-468241/.minikube/cert.pem, removing ...
	I0116 03:24:25.732763  496516 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17965-468241/.minikube/cert.pem
	I0116 03:24:25.732846  496516 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17965-468241/.minikube/cert.pem (1123 bytes)
	I0116 03:24:25.732983  496516 exec_runner.go:144] found /home/jenkins/minikube-integration/17965-468241/.minikube/key.pem, removing ...
	I0116 03:24:25.732999  496516 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17965-468241/.minikube/key.pem
	I0116 03:24:25.733039  496516 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17965-468241/.minikube/key.pem (1679 bytes)
	I0116 03:24:25.733109  496516 exec_runner.go:144] found /home/jenkins/minikube-integration/17965-468241/.minikube/ca.pem, removing ...
	I0116 03:24:25.733120  496516 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17965-468241/.minikube/ca.pem
	I0116 03:24:25.733152  496516 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17965-468241/.minikube/ca.pem (1078 bytes)
	I0116 03:24:25.733210  496516 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17965-468241/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca-key.pem org=jenkins.test-preload-281181 san=[192.168.39.102 192.168.39.102 localhost 127.0.0.1 minikube test-preload-281181]
	I0116 03:24:25.813791  496516 provision.go:172] copyRemoteCerts
	I0116 03:24:25.813862  496516 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0116 03:24:25.813890  496516 main.go:141] libmachine: (test-preload-281181) Calling .GetSSHHostname
	I0116 03:24:25.816766  496516 main.go:141] libmachine: (test-preload-281181) DBG | domain test-preload-281181 has defined MAC address 52:54:00:66:33:72 in network mk-test-preload-281181
	I0116 03:24:25.817120  496516 main.go:141] libmachine: (test-preload-281181) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:33:72", ip: ""} in network mk-test-preload-281181: {Iface:virbr1 ExpiryTime:2024-01-16 04:22:16 +0000 UTC Type:0 Mac:52:54:00:66:33:72 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:test-preload-281181 Clientid:01:52:54:00:66:33:72}
	I0116 03:24:25.817165  496516 main.go:141] libmachine: (test-preload-281181) DBG | domain test-preload-281181 has defined IP address 192.168.39.102 and MAC address 52:54:00:66:33:72 in network mk-test-preload-281181
	I0116 03:24:25.817316  496516 main.go:141] libmachine: (test-preload-281181) Calling .GetSSHPort
	I0116 03:24:25.817494  496516 main.go:141] libmachine: (test-preload-281181) Calling .GetSSHKeyPath
	I0116 03:24:25.817717  496516 main.go:141] libmachine: (test-preload-281181) Calling .GetSSHUsername
	I0116 03:24:25.817909  496516 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/test-preload-281181/id_rsa Username:docker}
	I0116 03:24:25.905384  496516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0116 03:24:25.930802  496516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0116 03:24:25.954734  496516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0116 03:24:25.977851  496516 provision.go:86] duration metric: configureAuth took 251.780388ms
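Note: configureAuth regenerates the machine's server certificate so that its SANs cover the node IP, localhost and the hostname (the `san=[...]` list logged above), then scp's the PEM files into /etc/docker. The following is an illustrative standard-library sketch of issuing such a SAN-bearing certificate from a CA key pair; it is not minikube's provisioning code, and it generates a throwaway CA instead of reusing ca.pem/ca-key.pem:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA (the real flow reuses ca.pem / ca-key.pem from .minikube/certs).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server cert with the SAN list seen in the log above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.test-preload-281181"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(1, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "test-preload-281181"},
		IPAddresses:  []net.IP{net.ParseIP("192.168.39.102"), net.ParseIP("127.0.0.1")},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)

	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}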
	I0116 03:24:25.977880  496516 buildroot.go:189] setting minikube options for container-runtime
	I0116 03:24:25.978088  496516 config.go:182] Loaded profile config "test-preload-281181": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0116 03:24:25.978214  496516 main.go:141] libmachine: (test-preload-281181) Calling .GetSSHHostname
	I0116 03:24:25.981018  496516 main.go:141] libmachine: (test-preload-281181) DBG | domain test-preload-281181 has defined MAC address 52:54:00:66:33:72 in network mk-test-preload-281181
	I0116 03:24:25.981365  496516 main.go:141] libmachine: (test-preload-281181) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:33:72", ip: ""} in network mk-test-preload-281181: {Iface:virbr1 ExpiryTime:2024-01-16 04:22:16 +0000 UTC Type:0 Mac:52:54:00:66:33:72 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:test-preload-281181 Clientid:01:52:54:00:66:33:72}
	I0116 03:24:25.981405  496516 main.go:141] libmachine: (test-preload-281181) DBG | domain test-preload-281181 has defined IP address 192.168.39.102 and MAC address 52:54:00:66:33:72 in network mk-test-preload-281181
	I0116 03:24:25.981594  496516 main.go:141] libmachine: (test-preload-281181) Calling .GetSSHPort
	I0116 03:24:25.981771  496516 main.go:141] libmachine: (test-preload-281181) Calling .GetSSHKeyPath
	I0116 03:24:25.981917  496516 main.go:141] libmachine: (test-preload-281181) Calling .GetSSHKeyPath
	I0116 03:24:25.982046  496516 main.go:141] libmachine: (test-preload-281181) Calling .GetSSHUsername
	I0116 03:24:25.982266  496516 main.go:141] libmachine: Using SSH client type: native
	I0116 03:24:25.982583  496516 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I0116 03:24:25.982599  496516 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0116 03:24:26.303030  496516 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0116 03:24:26.303058  496516 machine.go:91] provisioned docker machine in 838.292614ms
	I0116 03:24:26.303070  496516 start.go:300] post-start starting for "test-preload-281181" (driver="kvm2")
	I0116 03:24:26.303087  496516 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0116 03:24:26.303117  496516 main.go:141] libmachine: (test-preload-281181) Calling .DriverName
	I0116 03:24:26.303595  496516 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0116 03:24:26.303627  496516 main.go:141] libmachine: (test-preload-281181) Calling .GetSSHHostname
	I0116 03:24:26.306540  496516 main.go:141] libmachine: (test-preload-281181) DBG | domain test-preload-281181 has defined MAC address 52:54:00:66:33:72 in network mk-test-preload-281181
	I0116 03:24:26.307042  496516 main.go:141] libmachine: (test-preload-281181) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:33:72", ip: ""} in network mk-test-preload-281181: {Iface:virbr1 ExpiryTime:2024-01-16 04:22:16 +0000 UTC Type:0 Mac:52:54:00:66:33:72 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:test-preload-281181 Clientid:01:52:54:00:66:33:72}
	I0116 03:24:26.307071  496516 main.go:141] libmachine: (test-preload-281181) DBG | domain test-preload-281181 has defined IP address 192.168.39.102 and MAC address 52:54:00:66:33:72 in network mk-test-preload-281181
	I0116 03:24:26.307203  496516 main.go:141] libmachine: (test-preload-281181) Calling .GetSSHPort
	I0116 03:24:26.307440  496516 main.go:141] libmachine: (test-preload-281181) Calling .GetSSHKeyPath
	I0116 03:24:26.307625  496516 main.go:141] libmachine: (test-preload-281181) Calling .GetSSHUsername
	I0116 03:24:26.307777  496516 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/test-preload-281181/id_rsa Username:docker}
	I0116 03:24:26.394230  496516 ssh_runner.go:195] Run: cat /etc/os-release
	I0116 03:24:26.398799  496516 info.go:137] Remote host: Buildroot 2021.02.12
	I0116 03:24:26.398829  496516 filesync.go:126] Scanning /home/jenkins/minikube-integration/17965-468241/.minikube/addons for local assets ...
	I0116 03:24:26.398937  496516 filesync.go:126] Scanning /home/jenkins/minikube-integration/17965-468241/.minikube/files for local assets ...
	I0116 03:24:26.399025  496516 filesync.go:149] local asset: /home/jenkins/minikube-integration/17965-468241/.minikube/files/etc/ssl/certs/4754782.pem -> 4754782.pem in /etc/ssl/certs
	I0116 03:24:26.399130  496516 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0116 03:24:26.408229  496516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/files/etc/ssl/certs/4754782.pem --> /etc/ssl/certs/4754782.pem (1708 bytes)
	I0116 03:24:26.433574  496516 start.go:303] post-start completed in 130.486826ms
	I0116 03:24:26.433606  496516 fix.go:56] fixHost completed within 18.612756228s
	I0116 03:24:26.433636  496516 main.go:141] libmachine: (test-preload-281181) Calling .GetSSHHostname
	I0116 03:24:26.436468  496516 main.go:141] libmachine: (test-preload-281181) DBG | domain test-preload-281181 has defined MAC address 52:54:00:66:33:72 in network mk-test-preload-281181
	I0116 03:24:26.436866  496516 main.go:141] libmachine: (test-preload-281181) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:33:72", ip: ""} in network mk-test-preload-281181: {Iface:virbr1 ExpiryTime:2024-01-16 04:22:16 +0000 UTC Type:0 Mac:52:54:00:66:33:72 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:test-preload-281181 Clientid:01:52:54:00:66:33:72}
	I0116 03:24:26.436900  496516 main.go:141] libmachine: (test-preload-281181) DBG | domain test-preload-281181 has defined IP address 192.168.39.102 and MAC address 52:54:00:66:33:72 in network mk-test-preload-281181
	I0116 03:24:26.437113  496516 main.go:141] libmachine: (test-preload-281181) Calling .GetSSHPort
	I0116 03:24:26.437380  496516 main.go:141] libmachine: (test-preload-281181) Calling .GetSSHKeyPath
	I0116 03:24:26.437590  496516 main.go:141] libmachine: (test-preload-281181) Calling .GetSSHKeyPath
	I0116 03:24:26.437732  496516 main.go:141] libmachine: (test-preload-281181) Calling .GetSSHUsername
	I0116 03:24:26.437932  496516 main.go:141] libmachine: Using SSH client type: native
	I0116 03:24:26.438334  496516 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I0116 03:24:26.438354  496516 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0116 03:24:26.553034  496516 main.go:141] libmachine: SSH cmd err, output: <nil>: 1705375466.501105941
	
	I0116 03:24:26.553059  496516 fix.go:206] guest clock: 1705375466.501105941
	I0116 03:24:26.553067  496516 fix.go:219] Guest: 2024-01-16 03:24:26.501105941 +0000 UTC Remote: 2024-01-16 03:24:26.433611609 +0000 UTC m=+23.579123808 (delta=67.494332ms)
	I0116 03:24:26.553089  496516 fix.go:190] guest clock delta is within tolerance: 67.494332ms
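Note: the guest-clock check above runs `date` on the VM, parses the seconds.nanoseconds value, and compares it with the host's wall clock; only if the delta exceeded the tolerance would the guest time be adjusted. A small sketch of that comparison, with an assumed tolerance value for illustration:

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

func main() {
	// Output of `date +%s.%N` on the guest, as seen in the log above.
	guestRaw := "1705375466.501105941"
	parts := strings.SplitN(guestRaw, ".", 2)
	sec, _ := strconv.ParseInt(parts[0], 10, 64)
	nsec, _ := strconv.ParseInt(parts[1], 10, 64)
	guest := time.Unix(sec, nsec)

	host := time.Now()
	delta := host.Sub(guest)
	if delta < 0 {
		delta = -delta
	}

	const tolerance = 2 * time.Second // illustrative threshold, not minikube's exact value
	if delta <= tolerance {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance, would adjust\n", delta)
	}
}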
	I0116 03:24:26.553094  496516 start.go:83] releasing machines lock for "test-preload-281181", held for 18.732267721s
	I0116 03:24:26.553116  496516 main.go:141] libmachine: (test-preload-281181) Calling .DriverName
	I0116 03:24:26.553427  496516 main.go:141] libmachine: (test-preload-281181) Calling .GetIP
	I0116 03:24:26.556086  496516 main.go:141] libmachine: (test-preload-281181) DBG | domain test-preload-281181 has defined MAC address 52:54:00:66:33:72 in network mk-test-preload-281181
	I0116 03:24:26.556493  496516 main.go:141] libmachine: (test-preload-281181) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:33:72", ip: ""} in network mk-test-preload-281181: {Iface:virbr1 ExpiryTime:2024-01-16 04:22:16 +0000 UTC Type:0 Mac:52:54:00:66:33:72 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:test-preload-281181 Clientid:01:52:54:00:66:33:72}
	I0116 03:24:26.556525  496516 main.go:141] libmachine: (test-preload-281181) DBG | domain test-preload-281181 has defined IP address 192.168.39.102 and MAC address 52:54:00:66:33:72 in network mk-test-preload-281181
	I0116 03:24:26.556715  496516 main.go:141] libmachine: (test-preload-281181) Calling .DriverName
	I0116 03:24:26.557261  496516 main.go:141] libmachine: (test-preload-281181) Calling .DriverName
	I0116 03:24:26.557508  496516 main.go:141] libmachine: (test-preload-281181) Calling .DriverName
	I0116 03:24:26.557599  496516 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0116 03:24:26.557647  496516 main.go:141] libmachine: (test-preload-281181) Calling .GetSSHHostname
	I0116 03:24:26.557778  496516 ssh_runner.go:195] Run: cat /version.json
	I0116 03:24:26.557806  496516 main.go:141] libmachine: (test-preload-281181) Calling .GetSSHHostname
	I0116 03:24:26.560346  496516 main.go:141] libmachine: (test-preload-281181) DBG | domain test-preload-281181 has defined MAC address 52:54:00:66:33:72 in network mk-test-preload-281181
	I0116 03:24:26.560787  496516 main.go:141] libmachine: (test-preload-281181) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:33:72", ip: ""} in network mk-test-preload-281181: {Iface:virbr1 ExpiryTime:2024-01-16 04:22:16 +0000 UTC Type:0 Mac:52:54:00:66:33:72 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:test-preload-281181 Clientid:01:52:54:00:66:33:72}
	I0116 03:24:26.560821  496516 main.go:141] libmachine: (test-preload-281181) DBG | domain test-preload-281181 has defined MAC address 52:54:00:66:33:72 in network mk-test-preload-281181
	I0116 03:24:26.560847  496516 main.go:141] libmachine: (test-preload-281181) DBG | domain test-preload-281181 has defined IP address 192.168.39.102 and MAC address 52:54:00:66:33:72 in network mk-test-preload-281181
	I0116 03:24:26.560919  496516 main.go:141] libmachine: (test-preload-281181) Calling .GetSSHPort
	I0116 03:24:26.561119  496516 main.go:141] libmachine: (test-preload-281181) Calling .GetSSHKeyPath
	I0116 03:24:26.561197  496516 main.go:141] libmachine: (test-preload-281181) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:33:72", ip: ""} in network mk-test-preload-281181: {Iface:virbr1 ExpiryTime:2024-01-16 04:22:16 +0000 UTC Type:0 Mac:52:54:00:66:33:72 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:test-preload-281181 Clientid:01:52:54:00:66:33:72}
	I0116 03:24:26.561228  496516 main.go:141] libmachine: (test-preload-281181) DBG | domain test-preload-281181 has defined IP address 192.168.39.102 and MAC address 52:54:00:66:33:72 in network mk-test-preload-281181
	I0116 03:24:26.561302  496516 main.go:141] libmachine: (test-preload-281181) Calling .GetSSHUsername
	I0116 03:24:26.561390  496516 main.go:141] libmachine: (test-preload-281181) Calling .GetSSHPort
	I0116 03:24:26.561489  496516 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/test-preload-281181/id_rsa Username:docker}
	I0116 03:24:26.561558  496516 main.go:141] libmachine: (test-preload-281181) Calling .GetSSHKeyPath
	I0116 03:24:26.561687  496516 main.go:141] libmachine: (test-preload-281181) Calling .GetSSHUsername
	I0116 03:24:26.561864  496516 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/test-preload-281181/id_rsa Username:docker}
	I0116 03:24:26.654535  496516 ssh_runner.go:195] Run: systemctl --version
	I0116 03:24:26.680135  496516 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0116 03:24:26.822835  496516 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0116 03:24:26.828978  496516 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0116 03:24:26.829051  496516 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0116 03:24:26.845769  496516 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0116 03:24:26.845801  496516 start.go:475] detecting cgroup driver to use...
	I0116 03:24:26.845873  496516 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0116 03:24:26.859684  496516 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0116 03:24:26.873356  496516 docker.go:217] disabling cri-docker service (if available) ...
	I0116 03:24:26.873448  496516 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0116 03:24:26.889772  496516 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0116 03:24:26.903699  496516 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0116 03:24:27.008149  496516 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0116 03:24:27.123495  496516 docker.go:233] disabling docker service ...
	I0116 03:24:27.123594  496516 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0116 03:24:27.138164  496516 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0116 03:24:27.150732  496516 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0116 03:24:27.253990  496516 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0116 03:24:27.362181  496516 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0116 03:24:27.375659  496516 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0116 03:24:27.394201  496516 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.7" pause image...
	I0116 03:24:27.394264  496516 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 03:24:27.405170  496516 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0116 03:24:27.405238  496516 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 03:24:27.416424  496516 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 03:24:27.428122  496516 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 03:24:27.439094  496516 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0116 03:24:27.450090  496516 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0116 03:24:27.461018  496516 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0116 03:24:27.461074  496516 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0116 03:24:27.476856  496516 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0116 03:24:27.486570  496516 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0116 03:24:27.611661  496516 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0116 03:24:27.783261  496516 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0116 03:24:27.783343  496516 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0116 03:24:27.788662  496516 start.go:543] Will wait 60s for crictl version
	I0116 03:24:27.788731  496516 ssh_runner.go:195] Run: which crictl
	I0116 03:24:27.792991  496516 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0116 03:24:27.834081  496516 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0116 03:24:27.834253  496516 ssh_runner.go:195] Run: crio --version
	I0116 03:24:27.885316  496516 ssh_runner.go:195] Run: crio --version
	I0116 03:24:27.933273  496516 out.go:177] * Preparing Kubernetes v1.24.4 on CRI-O 1.24.1 ...
	I0116 03:24:27.935837  496516 main.go:141] libmachine: (test-preload-281181) Calling .GetIP
	I0116 03:24:27.938628  496516 main.go:141] libmachine: (test-preload-281181) DBG | domain test-preload-281181 has defined MAC address 52:54:00:66:33:72 in network mk-test-preload-281181
	I0116 03:24:27.938988  496516 main.go:141] libmachine: (test-preload-281181) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:33:72", ip: ""} in network mk-test-preload-281181: {Iface:virbr1 ExpiryTime:2024-01-16 04:22:16 +0000 UTC Type:0 Mac:52:54:00:66:33:72 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:test-preload-281181 Clientid:01:52:54:00:66:33:72}
	I0116 03:24:27.939027  496516 main.go:141] libmachine: (test-preload-281181) DBG | domain test-preload-281181 has defined IP address 192.168.39.102 and MAC address 52:54:00:66:33:72 in network mk-test-preload-281181
	I0116 03:24:27.939220  496516 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0116 03:24:27.943765  496516 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 03:24:27.957479  496516 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0116 03:24:27.957571  496516 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 03:24:27.998561  496516 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0116 03:24:27.998649  496516 ssh_runner.go:195] Run: which lz4
	I0116 03:24:28.003068  496516 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0116 03:24:28.007814  496516 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0116 03:24:28.007850  496516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (459355427 bytes)
	I0116 03:24:29.925462  496516 crio.go:444] Took 1.922430 seconds to copy over tarball
	I0116 03:24:29.925546  496516 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0116 03:24:32.988223  496516 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.062640966s)
	I0116 03:24:32.988264  496516 crio.go:451] Took 3.062769 seconds to extract the tarball
	I0116 03:24:32.988278  496516 ssh_runner.go:146] rm: /preloaded.tar.lz4
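Note: because `crictl images` reported none of the expected images, the preload path kicks in: check whether /preloaded.tar.lz4 already exists on the guest, copy the cached tarball over if not, unpack it into /var with lz4-compressed tar, and delete the tarball. A sketch of that decision flow; the `run` helper executes commands locally purely for illustration, whereas the log above runs the same commands on the guest through ssh_runner:

package main

import (
	"fmt"
	"os/exec"
)

// run executes a command locally for illustration; in the log above the same
// commands are executed on the guest over SSH.
func run(name string, args ...string) error {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%s %v: %v\n%s", name, args, err, out)
	}
	return nil
}

// ensurePreload mirrors the flow above: if the tarball is not already present,
// copy it from the host cache, unpack it into /var, then remove it.
func ensurePreload(tarball string) error {
	if err := run("stat", "-c", "%s %y", tarball); err == nil {
		return nil // already present, nothing to copy
	}
	// In the real flow this is an scp from the host-side cache path shown in the log.
	if err := run("cp",
		"/home/jenkins/minikube-integration/17965-468241/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4",
		tarball); err != nil {
		return err
	}
	if err := run("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball); err != nil {
		return err
	}
	return run("sudo", "rm", "-f", tarball)
}

func main() {
	if err := ensurePreload("/preloaded.tar.lz4"); err != nil {
		fmt.Println(err)
	}
}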
	I0116 03:24:33.029491  496516 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 03:24:33.084164  496516 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0116 03:24:33.084192  496516 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.24.4 registry.k8s.io/kube-controller-manager:v1.24.4 registry.k8s.io/kube-scheduler:v1.24.4 registry.k8s.io/kube-proxy:v1.24.4 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0116 03:24:33.084248  496516 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 03:24:33.084320  496516 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0116 03:24:33.084343  496516 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0116 03:24:33.084409  496516 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0116 03:24:33.084433  496516 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0116 03:24:33.084470  496516 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0116 03:24:33.084494  496516 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0116 03:24:33.084627  496516 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0116 03:24:33.085754  496516 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0116 03:24:33.085766  496516 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0116 03:24:33.085786  496516 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0116 03:24:33.085805  496516 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0116 03:24:33.085813  496516 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 03:24:33.085827  496516 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0116 03:24:33.085758  496516 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0116 03:24:33.085996  496516 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0116 03:24:33.257769  496516 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0116 03:24:33.260921  496516 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.4
	I0116 03:24:33.264285  496516 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0116 03:24:33.268916  496516 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0116 03:24:33.279068  496516 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.4
	I0116 03:24:33.281266  496516 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.4
	I0116 03:24:33.316690  496516 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.4
	I0116 03:24:33.370594  496516 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
	I0116 03:24:33.370666  496516 cri.go:218] Removing image: registry.k8s.io/pause:3.7
	I0116 03:24:33.370723  496516 ssh_runner.go:195] Run: which crictl
	I0116 03:24:33.399116  496516 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 03:24:33.409777  496516 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I0116 03:24:33.409829  496516 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0116 03:24:33.409850  496516 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.4" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.4" does not exist at hash "03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9" in container runtime
	I0116 03:24:33.409883  496516 ssh_runner.go:195] Run: which crictl
	I0116 03:24:33.409890  496516 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.24.4
	I0116 03:24:33.409946  496516 ssh_runner.go:195] Run: which crictl
	I0116 03:24:33.464181  496516 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I0116 03:24:33.464243  496516 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0116 03:24:33.464306  496516 ssh_runner.go:195] Run: which crictl
	I0116 03:24:33.484018  496516 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.4" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.4" does not exist at hash "6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d" in container runtime
	I0116 03:24:33.484097  496516 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.24.4
	I0116 03:24:33.484162  496516 ssh_runner.go:195] Run: which crictl
	I0116 03:24:33.485528  496516 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.4" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.4" does not exist at hash "1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48" in container runtime
	I0116 03:24:33.485579  496516 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0116 03:24:33.485615  496516 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.4" needs transfer: "registry.k8s.io/kube-proxy:v1.24.4" does not exist at hash "7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7" in container runtime
	I0116 03:24:33.485634  496516 ssh_runner.go:195] Run: which crictl
	I0116 03:24:33.485653  496516 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.24.4
	I0116 03:24:33.485683  496516 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0116 03:24:33.485701  496516 ssh_runner.go:195] Run: which crictl
	I0116 03:24:33.607816  496516 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0116 03:24:33.607856  496516 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0116 03:24:33.607907  496516 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0116 03:24:33.607945  496516 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0116 03:24:33.608008  496516 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0116 03:24:33.608084  496516 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0116 03:24:33.608140  496516 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17965-468241/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7
	I0116 03:24:33.608224  496516 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.7
	I0116 03:24:33.769232  496516 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17965-468241/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0116 03:24:33.769295  496516 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17965-468241/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4
	I0116 03:24:33.769333  496516 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0116 03:24:33.769372  496516 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17965-468241/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0
	I0116 03:24:33.769396  496516 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.24.4
	I0116 03:24:33.769469  496516 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17965-468241/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4
	I0116 03:24:33.769471  496516 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.3-0
	I0116 03:24:33.769524  496516 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0116 03:24:33.769535  496516 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17965-468241/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6
	I0116 03:24:33.769577  496516 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17965-468241/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4
	I0116 03:24:33.769599  496516 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.8.6
	I0116 03:24:33.769629  496516 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.7 (exists)
	I0116 03:24:33.769644  496516 crio.go:257] Loading image: /var/lib/minikube/images/pause_3.7
	I0116 03:24:33.769650  496516 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0116 03:24:33.769673  496516 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.7
	I0116 03:24:36.843067  496516 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/pause_3.7: (3.07335883s)
	I0116 03:24:36.843116  496516 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17965-468241/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7 from cache
	I0116 03:24:36.843126  496516 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.3-0: (3.073586616s)
	I0116 03:24:36.843161  496516 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.3-0 (exists)
	I0116 03:24:36.843169  496516 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0116 03:24:36.843209  496516 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.24.4: (3.073655213s)
	I0116 03:24:36.843231  496516 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.24.4 (exists)
	I0116 03:24:36.843240  496516 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0
	I0116 03:24:36.843266  496516 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.24.4: (3.073597815s)
	I0116 03:24:36.843294  496516 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.24.4 (exists)
	I0116 03:24:36.843333  496516 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.8.6: (3.073717768s)
	I0116 03:24:36.843363  496516 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.8.6 (exists)
	I0116 03:24:36.843371  496516 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.24.4: (3.074023304s)
	I0116 03:24:36.843386  496516 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.24.4 (exists)
	I0116 03:24:36.843421  496516 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.24.4: (3.074010337s)
	I0116 03:24:36.843446  496516 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.24.4 (exists)
	I0116 03:24:39.095268  496516 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0: (2.252006173s)
	I0116 03:24:39.095304  496516 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17965-468241/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0116 03:24:39.095330  496516 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0116 03:24:39.095374  496516 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0116 03:24:39.540959  496516 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17965-468241/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4 from cache
	I0116 03:24:39.541017  496516 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0116 03:24:39.541077  496516 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0116 03:24:40.389735  496516 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17965-468241/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4 from cache
	I0116 03:24:40.389795  496516 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0116 03:24:40.389865  496516 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6
	I0116 03:24:40.839400  496516 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17965-468241/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0116 03:24:40.839452  496516 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0116 03:24:40.839497  496516 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0116 03:24:41.594376  496516 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17965-468241/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4 from cache
	I0116 03:24:41.594432  496516 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.24.4
	I0116 03:24:41.594545  496516 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4
	I0116 03:24:42.450593  496516 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17965-468241/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4 from cache
	I0116 03:24:42.450653  496516 cache_images.go:123] Successfully loaded all cached images
	I0116 03:24:42.450661  496516 cache_images.go:92] LoadImages completed in 9.366457644s
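Note: each cached image was only transferred after `crictl images --output json` showed it missing from the runtime. A hedged sketch of that check, parsing just the repoTags field of the crictl JSON (the struct covers only what this sketch needs) and diffing it against the required list:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// crictlImages models just enough of `crictl images --output json`.
type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func missingImages(required []string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return nil, err
	}
	var got crictlImages
	if err := json.Unmarshal(out, &got); err != nil {
		return nil, err
	}
	have := map[string]bool{}
	for _, img := range got.Images {
		for _, tag := range img.RepoTags {
			have[tag] = true
		}
	}
	var missing []string
	for _, r := range required {
		if !have[r] {
			missing = append(missing, r)
		}
	}
	return missing, nil
}

func main() {
	miss, err := missingImages([]string{"registry.k8s.io/kube-apiserver:v1.24.4", "registry.k8s.io/pause:3.7"})
	fmt.Println(miss, err)
}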
	I0116 03:24:42.450743  496516 ssh_runner.go:195] Run: crio config
	I0116 03:24:42.518850  496516 cni.go:84] Creating CNI manager for ""
	I0116 03:24:42.518880  496516 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 03:24:42.518918  496516 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0116 03:24:42.518947  496516 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.102 APIServerPort:8443 KubernetesVersion:v1.24.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-281181 NodeName:test-preload-281181 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.102"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.102 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0116 03:24:42.519134  496516 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.102
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-281181"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.102
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.102"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0116 03:24:42.519239  496516 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=test-preload-281181 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.102
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.4 ClusterName:test-preload-281181 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
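Note: the kubelet systemd drop-in and the config summary above are rendered from the cluster settings and then scp'd to the guest as 10-kubeadm.conf. An illustrative text/template sketch that produces the same shape of ExecStart line; the field names used here are illustrative, not minikube's:

package main

import (
	"os"
	"text/template"
)

const dropIn = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint={{.CRISocket}} --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(dropIn))
	t.Execute(os.Stdout, map[string]string{
		"KubernetesVersion": "v1.24.4",
		"CRISocket":         "unix:///var/run/crio/crio.sock",
		"NodeName":          "test-preload-281181",
		"NodeIP":            "192.168.39.102",
	})
}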
	I0116 03:24:42.519315  496516 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.4
	I0116 03:24:42.528320  496516 binaries.go:44] Found k8s binaries, skipping transfer
	I0116 03:24:42.528428  496516 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0116 03:24:42.536884  496516 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0116 03:24:42.553144  496516 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0116 03:24:42.569325  496516 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2106 bytes)
	I0116 03:24:42.586739  496516 ssh_runner.go:195] Run: grep 192.168.39.102	control-plane.minikube.internal$ /etc/hosts
	I0116 03:24:42.590880  496516 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.102	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 03:24:42.604295  496516 certs.go:56] Setting up /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/test-preload-281181 for IP: 192.168.39.102
	I0116 03:24:42.604337  496516 certs.go:190] acquiring lock for shared ca certs: {Name:mkab5aeea3e0ed69f3592dc8f418e07c6075b130 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:24:42.604532  496516 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17965-468241/.minikube/ca.key
	I0116 03:24:42.604591  496516 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17965-468241/.minikube/proxy-client-ca.key
	I0116 03:24:42.604679  496516 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/test-preload-281181/client.key
	I0116 03:24:42.604863  496516 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/test-preload-281181/apiserver.key.a5c55091
	I0116 03:24:42.604960  496516 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/test-preload-281181/proxy-client.key
	I0116 03:24:42.605114  496516 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/475478.pem (1338 bytes)
	W0116 03:24:42.605173  496516 certs.go:433] ignoring /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/475478_empty.pem, impossibly tiny 0 bytes
	I0116 03:24:42.605188  496516 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca-key.pem (1679 bytes)
	I0116 03:24:42.605232  496516 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem (1078 bytes)
	I0116 03:24:42.605288  496516 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/cert.pem (1123 bytes)
	I0116 03:24:42.605331  496516 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/key.pem (1679 bytes)
	I0116 03:24:42.605400  496516 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17965-468241/.minikube/files/etc/ssl/certs/4754782.pem (1708 bytes)
	I0116 03:24:42.606163  496516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/test-preload-281181/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0116 03:24:42.631095  496516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/test-preload-281181/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0116 03:24:42.655226  496516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/test-preload-281181/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0116 03:24:42.679702  496516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/test-preload-281181/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0116 03:24:42.704305  496516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0116 03:24:42.728473  496516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0116 03:24:42.752857  496516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0116 03:24:42.776749  496516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0116 03:24:42.800887  496516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/files/etc/ssl/certs/4754782.pem --> /usr/share/ca-certificates/4754782.pem (1708 bytes)
	I0116 03:24:42.824924  496516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0116 03:24:42.849516  496516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/certs/475478.pem --> /usr/share/ca-certificates/475478.pem (1338 bytes)
	I0116 03:24:42.873247  496516 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0116 03:24:42.891675  496516 ssh_runner.go:195] Run: openssl version
	I0116 03:24:42.897392  496516 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4754782.pem && ln -fs /usr/share/ca-certificates/4754782.pem /etc/ssl/certs/4754782.pem"
	I0116 03:24:42.907918  496516 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4754782.pem
	I0116 03:24:42.913029  496516 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 16 02:44 /usr/share/ca-certificates/4754782.pem
	I0116 03:24:42.913124  496516 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4754782.pem
	I0116 03:24:42.919047  496516 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4754782.pem /etc/ssl/certs/3ec20f2e.0"
	I0116 03:24:42.929944  496516 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0116 03:24:42.940133  496516 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0116 03:24:42.945321  496516 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 16 02:35 /usr/share/ca-certificates/minikubeCA.pem
	I0116 03:24:42.945404  496516 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0116 03:24:42.951308  496516 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0116 03:24:42.961300  496516 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/475478.pem && ln -fs /usr/share/ca-certificates/475478.pem /etc/ssl/certs/475478.pem"
	I0116 03:24:42.971330  496516 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/475478.pem
	I0116 03:24:42.976273  496516 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 16 02:44 /usr/share/ca-certificates/475478.pem
	I0116 03:24:42.976338  496516 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/475478.pem
	I0116 03:24:42.982272  496516 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/475478.pem /etc/ssl/certs/51391683.0"
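
Each CA certificate installed under /usr/share/ca-certificates is hashed with `openssl x509 -hash -noout` and symlinked as `<hash>.0` in /etc/ssl/certs so OpenSSL-based tools can find it by subject hash. A rough equivalent of one such step, shelling out to openssl (a hypothetical helper for illustration, not minikube's code):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash asks openssl for the certificate's subject hash and
// creates the /etc/ssl/certs/<hash>.0 symlink that the logged `ln -fs` creates.
func linkBySubjectHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // -f behaviour: replace an existing link if present
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```
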
	I0116 03:24:42.992502  496516 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0116 03:24:42.997307  496516 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0116 03:24:43.003799  496516 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0116 03:24:43.010104  496516 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0116 03:24:43.016227  496516 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0116 03:24:43.022257  496516 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0116 03:24:43.028204  496516 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
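
The `-checkend 86400` calls verify that none of the control-plane certificates expire within the next 24 hours. The same check can be done natively with crypto/x509 (a sketch, not the code the test runs):

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"errors"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM-encoded certificate at path expires
// within the given duration, mirroring `openssl x509 -checkend`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, errors.New("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}
```
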
	I0116 03:24:43.034113  496516 kubeadm.go:404] StartCluster: {Name:test-preload-281181 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-281181 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.102 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 03:24:43.034225  496516 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0116 03:24:43.034278  496516 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0116 03:24:43.070656  496516 cri.go:89] found id: ""
	I0116 03:24:43.070752  496516 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0116 03:24:43.079974  496516 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0116 03:24:43.080002  496516 kubeadm.go:636] restartCluster start
	I0116 03:24:43.080074  496516 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0116 03:24:43.088325  496516 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:24:43.088825  496516 kubeconfig.go:135] verify returned: extract IP: "test-preload-281181" does not appear in /home/jenkins/minikube-integration/17965-468241/kubeconfig
	I0116 03:24:43.088943  496516 kubeconfig.go:146] "test-preload-281181" context is missing from /home/jenkins/minikube-integration/17965-468241/kubeconfig - will repair!
	I0116 03:24:43.089533  496516 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17965-468241/kubeconfig: {Name:mkac3c63bbcba9b4fa1bea3480797757d703fb9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:24:43.090358  496516 kapi.go:59] client config for test-preload-281181: &rest.Config{Host:"https://192.168.39.102:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17965-468241/.minikube/profiles/test-preload-281181/client.crt", KeyFile:"/home/jenkins/minikube-integration/17965-468241/.minikube/profiles/test-preload-281181/client.key", CAFile:"/home/jenkins/minikube-integration/17965-468241/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c19be0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
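
The rest.Config above boils down to mutual TLS: a client certificate/key pair plus the cluster CA. A standalone sketch of an equivalent HTTPS client against the same endpoint (paths and IP from the log; plain net/http rather than client-go, for illustration):

```go
package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"net/http"
	"os"
)

func main() {
	profile := "/home/jenkins/minikube-integration/17965-468241/.minikube/profiles/test-preload-281181"
	cert, err := tls.LoadX509KeyPair(profile+"/client.crt", profile+"/client.key")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	caPEM, err := os.ReadFile("/home/jenkins/minikube-integration/17965-468241/.minikube/ca.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	pool := x509.NewCertPool()
	pool.AppendCertsFromPEM(caPEM)

	client := &http.Client{Transport: &http.Transport{TLSClientConfig: &tls.Config{
		Certificates: []tls.Certificate{cert}, // authenticate as the minikube client user
		RootCAs:      pool,                    // trust only the cluster CA
	}}}
	resp, err := client.Get("https://192.168.39.102:8443/version")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("apiserver responded:", resp.Status)
}
```
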
	I0116 03:24:43.091456  496516 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0116 03:24:43.099943  496516 api_server.go:166] Checking apiserver status ...
	I0116 03:24:43.100001  496516 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:24:43.110958  496516 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:24:43.600461  496516 api_server.go:166] Checking apiserver status ...
	I0116 03:24:43.600548  496516 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:24:43.612692  496516 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:24:44.100258  496516 api_server.go:166] Checking apiserver status ...
	I0116 03:24:44.100345  496516 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:24:44.112341  496516 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:24:44.601015  496516 api_server.go:166] Checking apiserver status ...
	I0116 03:24:44.601120  496516 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:24:44.613950  496516 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:24:45.100481  496516 api_server.go:166] Checking apiserver status ...
	I0116 03:24:45.100607  496516 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:24:45.113437  496516 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:24:45.600066  496516 api_server.go:166] Checking apiserver status ...
	I0116 03:24:45.600172  496516 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:24:45.612051  496516 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:24:46.100663  496516 api_server.go:166] Checking apiserver status ...
	I0116 03:24:46.100791  496516 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:24:46.113213  496516 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:24:46.600858  496516 api_server.go:166] Checking apiserver status ...
	I0116 03:24:46.600985  496516 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:24:46.613244  496516 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:24:47.100902  496516 api_server.go:166] Checking apiserver status ...
	I0116 03:24:47.101012  496516 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:24:47.112886  496516 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:24:47.600433  496516 api_server.go:166] Checking apiserver status ...
	I0116 03:24:47.600537  496516 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:24:47.612178  496516 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:24:48.100965  496516 api_server.go:166] Checking apiserver status ...
	I0116 03:24:48.101113  496516 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:24:48.113087  496516 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:24:48.600786  496516 api_server.go:166] Checking apiserver status ...
	I0116 03:24:48.600874  496516 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:24:48.612928  496516 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:24:49.100449  496516 api_server.go:166] Checking apiserver status ...
	I0116 03:24:49.100535  496516 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:24:49.112344  496516 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:24:49.600973  496516 api_server.go:166] Checking apiserver status ...
	I0116 03:24:49.601090  496516 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:24:49.613042  496516 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:24:50.100738  496516 api_server.go:166] Checking apiserver status ...
	I0116 03:24:50.100829  496516 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:24:50.112835  496516 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:24:50.600380  496516 api_server.go:166] Checking apiserver status ...
	I0116 03:24:50.600501  496516 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:24:50.612594  496516 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:24:51.100132  496516 api_server.go:166] Checking apiserver status ...
	I0116 03:24:51.100247  496516 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:24:51.112440  496516 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:24:51.600138  496516 api_server.go:166] Checking apiserver status ...
	I0116 03:24:51.600249  496516 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:24:51.613354  496516 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:24:52.100990  496516 api_server.go:166] Checking apiserver status ...
	I0116 03:24:52.101139  496516 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:24:52.114709  496516 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:24:52.600171  496516 api_server.go:166] Checking apiserver status ...
	I0116 03:24:52.600292  496516 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:24:52.612684  496516 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:24:53.100796  496516 api_server.go:166] Checking apiserver status ...
	I0116 03:24:53.100901  496516 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:24:53.112641  496516 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:24:53.112682  496516 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
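
The repeated pgrep checks above are a bounded poll: roughly every 500ms the runner looks for a kube-apiserver process, and after about 10 seconds the context deadline fires, so the restart path falls back to reconfiguring the cluster. A minimal sketch of that pattern with os/exec and a context deadline (illustrative, not minikube's internals):

```go
package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServerPID polls pgrep until a kube-apiserver process appears
// or the context deadline expires, like the loop in the log above.
func waitForAPIServerPID(ctx context.Context) (string, error) {
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	for {
		if out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output(); err == nil {
			return string(out), nil
		}
		select {
		case <-ctx.Done():
			return "", ctx.Err() // "context deadline exceeded"
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()
	pid, err := waitForAPIServerPID(ctx)
	if err != nil {
		fmt.Println("apiserver not running:", err)
		return
	}
	fmt.Println("apiserver pid:", pid)
}
```
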
	I0116 03:24:53.112705  496516 kubeadm.go:1135] stopping kube-system containers ...
	I0116 03:24:53.112718  496516 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0116 03:24:53.112791  496516 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0116 03:24:53.154693  496516 cri.go:89] found id: ""
	I0116 03:24:53.154782  496516 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0116 03:24:53.170743  496516 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0116 03:24:53.181865  496516 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0116 03:24:53.181935  496516 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0116 03:24:53.191278  496516 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0116 03:24:53.191312  496516 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:24:53.310598  496516 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:24:53.989770  496516 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:24:54.341496  496516 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:24:54.436710  496516 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
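
Rather than a full `kubeadm init`, the restart path runs the individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the staged kubeadm.yaml. Sketched as a loop over os/exec commands, assuming kubeadm and the config are already on the node as in the log:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	kubeadm := "/var/lib/minikube/binaries/v1.24.4/kubeadm"
	config := "/var/tmp/minikube/kubeadm.yaml"
	// Same phase order as the log: certs, kubeconfig, kubelet-start,
	// control-plane, etcd. Each phase is safe to re-run against existing state.
	phases := [][]string{
		{"init", "phase", "certs", "all", "--config", config},
		{"init", "phase", "kubeconfig", "all", "--config", config},
		{"init", "phase", "kubelet-start", "--config", config},
		{"init", "phase", "control-plane", "all", "--config", config},
		{"init", "phase", "etcd", "local", "--config", config},
	}
	for _, args := range phases {
		cmd := exec.Command(kubeadm, args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Fprintf(os.Stderr, "phase %v failed: %v\n", args, err)
			return
		}
	}
}
```
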
	I0116 03:24:54.516765  496516 api_server.go:52] waiting for apiserver process to appear ...
	I0116 03:24:54.516853  496516 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:24:55.017910  496516 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:24:55.517743  496516 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:24:56.017962  496516 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:24:56.517944  496516 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:24:57.016996  496516 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:24:57.038179  496516 api_server.go:72] duration metric: took 2.521414896s to wait for apiserver process to appear ...
	I0116 03:24:57.038209  496516 api_server.go:88] waiting for apiserver healthz status ...
	I0116 03:24:57.038233  496516 api_server.go:253] Checking apiserver healthz at https://192.168.39.102:8443/healthz ...
	I0116 03:25:01.876892  496516 api_server.go:279] https://192.168.39.102:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0116 03:25:01.876956  496516 api_server.go:103] status: https://192.168.39.102:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0116 03:25:01.876981  496516 api_server.go:253] Checking apiserver healthz at https://192.168.39.102:8443/healthz ...
	I0116 03:25:01.947039  496516 api_server.go:279] https://192.168.39.102:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0116 03:25:01.947080  496516 api_server.go:103] status: https://192.168.39.102:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0116 03:25:02.038316  496516 api_server.go:253] Checking apiserver healthz at https://192.168.39.102:8443/healthz ...
	I0116 03:25:02.056778  496516 api_server.go:279] https://192.168.39.102:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0116 03:25:02.056825  496516 api_server.go:103] status: https://192.168.39.102:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0116 03:25:02.538389  496516 api_server.go:253] Checking apiserver healthz at https://192.168.39.102:8443/healthz ...
	I0116 03:25:02.544631  496516 api_server.go:279] https://192.168.39.102:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0116 03:25:02.544671  496516 api_server.go:103] status: https://192.168.39.102:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0116 03:25:03.038709  496516 api_server.go:253] Checking apiserver healthz at https://192.168.39.102:8443/healthz ...
	I0116 03:25:03.045376  496516 api_server.go:279] https://192.168.39.102:8443/healthz returned 200:
	ok
	I0116 03:25:03.054134  496516 api_server.go:141] control plane version: v1.24.4
	I0116 03:25:03.054166  496516 api_server.go:131] duration metric: took 6.015949572s to wait for apiserver health ...
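
The health wait treats 403 (anonymous access not yet authorized while the RBAC bootstrap hooks are still running) and 500 (post-start hooks still failing) as "not ready yet" and keeps polling until /healthz returns 200 with body `ok`. A compressed sketch of that loop (InsecureSkipVerify is used purely for brevity here; the real client trusts the cluster CA):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"strings"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}, // brevity only
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.39.102:8443/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			// 403 and 500 mean "still starting"; only 200 + "ok" counts as healthy.
			if resp.StatusCode == http.StatusOK && strings.TrimSpace(string(body)) == "ok" {
				fmt.Println("apiserver healthy")
				return
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for /healthz")
}
```
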
	I0116 03:25:03.054179  496516 cni.go:84] Creating CNI manager for ""
	I0116 03:25:03.054187  496516 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 03:25:03.058135  496516 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0116 03:25:03.059966  496516 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0116 03:25:03.071263  496516 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
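
The 457 bytes copied to /etc/cni/net.d/1-k8s.conflist are the bridge CNI configuration announced just above. The exact file contents are not reproduced in the log; a representative bridge+portmap conflist of that shape, written the same way (illustrative content only, not the byte-for-byte file):

```go
package main

import (
	"fmt"
	"os"
)

// Representative bridge CNI config; the actual 1-k8s.conflist is not shown
// in the log, so treat this JSON as an illustration only.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isGateway": true,
      "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}`

func main() {
	if err := os.MkdirAll("/etc/cni/net.d", 0755); err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0644); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```
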
	I0116 03:25:03.118226  496516 system_pods.go:43] waiting for kube-system pods to appear ...
	I0116 03:25:03.128709  496516 system_pods.go:59] 7 kube-system pods found
	I0116 03:25:03.128751  496516 system_pods.go:61] "coredns-6d4b75cb6d-xmlxl" [b8b58db1-b7c3-4712-b34d-3bb3b260231e] Running
	I0116 03:25:03.128766  496516 system_pods.go:61] "etcd-test-preload-281181" [fbc5cacb-57f8-4e8b-83cd-2b44fa5241d9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0116 03:25:03.128776  496516 system_pods.go:61] "kube-apiserver-test-preload-281181" [401e5c9d-c3f0-4413-ae47-8d37ac5d3a19] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0116 03:25:03.128804  496516 system_pods.go:61] "kube-controller-manager-test-preload-281181" [7a6f36f2-c6a0-4c28-b6b4-63842a1c38b8] Running
	I0116 03:25:03.128811  496516 system_pods.go:61] "kube-proxy-tsn82" [44643e28-4e07-4551-bb4e-339b66ff612e] Running
	I0116 03:25:03.128816  496516 system_pods.go:61] "kube-scheduler-test-preload-281181" [59d3b6f4-c519-4903-8d4c-22b6523069e2] Running
	I0116 03:25:03.128823  496516 system_pods.go:61] "storage-provisioner" [ed27080f-9b99-4e19-9103-c2668a2821dd] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0116 03:25:03.128834  496516 system_pods.go:74] duration metric: took 10.581175ms to wait for pod list to return data ...
	I0116 03:25:03.128845  496516 node_conditions.go:102] verifying NodePressure condition ...
	I0116 03:25:03.140418  496516 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 03:25:03.140454  496516 node_conditions.go:123] node cpu capacity is 2
	I0116 03:25:03.140493  496516 node_conditions.go:105] duration metric: took 11.641346ms to run NodePressure ...
	I0116 03:25:03.140531  496516 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:25:03.455180  496516 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0116 03:25:03.460428  496516 kubeadm.go:787] kubelet initialised
	I0116 03:25:03.460451  496516 kubeadm.go:788] duration metric: took 5.238843ms waiting for restarted kubelet to initialise ...
	I0116 03:25:03.460459  496516 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 03:25:03.466071  496516 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-xmlxl" in "kube-system" namespace to be "Ready" ...
	I0116 03:25:03.473966  496516 pod_ready.go:97] node "test-preload-281181" hosting pod "coredns-6d4b75cb6d-xmlxl" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-281181" has status "Ready":"False"
	I0116 03:25:03.473999  496516 pod_ready.go:81] duration metric: took 7.899102ms waiting for pod "coredns-6d4b75cb6d-xmlxl" in "kube-system" namespace to be "Ready" ...
	E0116 03:25:03.474008  496516 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-281181" hosting pod "coredns-6d4b75cb6d-xmlxl" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-281181" has status "Ready":"False"
	I0116 03:25:03.474015  496516 pod_ready.go:78] waiting up to 4m0s for pod "etcd-test-preload-281181" in "kube-system" namespace to be "Ready" ...
	I0116 03:25:03.482383  496516 pod_ready.go:97] node "test-preload-281181" hosting pod "etcd-test-preload-281181" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-281181" has status "Ready":"False"
	I0116 03:25:03.482414  496516 pod_ready.go:81] duration metric: took 8.390425ms waiting for pod "etcd-test-preload-281181" in "kube-system" namespace to be "Ready" ...
	E0116 03:25:03.482426  496516 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-281181" hosting pod "etcd-test-preload-281181" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-281181" has status "Ready":"False"
	I0116 03:25:03.482433  496516 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-test-preload-281181" in "kube-system" namespace to be "Ready" ...
	I0116 03:25:03.491348  496516 pod_ready.go:97] node "test-preload-281181" hosting pod "kube-apiserver-test-preload-281181" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-281181" has status "Ready":"False"
	I0116 03:25:03.491383  496516 pod_ready.go:81] duration metric: took 8.93598ms waiting for pod "kube-apiserver-test-preload-281181" in "kube-system" namespace to be "Ready" ...
	E0116 03:25:03.491396  496516 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-281181" hosting pod "kube-apiserver-test-preload-281181" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-281181" has status "Ready":"False"
	I0116 03:25:03.491404  496516 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-test-preload-281181" in "kube-system" namespace to be "Ready" ...
	I0116 03:25:03.528203  496516 pod_ready.go:97] node "test-preload-281181" hosting pod "kube-controller-manager-test-preload-281181" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-281181" has status "Ready":"False"
	I0116 03:25:03.528245  496516 pod_ready.go:81] duration metric: took 36.825947ms waiting for pod "kube-controller-manager-test-preload-281181" in "kube-system" namespace to be "Ready" ...
	E0116 03:25:03.528258  496516 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-281181" hosting pod "kube-controller-manager-test-preload-281181" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-281181" has status "Ready":"False"
	I0116 03:25:03.528270  496516 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-tsn82" in "kube-system" namespace to be "Ready" ...
	I0116 03:25:03.925947  496516 pod_ready.go:97] node "test-preload-281181" hosting pod "kube-proxy-tsn82" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-281181" has status "Ready":"False"
	I0116 03:25:03.925982  496516 pod_ready.go:81] duration metric: took 397.698755ms waiting for pod "kube-proxy-tsn82" in "kube-system" namespace to be "Ready" ...
	E0116 03:25:03.925996  496516 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-281181" hosting pod "kube-proxy-tsn82" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-281181" has status "Ready":"False"
	I0116 03:25:03.926004  496516 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-test-preload-281181" in "kube-system" namespace to be "Ready" ...
	I0116 03:25:04.323966  496516 pod_ready.go:97] node "test-preload-281181" hosting pod "kube-scheduler-test-preload-281181" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-281181" has status "Ready":"False"
	I0116 03:25:04.324008  496516 pod_ready.go:81] duration metric: took 397.990319ms waiting for pod "kube-scheduler-test-preload-281181" in "kube-system" namespace to be "Ready" ...
	E0116 03:25:04.324021  496516 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-281181" hosting pod "kube-scheduler-test-preload-281181" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-281181" has status "Ready":"False"
	I0116 03:25:04.324030  496516 pod_ready.go:38] duration metric: took 863.561819ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
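
Every per-pod wait above is skipped because the node's Ready condition is still False immediately after the kubelet restart. Checking that condition with client-go looks roughly like this (kubeconfig path and node name from the log; a sketch, not the test's own helper):

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeIsReady reports whether the node's Ready condition is True.
func nodeIsReady(ctx context.Context, cs kubernetes.Interface, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/17965-468241/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ready, err := nodeIsReady(context.Background(), cs, "test-preload-281181")
	fmt.Println("node Ready:", ready, "err:", err)
}
```
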
	I0116 03:25:04.324118  496516 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0116 03:25:04.337204  496516 ops.go:34] apiserver oom_adj: -16
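
The oom_adj read confirms the restarted apiserver is shielded from the OOM killer (-16 here). Reading the same value directly, mirroring the logged `cat /proc/$(pgrep kube-apiserver)/oom_adj` one-liner (sketch):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// Find the newest kube-apiserver process, then read its oom_adj.
	out, err := exec.Command("pgrep", "-n", "kube-apiserver").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, "no kube-apiserver process:", err)
		return
	}
	pid := strings.TrimSpace(string(out))
	adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Printf("kube-apiserver oom_adj: %s", adj)
}
```
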
	I0116 03:25:04.337230  496516 kubeadm.go:640] restartCluster took 21.257220166s
	I0116 03:25:04.337242  496516 kubeadm.go:406] StartCluster complete in 21.303134873s
	I0116 03:25:04.337266  496516 settings.go:142] acquiring lock: {Name:mk3ded99c06d285b174e76622bb0741c8dd2d8fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:25:04.337378  496516 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17965-468241/kubeconfig
	I0116 03:25:04.338287  496516 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17965-468241/kubeconfig: {Name:mkac3c63bbcba9b4fa1bea3480797757d703fb9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:25:04.338536  496516 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0116 03:25:04.338614  496516 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0116 03:25:04.338701  496516 addons.go:69] Setting storage-provisioner=true in profile "test-preload-281181"
	I0116 03:25:04.338721  496516 addons.go:69] Setting default-storageclass=true in profile "test-preload-281181"
	I0116 03:25:04.338723  496516 addons.go:234] Setting addon storage-provisioner=true in "test-preload-281181"
	W0116 03:25:04.338734  496516 addons.go:243] addon storage-provisioner should already be in state true
	I0116 03:25:04.338738  496516 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-281181"
	I0116 03:25:04.338784  496516 host.go:66] Checking if "test-preload-281181" exists ...
	I0116 03:25:04.338796  496516 config.go:182] Loaded profile config "test-preload-281181": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0116 03:25:04.339080  496516 kapi.go:59] client config for test-preload-281181: &rest.Config{Host:"https://192.168.39.102:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17965-468241/.minikube/profiles/test-preload-281181/client.crt", KeyFile:"/home/jenkins/minikube-integration/17965-468241/.minikube/profiles/test-preload-281181/client.key", CAFile:"/home/jenkins/minikube-integration/17965-468241/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c19be0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0116 03:25:04.339200  496516 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:25:04.339200  496516 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:25:04.339252  496516 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:25:04.339333  496516 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:25:04.345277  496516 kapi.go:248] "coredns" deployment in "kube-system" namespace and "test-preload-281181" context rescaled to 1 replicas
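
Rescaling the coredns deployment to a single replica avoids a permanently pending second replica on this single-node cluster. With client-go that is a one-field update (a sketch; the kapi helper in the log performs the equivalent):

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// scaleCoreDNS sets the coredns deployment in kube-system to the given replica count.
func scaleCoreDNS(ctx context.Context, cs kubernetes.Interface, replicas int32) error {
	deployments := cs.AppsV1().Deployments("kube-system")
	dep, err := deployments.Get(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		return err
	}
	dep.Spec.Replicas = &replicas
	_, err = deployments.Update(ctx, dep, metav1.UpdateOptions{})
	return err
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/17965-468241/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Println(scaleCoreDNS(context.Background(), cs, 1))
}
```
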
	I0116 03:25:04.345338  496516 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.102 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0116 03:25:04.347913  496516 out.go:177] * Verifying Kubernetes components...
	I0116 03:25:04.349726  496516 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 03:25:04.355191  496516 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44567
	I0116 03:25:04.355755  496516 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:25:04.356366  496516 main.go:141] libmachine: Using API Version  1
	I0116 03:25:04.356402  496516 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:25:04.356880  496516 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:25:04.357135  496516 main.go:141] libmachine: (test-preload-281181) Calling .GetState
	I0116 03:25:04.357192  496516 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33893
	I0116 03:25:04.357584  496516 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:25:04.358043  496516 main.go:141] libmachine: Using API Version  1
	I0116 03:25:04.358069  496516 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:25:04.358480  496516 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:25:04.359165  496516 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:25:04.359213  496516 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:25:04.360198  496516 kapi.go:59] client config for test-preload-281181: &rest.Config{Host:"https://192.168.39.102:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17965-468241/.minikube/profiles/test-preload-281181/client.crt", KeyFile:"/home/jenkins/minikube-integration/17965-468241/.minikube/profiles/test-preload-281181/client.key", CAFile:"/home/jenkins/minikube-integration/17965-468241/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c19be0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0116 03:25:04.360554  496516 addons.go:234] Setting addon default-storageclass=true in "test-preload-281181"
	W0116 03:25:04.360576  496516 addons.go:243] addon default-storageclass should already be in state true
	I0116 03:25:04.360608  496516 host.go:66] Checking if "test-preload-281181" exists ...
	I0116 03:25:04.361026  496516 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:25:04.361068  496516 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:25:04.374955  496516 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38305
	I0116 03:25:04.375472  496516 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:25:04.376127  496516 main.go:141] libmachine: Using API Version  1
	I0116 03:25:04.376167  496516 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:25:04.376543  496516 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:25:04.376859  496516 main.go:141] libmachine: (test-preload-281181) Calling .GetState
	I0116 03:25:04.377759  496516 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36073
	I0116 03:25:04.378195  496516 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:25:04.378689  496516 main.go:141] libmachine: Using API Version  1
	I0116 03:25:04.378719  496516 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:25:04.378862  496516 main.go:141] libmachine: (test-preload-281181) Calling .DriverName
	I0116 03:25:04.381530  496516 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 03:25:04.379171  496516 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:25:04.382171  496516 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:25:04.383481  496516 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0116 03:25:04.383498  496516 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0116 03:25:04.383529  496516 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:25:04.383612  496516 main.go:141] libmachine: (test-preload-281181) Calling .GetSSHHostname
	I0116 03:25:04.387380  496516 main.go:141] libmachine: (test-preload-281181) DBG | domain test-preload-281181 has defined MAC address 52:54:00:66:33:72 in network mk-test-preload-281181
	I0116 03:25:04.387680  496516 main.go:141] libmachine: (test-preload-281181) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:33:72", ip: ""} in network mk-test-preload-281181: {Iface:virbr1 ExpiryTime:2024-01-16 04:22:16 +0000 UTC Type:0 Mac:52:54:00:66:33:72 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:test-preload-281181 Clientid:01:52:54:00:66:33:72}
	I0116 03:25:04.387704  496516 main.go:141] libmachine: (test-preload-281181) DBG | domain test-preload-281181 has defined IP address 192.168.39.102 and MAC address 52:54:00:66:33:72 in network mk-test-preload-281181
	I0116 03:25:04.388003  496516 main.go:141] libmachine: (test-preload-281181) Calling .GetSSHPort
	I0116 03:25:04.388253  496516 main.go:141] libmachine: (test-preload-281181) Calling .GetSSHKeyPath
	I0116 03:25:04.388455  496516 main.go:141] libmachine: (test-preload-281181) Calling .GetSSHUsername
	I0116 03:25:04.388678  496516 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/test-preload-281181/id_rsa Username:docker}
	I0116 03:25:04.402747  496516 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39489
	I0116 03:25:04.403225  496516 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:25:04.403747  496516 main.go:141] libmachine: Using API Version  1
	I0116 03:25:04.403773  496516 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:25:04.404290  496516 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:25:04.404544  496516 main.go:141] libmachine: (test-preload-281181) Calling .GetState
	I0116 03:25:04.406409  496516 main.go:141] libmachine: (test-preload-281181) Calling .DriverName
	I0116 03:25:04.406740  496516 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0116 03:25:04.406758  496516 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0116 03:25:04.406784  496516 main.go:141] libmachine: (test-preload-281181) Calling .GetSSHHostname
	I0116 03:25:04.409416  496516 main.go:141] libmachine: (test-preload-281181) DBG | domain test-preload-281181 has defined MAC address 52:54:00:66:33:72 in network mk-test-preload-281181
	I0116 03:25:04.409875  496516 main.go:141] libmachine: (test-preload-281181) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:33:72", ip: ""} in network mk-test-preload-281181: {Iface:virbr1 ExpiryTime:2024-01-16 04:22:16 +0000 UTC Type:0 Mac:52:54:00:66:33:72 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:test-preload-281181 Clientid:01:52:54:00:66:33:72}
	I0116 03:25:04.409904  496516 main.go:141] libmachine: (test-preload-281181) DBG | domain test-preload-281181 has defined IP address 192.168.39.102 and MAC address 52:54:00:66:33:72 in network mk-test-preload-281181
	I0116 03:25:04.410094  496516 main.go:141] libmachine: (test-preload-281181) Calling .GetSSHPort
	I0116 03:25:04.410299  496516 main.go:141] libmachine: (test-preload-281181) Calling .GetSSHKeyPath
	I0116 03:25:04.410456  496516 main.go:141] libmachine: (test-preload-281181) Calling .GetSSHUsername
	I0116 03:25:04.410638  496516 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/test-preload-281181/id_rsa Username:docker}
	I0116 03:25:04.507388  496516 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0116 03:25:04.599231  496516 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
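
Both addons are applied by running the in-VM kubectl against /var/lib/minikube/kubeconfig, exactly as the two commands above show. A trimmed sketch of that step with os/exec (executed over SSH in the real flow):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	kubectl := "/var/lib/minikube/binaries/v1.24.4/kubectl"
	manifests := []string{
		"/etc/kubernetes/addons/storage-provisioner.yaml",
		"/etc/kubernetes/addons/storageclass.yaml",
	}
	for _, m := range manifests {
		// sudo KUBECONFIG=... kubectl apply -f <manifest>, as in the log.
		cmd := exec.Command("sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig", kubectl, "apply", "-f", m)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Fprintf(os.Stderr, "apply %s failed: %v\n", m, err)
		}
	}
}
```
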
	I0116 03:25:04.607586  496516 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0116 03:25:04.607606  496516 node_ready.go:35] waiting up to 6m0s for node "test-preload-281181" to be "Ready" ...
	I0116 03:25:05.647563  496516 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.140127395s)
	I0116 03:25:05.647628  496516 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.048366399s)
	I0116 03:25:05.647634  496516 main.go:141] libmachine: Making call to close driver server
	I0116 03:25:05.647647  496516 main.go:141] libmachine: (test-preload-281181) Calling .Close
	I0116 03:25:05.647654  496516 main.go:141] libmachine: Making call to close driver server
	I0116 03:25:05.647664  496516 main.go:141] libmachine: (test-preload-281181) Calling .Close
	I0116 03:25:05.648085  496516 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:25:05.648087  496516 main.go:141] libmachine: (test-preload-281181) DBG | Closing plugin on server side
	I0116 03:25:05.648142  496516 main.go:141] libmachine: (test-preload-281181) DBG | Closing plugin on server side
	I0116 03:25:05.648164  496516 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:25:05.648178  496516 main.go:141] libmachine: Making call to close driver server
	I0116 03:25:05.648190  496516 main.go:141] libmachine: (test-preload-281181) Calling .Close
	I0116 03:25:05.648249  496516 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:25:05.648268  496516 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:25:05.648283  496516 main.go:141] libmachine: Making call to close driver server
	I0116 03:25:05.648293  496516 main.go:141] libmachine: (test-preload-281181) Calling .Close
	I0116 03:25:05.648423  496516 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:25:05.648441  496516 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:25:05.648531  496516 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:25:05.648546  496516 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:25:05.648549  496516 main.go:141] libmachine: (test-preload-281181) DBG | Closing plugin on server side
	I0116 03:25:05.659073  496516 main.go:141] libmachine: Making call to close driver server
	I0116 03:25:05.659101  496516 main.go:141] libmachine: (test-preload-281181) Calling .Close
	I0116 03:25:05.659451  496516 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:25:05.659474  496516 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:25:05.659508  496516 main.go:141] libmachine: (test-preload-281181) DBG | Closing plugin on server side
	I0116 03:25:05.663108  496516 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0116 03:25:05.664713  496516 addons.go:505] enable addons completed in 1.326107939s: enabled=[storage-provisioner default-storageclass]
	I0116 03:25:06.612988  496516 node_ready.go:58] node "test-preload-281181" has status "Ready":"False"
	I0116 03:25:09.111718  496516 node_ready.go:58] node "test-preload-281181" has status "Ready":"False"
	I0116 03:25:11.113027  496516 node_ready.go:58] node "test-preload-281181" has status "Ready":"False"
	I0116 03:25:12.612496  496516 node_ready.go:49] node "test-preload-281181" has status "Ready":"True"
	I0116 03:25:12.612529  496516 node_ready.go:38] duration metric: took 8.004890794s waiting for node "test-preload-281181" to be "Ready" ...
	I0116 03:25:12.612542  496516 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 03:25:12.618885  496516 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6d4b75cb6d-xmlxl" in "kube-system" namespace to be "Ready" ...
	I0116 03:25:12.625615  496516 pod_ready.go:92] pod "coredns-6d4b75cb6d-xmlxl" in "kube-system" namespace has status "Ready":"True"
	I0116 03:25:12.625653  496516 pod_ready.go:81] duration metric: took 6.725264ms waiting for pod "coredns-6d4b75cb6d-xmlxl" in "kube-system" namespace to be "Ready" ...
	I0116 03:25:12.625664  496516 pod_ready.go:78] waiting up to 6m0s for pod "etcd-test-preload-281181" in "kube-system" namespace to be "Ready" ...
	I0116 03:25:13.133033  496516 pod_ready.go:92] pod "etcd-test-preload-281181" in "kube-system" namespace has status "Ready":"True"
	I0116 03:25:13.133064  496516 pod_ready.go:81] duration metric: took 507.393936ms waiting for pod "etcd-test-preload-281181" in "kube-system" namespace to be "Ready" ...
	I0116 03:25:13.133074  496516 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-test-preload-281181" in "kube-system" namespace to be "Ready" ...
	I0116 03:25:13.139998  496516 pod_ready.go:92] pod "kube-apiserver-test-preload-281181" in "kube-system" namespace has status "Ready":"True"
	I0116 03:25:13.140026  496516 pod_ready.go:81] duration metric: took 6.944314ms waiting for pod "kube-apiserver-test-preload-281181" in "kube-system" namespace to be "Ready" ...
	I0116 03:25:13.140054  496516 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-test-preload-281181" in "kube-system" namespace to be "Ready" ...
	I0116 03:25:13.145974  496516 pod_ready.go:92] pod "kube-controller-manager-test-preload-281181" in "kube-system" namespace has status "Ready":"True"
	I0116 03:25:13.146007  496516 pod_ready.go:81] duration metric: took 5.941151ms waiting for pod "kube-controller-manager-test-preload-281181" in "kube-system" namespace to be "Ready" ...
	I0116 03:25:13.146021  496516 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tsn82" in "kube-system" namespace to be "Ready" ...
	I0116 03:25:13.413246  496516 pod_ready.go:92] pod "kube-proxy-tsn82" in "kube-system" namespace has status "Ready":"True"
	I0116 03:25:13.413279  496516 pod_ready.go:81] duration metric: took 267.244821ms waiting for pod "kube-proxy-tsn82" in "kube-system" namespace to be "Ready" ...
	I0116 03:25:13.413289  496516 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-test-preload-281181" in "kube-system" namespace to be "Ready" ...
	I0116 03:25:15.420343  496516 pod_ready.go:102] pod "kube-scheduler-test-preload-281181" in "kube-system" namespace has status "Ready":"False"
	I0116 03:25:16.920207  496516 pod_ready.go:92] pod "kube-scheduler-test-preload-281181" in "kube-system" namespace has status "Ready":"True"
	I0116 03:25:16.920235  496516 pod_ready.go:81] duration metric: took 3.506939003s waiting for pod "kube-scheduler-test-preload-281181" in "kube-system" namespace to be "Ready" ...
	I0116 03:25:16.920246  496516 pod_ready.go:38] duration metric: took 4.307692601s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 03:25:16.920263  496516 api_server.go:52] waiting for apiserver process to appear ...
	I0116 03:25:16.920369  496516 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:25:16.934359  496516 api_server.go:72] duration metric: took 12.588979557s to wait for apiserver process to appear ...
	I0116 03:25:16.934395  496516 api_server.go:88] waiting for apiserver healthz status ...
	I0116 03:25:16.934420  496516 api_server.go:253] Checking apiserver healthz at https://192.168.39.102:8443/healthz ...
	I0116 03:25:16.940423  496516 api_server.go:279] https://192.168.39.102:8443/healthz returned 200:
	ok
	I0116 03:25:16.941300  496516 api_server.go:141] control plane version: v1.24.4
	I0116 03:25:16.941323  496516 api_server.go:131] duration metric: took 6.921668ms to wait for apiserver health ...
	I0116 03:25:16.941333  496516 system_pods.go:43] waiting for kube-system pods to appear ...
	I0116 03:25:16.946221  496516 system_pods.go:59] 7 kube-system pods found
	I0116 03:25:16.946251  496516 system_pods.go:61] "coredns-6d4b75cb6d-xmlxl" [b8b58db1-b7c3-4712-b34d-3bb3b260231e] Running
	I0116 03:25:16.946256  496516 system_pods.go:61] "etcd-test-preload-281181" [fbc5cacb-57f8-4e8b-83cd-2b44fa5241d9] Running
	I0116 03:25:16.946260  496516 system_pods.go:61] "kube-apiserver-test-preload-281181" [401e5c9d-c3f0-4413-ae47-8d37ac5d3a19] Running
	I0116 03:25:16.946264  496516 system_pods.go:61] "kube-controller-manager-test-preload-281181" [7a6f36f2-c6a0-4c28-b6b4-63842a1c38b8] Running
	I0116 03:25:16.946268  496516 system_pods.go:61] "kube-proxy-tsn82" [44643e28-4e07-4551-bb4e-339b66ff612e] Running
	I0116 03:25:16.946273  496516 system_pods.go:61] "kube-scheduler-test-preload-281181" [59d3b6f4-c519-4903-8d4c-22b6523069e2] Running
	I0116 03:25:16.946279  496516 system_pods.go:61] "storage-provisioner" [ed27080f-9b99-4e19-9103-c2668a2821dd] Running
	I0116 03:25:16.946286  496516 system_pods.go:74] duration metric: took 4.946085ms to wait for pod list to return data ...
	I0116 03:25:16.946300  496516 default_sa.go:34] waiting for default service account to be created ...
	I0116 03:25:17.013239  496516 default_sa.go:45] found service account: "default"
	I0116 03:25:17.013269  496516 default_sa.go:55] duration metric: took 66.961652ms for default service account to be created ...
	I0116 03:25:17.013279  496516 system_pods.go:116] waiting for k8s-apps to be running ...
	I0116 03:25:17.216749  496516 system_pods.go:86] 7 kube-system pods found
	I0116 03:25:17.216780  496516 system_pods.go:89] "coredns-6d4b75cb6d-xmlxl" [b8b58db1-b7c3-4712-b34d-3bb3b260231e] Running
	I0116 03:25:17.216786  496516 system_pods.go:89] "etcd-test-preload-281181" [fbc5cacb-57f8-4e8b-83cd-2b44fa5241d9] Running
	I0116 03:25:17.216790  496516 system_pods.go:89] "kube-apiserver-test-preload-281181" [401e5c9d-c3f0-4413-ae47-8d37ac5d3a19] Running
	I0116 03:25:17.216795  496516 system_pods.go:89] "kube-controller-manager-test-preload-281181" [7a6f36f2-c6a0-4c28-b6b4-63842a1c38b8] Running
	I0116 03:25:17.216801  496516 system_pods.go:89] "kube-proxy-tsn82" [44643e28-4e07-4551-bb4e-339b66ff612e] Running
	I0116 03:25:17.216806  496516 system_pods.go:89] "kube-scheduler-test-preload-281181" [59d3b6f4-c519-4903-8d4c-22b6523069e2] Running
	I0116 03:25:17.216810  496516 system_pods.go:89] "storage-provisioner" [ed27080f-9b99-4e19-9103-c2668a2821dd] Running
	I0116 03:25:17.216818  496516 system_pods.go:126] duration metric: took 203.534241ms to wait for k8s-apps to be running ...
	I0116 03:25:17.216828  496516 system_svc.go:44] waiting for kubelet service to be running ....
	I0116 03:25:17.216880  496516 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 03:25:17.231636  496516 system_svc.go:56] duration metric: took 14.794509ms WaitForService to wait for kubelet.
	I0116 03:25:17.231691  496516 kubeadm.go:581] duration metric: took 12.886319126s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0116 03:25:17.231724  496516 node_conditions.go:102] verifying NodePressure condition ...
	I0116 03:25:17.412408  496516 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 03:25:17.412443  496516 node_conditions.go:123] node cpu capacity is 2
	I0116 03:25:17.412456  496516 node_conditions.go:105] duration metric: took 180.726664ms to run NodePressure ...
	I0116 03:25:17.412467  496516 start.go:228] waiting for startup goroutines ...
	I0116 03:25:17.412473  496516 start.go:233] waiting for cluster config update ...
	I0116 03:25:17.412483  496516 start.go:242] writing updated cluster config ...
	I0116 03:25:17.412751  496516 ssh_runner.go:195] Run: rm -f paused
	I0116 03:25:17.465648  496516 start.go:600] kubectl: 1.29.0, cluster: 1.24.4 (minor skew: 5)
	I0116 03:25:17.468379  496516 out.go:177] 
	W0116 03:25:17.470323  496516 out.go:239] ! /usr/local/bin/kubectl is version 1.29.0, which may have incompatibilities with Kubernetes 1.24.4.
	I0116 03:25:17.471951  496516 out.go:177]   - Want kubectl v1.24.4? Try 'minikube kubectl -- get pods -A'
	I0116 03:25:17.473489  496516 out.go:177] * Done! kubectl is now configured to use "test-preload-281181" cluster and "default" namespace by default
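	The readiness gates logged above (node Ready, system pods Ready, apiserver process, healthz) can be spot-checked by hand against the same profile. A minimal sketch, assuming the kubectl context test-preload-281181 created by this run and the apiserver address 192.168.39.102:8443 reported in the log; these commands are illustrative and were not part of the test run:

	  # wait for the node to report Ready (the log waited ~8s for this condition)
	  kubectl --context test-preload-281181 wait --for=condition=Ready node/test-preload-281181 --timeout=2m

	  # confirm the kube-apiserver process is up inside the VM, mirroring the pgrep the log runs
	  out/minikube-linux-amd64 -p test-preload-281181 ssh -- sudo pgrep -xnf 'kube-apiserver.*minikube.*'

	  # probe the healthz endpoint the log polls; it should print "ok"
	  curl -sk https://192.168.39.102:8443/healthz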
	
	
	==> CRI-O <==
	-- Journal begins at Tue 2024-01-16 03:24:19 UTC, ends at Tue 2024-01-16 03:25:18 UTC. --
	Jan 16 03:25:18 test-preload-281181 crio[698]: time="2024-01-16 03:25:18.535457673Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=b45bbb8d-a345-45bf-9d07-ea8e25513a97 name=/runtime.v1.RuntimeService/Version
	Jan 16 03:25:18 test-preload-281181 crio[698]: time="2024-01-16 03:25:18.536875159Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=5800aee9-a2f3-44a4-9a1a-0acc004af576 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 03:25:18 test-preload-281181 crio[698]: time="2024-01-16 03:25:18.537401547Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705375518537387725,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:77,},},},}" file="go-grpc-middleware/chain.go:25" id=5800aee9-a2f3-44a4-9a1a-0acc004af576 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 03:25:18 test-preload-281181 crio[698]: time="2024-01-16 03:25:18.538289924Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=9bacbe7f-2161-466d-9b3e-35e920ee40f6 name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 03:25:18 test-preload-281181 crio[698]: time="2024-01-16 03:25:18.538360565Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=9bacbe7f-2161-466d-9b3e-35e920ee40f6 name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 03:25:18 test-preload-281181 crio[698]: time="2024-01-16 03:25:18.538532803Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:dad57965937261b155c9d9c8e8022e21cf89ed13605e910f042002707d44aac4,PodSandboxId:4a122c746631620910156883f1740ba10c499cfc897e451cfc2427a0999aa6b1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns/coredns@sha256:5b6ec0d6de9baaf3e92d0f66cd96a25b9edbce8716f5f15dcd1a616b3abd590e,State:CONTAINER_RUNNING,CreatedAt:1705375506831982707,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-xmlxl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8b58db1-b7c3-4712-b34d-3bb3b260231e,},Annotations:map[string]string{io.kubernetes.container.hash: 83ed866,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37c30b77651418cd3deb88f7ca6bd15031c94048fac5312f7e740edc6f451646,PodSandboxId:dc4d0ad8682b8bde8305f16a3558b2d81dca7fe5d8fa63c0fe1fed9b4ad3d390,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705375504664281597,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespa
ce: kube-system,io.kubernetes.pod.uid: ed27080f-9b99-4e19-9103-c2668a2821dd,},Annotations:map[string]string{io.kubernetes.container.hash: 43e8334,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed56790f6c2343cd2de6f1e5d907fec31abf91bfb9374a7bcdf1646f3c16712e,PodSandboxId:dc4d0ad8682b8bde8305f16a3558b2d81dca7fe5d8fa63c0fe1fed9b4ad3d390,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1705375503933656407,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: ed27080f-9b99-4e19-9103-c2668a2821dd,},Annotations:map[string]string{io.kubernetes.container.hash: 43e8334,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d071a43ed7c1deba833cc80b7b55408d8195432b3a1bc27ebbdc38e9dc7db74,PodSandboxId:ec14f84c5c95a9496a9a62cb1441b98a31e57e7b63079341880ded10d3f0126d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:64a04a34b31fdf10b4c7fe9ff006dab818489a318115cfb284010d04e2888386,State:CONTAINER_RUNNING,CreatedAt:1705375503659985669,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tsn82,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44643
e28-4e07-4551-bb4e-339b66ff612e,},Annotations:map[string]string{io.kubernetes.container.hash: f6ed73b8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7ce9af61596ca4fee2fc85f7203ea939d2364132c293d4534e9e07f5f9b3432,PodSandboxId:034ee43ef3710f80929977914a17d44af0b8c833a58269fd5feb69d99fb1960e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:13f53ed1d91e2e11aac476ee9a0269fdda6cc4874eba903efd40daf50c55eee5,State:CONTAINER_RUNNING,CreatedAt:1705375496213247332,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-281181,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85e8ae3e0fca2b109e954f9e83d56e0f,},Annotations:map[stri
ng]string{io.kubernetes.container.hash: 1cc4fd5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23db95503f362a5fd9be0e7c470a2c2be1c85f26a60fe10ab3218b05e6be5bb5,PodSandboxId:c926b9479efa1c4b735b43b96665f5ab9630800d6928777326b38be901371898,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:77905fafb19047cc426efb6062757e244dc1615a93b0915792f598642229d891,State:CONTAINER_RUNNING,CreatedAt:1705375495938297351,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-281181,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1085cd8619781808be4719f4dd2659c,
},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dddefaeb67c06f5d7437b770d9816c3ca246256f6d15e961c997c1d7e22cb627,PodSandboxId:5edd8ff49c7daa3302d4acca8fbf53bc7e40af3b08014858a17ce62b83f271b6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:378509dd1111937ca2791cf4c4814bc0647714e2ab2f4fc15396707ad1a987a2,State:CONTAINER_RUNNING,CreatedAt:1705375495701539420,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-281181,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad0a8b424ba2d12a8bd1f29b90eb828c,},Annotations:
map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8daaf79763554fb2b395201588ed594570e03a5823b40431b837268c14b853f,PodSandboxId:df4d473399bc44856f3369df4a88801f6694199ba8da6f414c0b3bdda5c3245b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:4b6a3a220cb91e496cff56f267968c4bbb19d8593c21137d99b9bc735cb64857,State:CONTAINER_RUNNING,CreatedAt:1705375495775045754,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-281181,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3358f12c7f62e24f96e4d92e880e2c0,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 2d446c3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=9bacbe7f-2161-466d-9b3e-35e920ee40f6 name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 03:25:18 test-preload-281181 crio[698]: time="2024-01-16 03:25:18.579811178Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=46cc6648-7275-4963-8735-72448cb2896c name=/runtime.v1.RuntimeService/Version
	Jan 16 03:25:18 test-preload-281181 crio[698]: time="2024-01-16 03:25:18.579870909Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=46cc6648-7275-4963-8735-72448cb2896c name=/runtime.v1.RuntimeService/Version
	Jan 16 03:25:18 test-preload-281181 crio[698]: time="2024-01-16 03:25:18.581296673Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=59599ad5-2dc9-4953-bb0b-fd0665ac2461 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 03:25:18 test-preload-281181 crio[698]: time="2024-01-16 03:25:18.581729664Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705375518581715208,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:77,},},},}" file="go-grpc-middleware/chain.go:25" id=59599ad5-2dc9-4953-bb0b-fd0665ac2461 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 03:25:18 test-preload-281181 crio[698]: time="2024-01-16 03:25:18.582472958Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=fd942876-85fb-4c64-92ea-2d94ead73e64 name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 03:25:18 test-preload-281181 crio[698]: time="2024-01-16 03:25:18.582533814Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=fd942876-85fb-4c64-92ea-2d94ead73e64 name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 03:25:18 test-preload-281181 crio[698]: time="2024-01-16 03:25:18.582700806Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:dad57965937261b155c9d9c8e8022e21cf89ed13605e910f042002707d44aac4,PodSandboxId:4a122c746631620910156883f1740ba10c499cfc897e451cfc2427a0999aa6b1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns/coredns@sha256:5b6ec0d6de9baaf3e92d0f66cd96a25b9edbce8716f5f15dcd1a616b3abd590e,State:CONTAINER_RUNNING,CreatedAt:1705375506831982707,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-xmlxl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8b58db1-b7c3-4712-b34d-3bb3b260231e,},Annotations:map[string]string{io.kubernetes.container.hash: 83ed866,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37c30b77651418cd3deb88f7ca6bd15031c94048fac5312f7e740edc6f451646,PodSandboxId:dc4d0ad8682b8bde8305f16a3558b2d81dca7fe5d8fa63c0fe1fed9b4ad3d390,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705375504664281597,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespa
ce: kube-system,io.kubernetes.pod.uid: ed27080f-9b99-4e19-9103-c2668a2821dd,},Annotations:map[string]string{io.kubernetes.container.hash: 43e8334,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed56790f6c2343cd2de6f1e5d907fec31abf91bfb9374a7bcdf1646f3c16712e,PodSandboxId:dc4d0ad8682b8bde8305f16a3558b2d81dca7fe5d8fa63c0fe1fed9b4ad3d390,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1705375503933656407,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: ed27080f-9b99-4e19-9103-c2668a2821dd,},Annotations:map[string]string{io.kubernetes.container.hash: 43e8334,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d071a43ed7c1deba833cc80b7b55408d8195432b3a1bc27ebbdc38e9dc7db74,PodSandboxId:ec14f84c5c95a9496a9a62cb1441b98a31e57e7b63079341880ded10d3f0126d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:64a04a34b31fdf10b4c7fe9ff006dab818489a318115cfb284010d04e2888386,State:CONTAINER_RUNNING,CreatedAt:1705375503659985669,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tsn82,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44643
e28-4e07-4551-bb4e-339b66ff612e,},Annotations:map[string]string{io.kubernetes.container.hash: f6ed73b8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7ce9af61596ca4fee2fc85f7203ea939d2364132c293d4534e9e07f5f9b3432,PodSandboxId:034ee43ef3710f80929977914a17d44af0b8c833a58269fd5feb69d99fb1960e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:13f53ed1d91e2e11aac476ee9a0269fdda6cc4874eba903efd40daf50c55eee5,State:CONTAINER_RUNNING,CreatedAt:1705375496213247332,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-281181,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85e8ae3e0fca2b109e954f9e83d56e0f,},Annotations:map[stri
ng]string{io.kubernetes.container.hash: 1cc4fd5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23db95503f362a5fd9be0e7c470a2c2be1c85f26a60fe10ab3218b05e6be5bb5,PodSandboxId:c926b9479efa1c4b735b43b96665f5ab9630800d6928777326b38be901371898,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:77905fafb19047cc426efb6062757e244dc1615a93b0915792f598642229d891,State:CONTAINER_RUNNING,CreatedAt:1705375495938297351,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-281181,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1085cd8619781808be4719f4dd2659c,
},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dddefaeb67c06f5d7437b770d9816c3ca246256f6d15e961c997c1d7e22cb627,PodSandboxId:5edd8ff49c7daa3302d4acca8fbf53bc7e40af3b08014858a17ce62b83f271b6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:378509dd1111937ca2791cf4c4814bc0647714e2ab2f4fc15396707ad1a987a2,State:CONTAINER_RUNNING,CreatedAt:1705375495701539420,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-281181,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad0a8b424ba2d12a8bd1f29b90eb828c,},Annotations:
map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8daaf79763554fb2b395201588ed594570e03a5823b40431b837268c14b853f,PodSandboxId:df4d473399bc44856f3369df4a88801f6694199ba8da6f414c0b3bdda5c3245b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:4b6a3a220cb91e496cff56f267968c4bbb19d8593c21137d99b9bc735cb64857,State:CONTAINER_RUNNING,CreatedAt:1705375495775045754,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-281181,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3358f12c7f62e24f96e4d92e880e2c0,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 2d446c3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=fd942876-85fb-4c64-92ea-2d94ead73e64 name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 03:25:18 test-preload-281181 crio[698]: time="2024-01-16 03:25:18.599122356Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=b2deeb93-7119-4343-9d6d-b49d91763232 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jan 16 03:25:18 test-preload-281181 crio[698]: time="2024-01-16 03:25:18.599361880Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:4a122c746631620910156883f1740ba10c499cfc897e451cfc2427a0999aa6b1,Metadata:&PodSandboxMetadata{Name:coredns-6d4b75cb6d-xmlxl,Uid:b8b58db1-b7c3-4712-b34d-3bb3b260231e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1705375506224506185,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6d4b75cb6d-xmlxl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8b58db1-b7c3-4712-b34d-3bb3b260231e,k8s-app: kube-dns,pod-template-hash: 6d4b75cb6d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-01-16T03:25:02.454522625Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:dc4d0ad8682b8bde8305f16a3558b2d81dca7fe5d8fa63c0fe1fed9b4ad3d390,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:ed27080f-9b99-4e19-9103-c2668a2821dd,Namespace:kube-syste
m,Attempt:0,},State:SANDBOX_READY,CreatedAt:1705375503389530975,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed27080f-9b99-4e19-9103-c2668a2821dd,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath
\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-01-16T03:25:02.454521439Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ec14f84c5c95a9496a9a62cb1441b98a31e57e7b63079341880ded10d3f0126d,Metadata:&PodSandboxMetadata{Name:kube-proxy-tsn82,Uid:44643e28-4e07-4551-bb4e-339b66ff612e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1705375503092433345,Labels:map[string]string{controller-revision-hash: 6fd4744df8,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-tsn82,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44643e28-4e07-4551-bb4e-339b66ff612e,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-01-16T03:25:02.454518912Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:df4d473399bc44856f3369df4a88801f6694199ba8da6f414c0b3bdda5c3245b,Metadata:&PodSandboxMetadata{Name:kube-apiserver-test-preload-281181,Uid:e3358f1
2c7f62e24f96e4d92e880e2c0,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1705375495078353776,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-test-preload-281181,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3358f12c7f62e24f96e4d92e880e2c0,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.102:8443,kubernetes.io/config.hash: e3358f12c7f62e24f96e4d92e880e2c0,kubernetes.io/config.seen: 2024-01-16T03:24:54.464807683Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:034ee43ef3710f80929977914a17d44af0b8c833a58269fd5feb69d99fb1960e,Metadata:&PodSandboxMetadata{Name:etcd-test-preload-281181,Uid:85e8ae3e0fca2b109e954f9e83d56e0f,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1705375495074886188,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-
test-preload-281181,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85e8ae3e0fca2b109e954f9e83d56e0f,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.102:2379,kubernetes.io/config.hash: 85e8ae3e0fca2b109e954f9e83d56e0f,kubernetes.io/config.seen: 2024-01-16T03:24:54.481357216Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:5edd8ff49c7daa3302d4acca8fbf53bc7e40af3b08014858a17ce62b83f271b6,Metadata:&PodSandboxMetadata{Name:kube-scheduler-test-preload-281181,Uid:ad0a8b424ba2d12a8bd1f29b90eb828c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1705375495071546436,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-test-preload-281181,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad0a8b424ba2d12a8bd1f29b90eb828c,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: ad0a8b424ba2d12
a8bd1f29b90eb828c,kubernetes.io/config.seen: 2024-01-16T03:24:54.464805983Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:c926b9479efa1c4b735b43b96665f5ab9630800d6928777326b38be901371898,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-test-preload-281181,Uid:c1085cd8619781808be4719f4dd2659c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1705375495066960645,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-test-preload-281181,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1085cd8619781808be4719f4dd2659c,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: c1085cd8619781808be4719f4dd2659c,kubernetes.io/config.seen: 2024-01-16T03:24:54.464787370Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=b2deeb93-7119-4343-9d6d-b49d91763232 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jan 16 03:25:18 test-preload-281181 crio[698]: time="2024-01-16 03:25:18.600079700Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=39b09713-ca1d-4d69-93f5-09d05937f3ea name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 03:25:18 test-preload-281181 crio[698]: time="2024-01-16 03:25:18.600136894Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=39b09713-ca1d-4d69-93f5-09d05937f3ea name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 03:25:18 test-preload-281181 crio[698]: time="2024-01-16 03:25:18.600376405Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:dad57965937261b155c9d9c8e8022e21cf89ed13605e910f042002707d44aac4,PodSandboxId:4a122c746631620910156883f1740ba10c499cfc897e451cfc2427a0999aa6b1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns/coredns@sha256:5b6ec0d6de9baaf3e92d0f66cd96a25b9edbce8716f5f15dcd1a616b3abd590e,State:CONTAINER_RUNNING,CreatedAt:1705375506831982707,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-xmlxl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8b58db1-b7c3-4712-b34d-3bb3b260231e,},Annotations:map[string]string{io.kubernetes.container.hash: 83ed866,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37c30b77651418cd3deb88f7ca6bd15031c94048fac5312f7e740edc6f451646,PodSandboxId:dc4d0ad8682b8bde8305f16a3558b2d81dca7fe5d8fa63c0fe1fed9b4ad3d390,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705375504664281597,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespa
ce: kube-system,io.kubernetes.pod.uid: ed27080f-9b99-4e19-9103-c2668a2821dd,},Annotations:map[string]string{io.kubernetes.container.hash: 43e8334,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d071a43ed7c1deba833cc80b7b55408d8195432b3a1bc27ebbdc38e9dc7db74,PodSandboxId:ec14f84c5c95a9496a9a62cb1441b98a31e57e7b63079341880ded10d3f0126d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:64a04a34b31fdf10b4c7fe9ff006dab818489a318115cfb284010d04e2888386,State:CONTAINER_RUNNING,CreatedAt:1705375503659985669,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tsn82,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44
643e28-4e07-4551-bb4e-339b66ff612e,},Annotations:map[string]string{io.kubernetes.container.hash: f6ed73b8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7ce9af61596ca4fee2fc85f7203ea939d2364132c293d4534e9e07f5f9b3432,PodSandboxId:034ee43ef3710f80929977914a17d44af0b8c833a58269fd5feb69d99fb1960e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:13f53ed1d91e2e11aac476ee9a0269fdda6cc4874eba903efd40daf50c55eee5,State:CONTAINER_RUNNING,CreatedAt:1705375496213247332,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-281181,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85e8ae3e0fca2b109e954f9e83d56e0f,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 1cc4fd5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23db95503f362a5fd9be0e7c470a2c2be1c85f26a60fe10ab3218b05e6be5bb5,PodSandboxId:c926b9479efa1c4b735b43b96665f5ab9630800d6928777326b38be901371898,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:77905fafb19047cc426efb6062757e244dc1615a93b0915792f598642229d891,State:CONTAINER_RUNNING,CreatedAt:1705375495938297351,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-281181,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1085cd8619781808be4719f4dd265
9c,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dddefaeb67c06f5d7437b770d9816c3ca246256f6d15e961c997c1d7e22cb627,PodSandboxId:5edd8ff49c7daa3302d4acca8fbf53bc7e40af3b08014858a17ce62b83f271b6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:378509dd1111937ca2791cf4c4814bc0647714e2ab2f4fc15396707ad1a987a2,State:CONTAINER_RUNNING,CreatedAt:1705375495701539420,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-281181,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad0a8b424ba2d12a8bd1f29b90eb828c,},Annotatio
ns:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8daaf79763554fb2b395201588ed594570e03a5823b40431b837268c14b853f,PodSandboxId:df4d473399bc44856f3369df4a88801f6694199ba8da6f414c0b3bdda5c3245b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:4b6a3a220cb91e496cff56f267968c4bbb19d8593c21137d99b9bc735cb64857,State:CONTAINER_RUNNING,CreatedAt:1705375495775045754,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-281181,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3358f12c7f62e24f96e4d92e880e2c0,},Annotations:map[string]
string{io.kubernetes.container.hash: 2d446c3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=39b09713-ca1d-4d69-93f5-09d05937f3ea name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 03:25:18 test-preload-281181 crio[698]: time="2024-01-16 03:25:18.621974673Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=67ffdcdd-1e6e-46c1-9183-6233bc6ce759 name=/runtime.v1.RuntimeService/Version
	Jan 16 03:25:18 test-preload-281181 crio[698]: time="2024-01-16 03:25:18.622059011Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=67ffdcdd-1e6e-46c1-9183-6233bc6ce759 name=/runtime.v1.RuntimeService/Version
	Jan 16 03:25:18 test-preload-281181 crio[698]: time="2024-01-16 03:25:18.623980202Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=08e0cfee-425d-4e4b-9f89-bb867aba58e5 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 03:25:18 test-preload-281181 crio[698]: time="2024-01-16 03:25:18.624469232Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705375518624453700,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:77,},},},}" file="go-grpc-middleware/chain.go:25" id=08e0cfee-425d-4e4b-9f89-bb867aba58e5 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 03:25:18 test-preload-281181 crio[698]: time="2024-01-16 03:25:18.625232814Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=a2b079dc-fea8-439a-93dc-981140018322 name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 03:25:18 test-preload-281181 crio[698]: time="2024-01-16 03:25:18.625280378Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=a2b079dc-fea8-439a-93dc-981140018322 name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 03:25:18 test-preload-281181 crio[698]: time="2024-01-16 03:25:18.625456384Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:dad57965937261b155c9d9c8e8022e21cf89ed13605e910f042002707d44aac4,PodSandboxId:4a122c746631620910156883f1740ba10c499cfc897e451cfc2427a0999aa6b1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns/coredns@sha256:5b6ec0d6de9baaf3e92d0f66cd96a25b9edbce8716f5f15dcd1a616b3abd590e,State:CONTAINER_RUNNING,CreatedAt:1705375506831982707,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-xmlxl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8b58db1-b7c3-4712-b34d-3bb3b260231e,},Annotations:map[string]string{io.kubernetes.container.hash: 83ed866,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37c30b77651418cd3deb88f7ca6bd15031c94048fac5312f7e740edc6f451646,PodSandboxId:dc4d0ad8682b8bde8305f16a3558b2d81dca7fe5d8fa63c0fe1fed9b4ad3d390,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705375504664281597,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespa
ce: kube-system,io.kubernetes.pod.uid: ed27080f-9b99-4e19-9103-c2668a2821dd,},Annotations:map[string]string{io.kubernetes.container.hash: 43e8334,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed56790f6c2343cd2de6f1e5d907fec31abf91bfb9374a7bcdf1646f3c16712e,PodSandboxId:dc4d0ad8682b8bde8305f16a3558b2d81dca7fe5d8fa63c0fe1fed9b4ad3d390,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1705375503933656407,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: ed27080f-9b99-4e19-9103-c2668a2821dd,},Annotations:map[string]string{io.kubernetes.container.hash: 43e8334,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d071a43ed7c1deba833cc80b7b55408d8195432b3a1bc27ebbdc38e9dc7db74,PodSandboxId:ec14f84c5c95a9496a9a62cb1441b98a31e57e7b63079341880ded10d3f0126d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:64a04a34b31fdf10b4c7fe9ff006dab818489a318115cfb284010d04e2888386,State:CONTAINER_RUNNING,CreatedAt:1705375503659985669,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tsn82,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44643
e28-4e07-4551-bb4e-339b66ff612e,},Annotations:map[string]string{io.kubernetes.container.hash: f6ed73b8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7ce9af61596ca4fee2fc85f7203ea939d2364132c293d4534e9e07f5f9b3432,PodSandboxId:034ee43ef3710f80929977914a17d44af0b8c833a58269fd5feb69d99fb1960e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:13f53ed1d91e2e11aac476ee9a0269fdda6cc4874eba903efd40daf50c55eee5,State:CONTAINER_RUNNING,CreatedAt:1705375496213247332,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-281181,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85e8ae3e0fca2b109e954f9e83d56e0f,},Annotations:map[stri
ng]string{io.kubernetes.container.hash: 1cc4fd5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23db95503f362a5fd9be0e7c470a2c2be1c85f26a60fe10ab3218b05e6be5bb5,PodSandboxId:c926b9479efa1c4b735b43b96665f5ab9630800d6928777326b38be901371898,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:77905fafb19047cc426efb6062757e244dc1615a93b0915792f598642229d891,State:CONTAINER_RUNNING,CreatedAt:1705375495938297351,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-281181,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1085cd8619781808be4719f4dd2659c,
},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dddefaeb67c06f5d7437b770d9816c3ca246256f6d15e961c997c1d7e22cb627,PodSandboxId:5edd8ff49c7daa3302d4acca8fbf53bc7e40af3b08014858a17ce62b83f271b6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:378509dd1111937ca2791cf4c4814bc0647714e2ab2f4fc15396707ad1a987a2,State:CONTAINER_RUNNING,CreatedAt:1705375495701539420,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-281181,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad0a8b424ba2d12a8bd1f29b90eb828c,},Annotations:
map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8daaf79763554fb2b395201588ed594570e03a5823b40431b837268c14b853f,PodSandboxId:df4d473399bc44856f3369df4a88801f6694199ba8da6f414c0b3bdda5c3245b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:4b6a3a220cb91e496cff56f267968c4bbb19d8593c21137d99b9bc735cb64857,State:CONTAINER_RUNNING,CreatedAt:1705375495775045754,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-281181,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3358f12c7f62e24f96e4d92e880e2c0,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 2d446c3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=a2b079dc-fea8-439a-93dc-981140018322 name=/runtime.v1.RuntimeService/ListContainers
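	The ListContainers/ListPodSandbox requests traced above are ordinary CRI calls; the same container and sandbox listing can be pulled interactively with crictl inside the VM for manual debugging. A sketch, not part of the test run, using the profile name from this report:

	  # list all containers (running and exited) known to CRI-O
	  out/minikube-linux-amd64 -p test-preload-281181 ssh -- sudo crictl ps -a

	  # list the pod sandboxes backing them
	  out/minikube-linux-amd64 -p test-preload-281181 ssh -- sudo crictl pods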
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	dad5796593726       a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03   11 seconds ago      Running             coredns                   1                   4a122c7466316       coredns-6d4b75cb6d-xmlxl
	37c30b7765141       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   14 seconds ago      Running             storage-provisioner       3                   dc4d0ad8682b8       storage-provisioner
	ed56790f6c234       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   14 seconds ago      Exited              storage-provisioner       2                   dc4d0ad8682b8       storage-provisioner
	7d071a43ed7c1       7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7   15 seconds ago      Running             kube-proxy                1                   ec14f84c5c95a       kube-proxy-tsn82
	f7ce9af61596c       aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b   22 seconds ago      Running             etcd                      1                   034ee43ef3710       etcd-test-preload-281181
	23db95503f362       1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48   22 seconds ago      Running             kube-controller-manager   1                   c926b9479efa1       kube-controller-manager-test-preload-281181
	c8daaf7976355       6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d   22 seconds ago      Running             kube-apiserver            1                   df4d473399bc4       kube-apiserver-test-preload-281181
	dddefaeb67c06       03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9   23 seconds ago      Running             kube-scheduler            1                   5edd8ff49c7da       kube-scheduler-test-preload-281181
	
	
	==> coredns [dad57965937261b155c9d9c8e8022e21cf89ed13605e910f042002707d44aac4] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = bbeeddb09682f41960fef01b05cb3a3d
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] 127.0.0.1:40456 - 19681 "HINFO IN 914484951171207063.7534078246020702216. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.00844325s
	
	
	==> describe nodes <==
	Name:               test-preload-281181
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-281181
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6e8fa5f64d0e7272be43ff25ed3826261f0a2578
	                    minikube.k8s.io/name=test-preload-281181
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_16T03_23_25_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Jan 2024 03:23:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-281181
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Jan 2024 03:25:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Jan 2024 03:25:12 +0000   Tue, 16 Jan 2024 03:23:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Jan 2024 03:25:12 +0000   Tue, 16 Jan 2024 03:23:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Jan 2024 03:25:12 +0000   Tue, 16 Jan 2024 03:23:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Jan 2024 03:25:12 +0000   Tue, 16 Jan 2024 03:25:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.102
	  Hostname:    test-preload-281181
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 bfed5a7fff1e4dbd8ce30ab9335fb799
	  System UUID:                bfed5a7f-ff1e-4dbd-8ce3-0ab9335fb799
	  Boot ID:                    028823d6-1963-46f1-a1ae-0afb1efc994d
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.24.4
	  Kube-Proxy Version:         v1.24.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-xmlxl                       100m (5%!)(MISSING)     0 (0%!)(MISSING)      70Mi (3%!)(MISSING)        170Mi (8%!)(MISSING)     99s
	  kube-system                 etcd-test-preload-281181                       100m (5%!)(MISSING)     0 (0%!)(MISSING)      100Mi (4%!)(MISSING)       0 (0%!)(MISSING)         115s
	  kube-system                 kube-apiserver-test-preload-281181             250m (12%!)(MISSING)    0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         115s
	  kube-system                 kube-controller-manager-test-preload-281181    200m (10%!)(MISSING)    0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         113s
	  kube-system                 kube-proxy-tsn82                               0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         99s
	  kube-system                 kube-scheduler-test-preload-281181             100m (5%!)(MISSING)     0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         115s
	  kube-system                 storage-provisioner                            0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         97s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%!)(MISSING)  0 (0%!)(MISSING)
	  memory             170Mi (8%!)(MISSING)  170Mi (8%!)(MISSING)
	  ephemeral-storage  0 (0%!)(MISSING)      0 (0%!)(MISSING)
	  hugepages-2Mi      0 (0%!)(MISSING)      0 (0%!)(MISSING)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 14s                  kube-proxy       
	  Normal  Starting                 95s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  2m3s (x4 over 2m3s)  kubelet          Node test-preload-281181 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m3s (x4 over 2m3s)  kubelet          Node test-preload-281181 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m3s (x3 over 2m3s)  kubelet          Node test-preload-281181 status is now: NodeHasSufficientPID
	  Normal  Starting                 113s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  113s                 kubelet          Node test-preload-281181 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    113s                 kubelet          Node test-preload-281181 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     113s                 kubelet          Node test-preload-281181 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  113s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                103s                 kubelet          Node test-preload-281181 status is now: NodeReady
	  Normal  RegisteredNode           100s                 node-controller  Node test-preload-281181 event: Registered Node test-preload-281181 in Controller
	  Normal  Starting                 24s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  24s (x8 over 24s)    kubelet          Node test-preload-281181 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    24s (x8 over 24s)    kubelet          Node test-preload-281181 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     24s (x7 over 24s)    kubelet          Node test-preload-281181 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  24s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4s                   node-controller  Node test-preload-281181 event: Registered Node test-preload-281181 in Controller
	
	
	==> dmesg <==
	[Jan16 03:24] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.068924] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.415610] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.579419] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.140454] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.453643] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.751407] systemd-fstab-generator[625]: Ignoring "noauto" for root device
	[  +0.108417] systemd-fstab-generator[636]: Ignoring "noauto" for root device
	[  +0.140660] systemd-fstab-generator[649]: Ignoring "noauto" for root device
	[  +0.101360] systemd-fstab-generator[660]: Ignoring "noauto" for root device
	[  +0.253040] systemd-fstab-generator[684]: Ignoring "noauto" for root device
	[ +26.724757] systemd-fstab-generator[1081]: Ignoring "noauto" for root device
	[Jan16 03:25] kauditd_printk_skb: 7 callbacks suppressed
	[ +10.987217] kauditd_printk_skb: 13 callbacks suppressed
	
	
	==> etcd [f7ce9af61596ca4fee2fc85f7203ea939d2364132c293d4534e9e07f5f9b3432] <==
	{"level":"info","ts":"2024-01-16T03:24:58.135Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"6b93c4bc4617b0fe","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-01-16T03:24:58.138Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-01-16T03:24:58.138Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b93c4bc4617b0fe switched to configuration voters=(7751755696543609086)"}
	{"level":"info","ts":"2024-01-16T03:24:58.138Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"1cdd3ec65c5f94ba","local-member-id":"6b93c4bc4617b0fe","added-peer-id":"6b93c4bc4617b0fe","added-peer-peer-urls":["https://192.168.39.102:2380"]}
	{"level":"info","ts":"2024-01-16T03:24:58.138Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"1cdd3ec65c5f94ba","local-member-id":"6b93c4bc4617b0fe","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-16T03:24:58.138Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-16T03:24:58.139Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-01-16T03:24:58.139Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"6b93c4bc4617b0fe","initial-advertise-peer-urls":["https://192.168.39.102:2380"],"listen-peer-urls":["https://192.168.39.102:2380"],"advertise-client-urls":["https://192.168.39.102:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.102:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-01-16T03:24:58.139Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-01-16T03:24:58.139Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.102:2380"}
	{"level":"info","ts":"2024-01-16T03:24:58.139Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.102:2380"}
	{"level":"info","ts":"2024-01-16T03:24:59.216Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b93c4bc4617b0fe is starting a new election at term 2"}
	{"level":"info","ts":"2024-01-16T03:24:59.216Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b93c4bc4617b0fe became pre-candidate at term 2"}
	{"level":"info","ts":"2024-01-16T03:24:59.216Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b93c4bc4617b0fe received MsgPreVoteResp from 6b93c4bc4617b0fe at term 2"}
	{"level":"info","ts":"2024-01-16T03:24:59.216Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b93c4bc4617b0fe became candidate at term 3"}
	{"level":"info","ts":"2024-01-16T03:24:59.216Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b93c4bc4617b0fe received MsgVoteResp from 6b93c4bc4617b0fe at term 3"}
	{"level":"info","ts":"2024-01-16T03:24:59.216Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b93c4bc4617b0fe became leader at term 3"}
	{"level":"info","ts":"2024-01-16T03:24:59.216Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 6b93c4bc4617b0fe elected leader 6b93c4bc4617b0fe at term 3"}
	{"level":"info","ts":"2024-01-16T03:24:59.216Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"6b93c4bc4617b0fe","local-member-attributes":"{Name:test-preload-281181 ClientURLs:[https://192.168.39.102:2379]}","request-path":"/0/members/6b93c4bc4617b0fe/attributes","cluster-id":"1cdd3ec65c5f94ba","publish-timeout":"7s"}
	{"level":"info","ts":"2024-01-16T03:24:59.216Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-16T03:24:59.217Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-16T03:24:59.218Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.102:2379"}
	{"level":"info","ts":"2024-01-16T03:24:59.219Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-01-16T03:24:59.219Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-01-16T03:24:59.219Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 03:25:18 up 1 min,  0 users,  load average: 0.99, 0.31, 0.11
	Linux test-preload-281181 5.10.57 #1 SMP Thu Dec 28 22:04:21 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [c8daaf79763554fb2b395201588ed594570e03a5823b40431b837268c14b853f] <==
	I0116 03:25:01.873667       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0116 03:25:01.873677       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0116 03:25:01.873702       1 apiservice_controller.go:97] Starting APIServiceRegistrationController
	I0116 03:25:01.924363       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I0116 03:25:01.874135       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0116 03:25:01.924414       1 shared_informer.go:255] Waiting for caches to sync for crd-autoregister
	E0116 03:25:01.963232       1 controller.go:169] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I0116 03:25:02.009009       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0116 03:25:02.010129       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0116 03:25:02.010352       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0116 03:25:02.021585       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0116 03:25:02.024295       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0116 03:25:02.028621       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0116 03:25:02.033191       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0116 03:25:02.042667       1 cache.go:39] Caches are synced for autoregister controller
	I0116 03:25:02.427532       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0116 03:25:02.845648       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0116 03:25:03.299495       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0116 03:25:03.311240       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0116 03:25:03.374176       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0116 03:25:03.422076       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0116 03:25:03.435483       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0116 03:25:04.022253       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0116 03:25:14.977117       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0116 03:25:15.122275       1 controller.go:611] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [23db95503f362a5fd9be0e7c470a2c2be1c85f26a60fe10ab3218b05e6be5bb5] <==
	I0116 03:25:14.965358       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I0116 03:25:14.965747       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0116 03:25:14.966350       1 event.go:294] "Event occurred" object="test-preload-281181" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node test-preload-281181 event: Registered Node test-preload-281181 in Controller"
	I0116 03:25:14.967212       1 shared_informer.go:262] Caches are synced for GC
	I0116 03:25:14.970245       1 shared_informer.go:262] Caches are synced for endpoint
	I0116 03:25:14.970462       1 shared_informer.go:262] Caches are synced for stateful set
	I0116 03:25:14.970528       1 shared_informer.go:262] Caches are synced for persistent volume
	I0116 03:25:14.971367       1 shared_informer.go:262] Caches are synced for ClusterRoleAggregator
	I0116 03:25:14.972450       1 shared_informer.go:262] Caches are synced for namespace
	I0116 03:25:14.972702       1 shared_informer.go:262] Caches are synced for expand
	I0116 03:25:14.972885       1 shared_informer.go:262] Caches are synced for deployment
	I0116 03:25:14.975028       1 shared_informer.go:262] Caches are synced for TTL after finished
	I0116 03:25:14.977418       1 shared_informer.go:262] Caches are synced for disruption
	I0116 03:25:14.977468       1 disruption.go:371] Sending events to api server.
	I0116 03:25:15.044232       1 shared_informer.go:262] Caches are synced for cronjob
	I0116 03:25:15.080171       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-serving
	I0116 03:25:15.081562       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-legacy-unknown
	I0116 03:25:15.081645       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0116 03:25:15.081765       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-client
	I0116 03:25:15.106505       1 shared_informer.go:262] Caches are synced for certificate-csrapproving
	I0116 03:25:15.128816       1 shared_informer.go:262] Caches are synced for resource quota
	I0116 03:25:15.171234       1 shared_informer.go:262] Caches are synced for resource quota
	I0116 03:25:15.581886       1 shared_informer.go:262] Caches are synced for garbage collector
	I0116 03:25:15.582064       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0116 03:25:15.616155       1 shared_informer.go:262] Caches are synced for garbage collector
	
	
	==> kube-proxy [7d071a43ed7c1deba833cc80b7b55408d8195432b3a1bc27ebbdc38e9dc7db74] <==
	I0116 03:25:03.856319       1 node.go:163] Successfully retrieved node IP: 192.168.39.102
	I0116 03:25:03.856416       1 server_others.go:138] "Detected node IP" address="192.168.39.102"
	I0116 03:25:03.856444       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0116 03:25:04.010006       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0116 03:25:04.010405       1 server_others.go:206] "Using iptables Proxier"
	I0116 03:25:04.010552       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0116 03:25:04.015308       1 server.go:661] "Version info" version="v1.24.4"
	I0116 03:25:04.015477       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0116 03:25:04.018210       1 config.go:444] "Starting node config controller"
	I0116 03:25:04.018300       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0116 03:25:04.022183       1 config.go:317] "Starting service config controller"
	I0116 03:25:04.022225       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0116 03:25:04.022245       1 config.go:226] "Starting endpoint slice config controller"
	I0116 03:25:04.022249       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0116 03:25:04.120715       1 shared_informer.go:262] Caches are synced for node config
	I0116 03:25:04.122739       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0116 03:25:04.122796       1 shared_informer.go:262] Caches are synced for service config
	
	
	==> kube-scheduler [dddefaeb67c06f5d7437b770d9816c3ca246256f6d15e961c997c1d7e22cb627] <==
	I0116 03:24:58.367255       1 serving.go:348] Generated self-signed cert in-memory
	W0116 03:25:01.863468       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0116 03:25:01.864200       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0116 03:25:01.864433       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0116 03:25:01.864691       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0116 03:25:01.968759       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.4"
	I0116 03:25:01.968852       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0116 03:25:01.981262       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0116 03:25:01.984655       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0116 03:25:01.984700       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0116 03:25:01.984727       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0116 03:25:02.085240       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Tue 2024-01-16 03:24:19 UTC, ends at Tue 2024-01-16 03:25:19 UTC. --
	Jan 16 03:25:01 test-preload-281181 kubelet[1087]: I0116 03:25:01.997633    1087 kubelet_node_status.go:108] "Node was previously registered" node="test-preload-281181"
	Jan 16 03:25:01 test-preload-281181 kubelet[1087]: I0116 03:25:01.998176    1087 kubelet_node_status.go:73] "Successfully registered node" node="test-preload-281181"
	Jan 16 03:25:02 test-preload-281181 kubelet[1087]: I0116 03:25:02.016713    1087 setters.go:532] "Node became not ready" node="test-preload-281181" condition={Type:Ready Status:False LastHeartbeatTime:2024-01-16 03:25:02.016629069 +0000 UTC m=+7.706156694 LastTransitionTime:2024-01-16 03:25:02.016629069 +0000 UTC m=+7.706156694 Reason:KubeletNotReady Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?}
	Jan 16 03:25:02 test-preload-281181 kubelet[1087]: I0116 03:25:02.450483    1087 apiserver.go:52] "Watching apiserver"
	Jan 16 03:25:02 test-preload-281181 kubelet[1087]: I0116 03:25:02.454688    1087 topology_manager.go:200] "Topology Admit Handler"
	Jan 16 03:25:02 test-preload-281181 kubelet[1087]: I0116 03:25:02.454785    1087 topology_manager.go:200] "Topology Admit Handler"
	Jan 16 03:25:02 test-preload-281181 kubelet[1087]: I0116 03:25:02.454817    1087 topology_manager.go:200] "Topology Admit Handler"
	Jan 16 03:25:02 test-preload-281181 kubelet[1087]: E0116 03:25:02.457456    1087 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-xmlxl" podUID=b8b58db1-b7c3-4712-b34d-3bb3b260231e
	Jan 16 03:25:02 test-preload-281181 kubelet[1087]: I0116 03:25:02.551807    1087 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfjdl\" (UniqueName: \"kubernetes.io/projected/44643e28-4e07-4551-bb4e-339b66ff612e-kube-api-access-qfjdl\") pod \"kube-proxy-tsn82\" (UID: \"44643e28-4e07-4551-bb4e-339b66ff612e\") " pod="kube-system/kube-proxy-tsn82"
	Jan 16 03:25:02 test-preload-281181 kubelet[1087]: I0116 03:25:02.551950    1087 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94kls\" (UniqueName: \"kubernetes.io/projected/b8b58db1-b7c3-4712-b34d-3bb3b260231e-kube-api-access-94kls\") pod \"coredns-6d4b75cb6d-xmlxl\" (UID: \"b8b58db1-b7c3-4712-b34d-3bb3b260231e\") " pod="kube-system/coredns-6d4b75cb6d-xmlxl"
	Jan 16 03:25:02 test-preload-281181 kubelet[1087]: I0116 03:25:02.551982    1087 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvqvc\" (UniqueName: \"kubernetes.io/projected/ed27080f-9b99-4e19-9103-c2668a2821dd-kube-api-access-fvqvc\") pod \"storage-provisioner\" (UID: \"ed27080f-9b99-4e19-9103-c2668a2821dd\") " pod="kube-system/storage-provisioner"
	Jan 16 03:25:02 test-preload-281181 kubelet[1087]: I0116 03:25:02.552002    1087 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/44643e28-4e07-4551-bb4e-339b66ff612e-xtables-lock\") pod \"kube-proxy-tsn82\" (UID: \"44643e28-4e07-4551-bb4e-339b66ff612e\") " pod="kube-system/kube-proxy-tsn82"
	Jan 16 03:25:02 test-preload-281181 kubelet[1087]: I0116 03:25:02.552020    1087 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/44643e28-4e07-4551-bb4e-339b66ff612e-lib-modules\") pod \"kube-proxy-tsn82\" (UID: \"44643e28-4e07-4551-bb4e-339b66ff612e\") " pod="kube-system/kube-proxy-tsn82"
	Jan 16 03:25:02 test-preload-281181 kubelet[1087]: I0116 03:25:02.552039    1087 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/ed27080f-9b99-4e19-9103-c2668a2821dd-tmp\") pod \"storage-provisioner\" (UID: \"ed27080f-9b99-4e19-9103-c2668a2821dd\") " pod="kube-system/storage-provisioner"
	Jan 16 03:25:02 test-preload-281181 kubelet[1087]: I0116 03:25:02.552060    1087 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/44643e28-4e07-4551-bb4e-339b66ff612e-kube-proxy\") pod \"kube-proxy-tsn82\" (UID: \"44643e28-4e07-4551-bb4e-339b66ff612e\") " pod="kube-system/kube-proxy-tsn82"
	Jan 16 03:25:02 test-preload-281181 kubelet[1087]: I0116 03:25:02.552089    1087 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b8b58db1-b7c3-4712-b34d-3bb3b260231e-config-volume\") pod \"coredns-6d4b75cb6d-xmlxl\" (UID: \"b8b58db1-b7c3-4712-b34d-3bb3b260231e\") " pod="kube-system/coredns-6d4b75cb6d-xmlxl"
	Jan 16 03:25:02 test-preload-281181 kubelet[1087]: I0116 03:25:02.552107    1087 reconciler.go:159] "Reconciler: start to sync state"
	Jan 16 03:25:02 test-preload-281181 kubelet[1087]: E0116 03:25:02.658063    1087 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jan 16 03:25:02 test-preload-281181 kubelet[1087]: E0116 03:25:02.658284    1087 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/b8b58db1-b7c3-4712-b34d-3bb3b260231e-config-volume podName:b8b58db1-b7c3-4712-b34d-3bb3b260231e nodeName:}" failed. No retries permitted until 2024-01-16 03:25:03.158250797 +0000 UTC m=+8.847778425 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/b8b58db1-b7c3-4712-b34d-3bb3b260231e-config-volume") pod "coredns-6d4b75cb6d-xmlxl" (UID: "b8b58db1-b7c3-4712-b34d-3bb3b260231e") : object "kube-system"/"coredns" not registered
	Jan 16 03:25:03 test-preload-281181 kubelet[1087]: E0116 03:25:03.161440    1087 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jan 16 03:25:03 test-preload-281181 kubelet[1087]: E0116 03:25:03.161501    1087 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/b8b58db1-b7c3-4712-b34d-3bb3b260231e-config-volume podName:b8b58db1-b7c3-4712-b34d-3bb3b260231e nodeName:}" failed. No retries permitted until 2024-01-16 03:25:04.161487318 +0000 UTC m=+9.851014947 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/b8b58db1-b7c3-4712-b34d-3bb3b260231e-config-volume") pod "coredns-6d4b75cb6d-xmlxl" (UID: "b8b58db1-b7c3-4712-b34d-3bb3b260231e") : object "kube-system"/"coredns" not registered
	Jan 16 03:25:03 test-preload-281181 kubelet[1087]: E0116 03:25:03.596023    1087 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-xmlxl" podUID=b8b58db1-b7c3-4712-b34d-3bb3b260231e
	Jan 16 03:25:04 test-preload-281181 kubelet[1087]: E0116 03:25:04.173540    1087 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jan 16 03:25:04 test-preload-281181 kubelet[1087]: E0116 03:25:04.173648    1087 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/b8b58db1-b7c3-4712-b34d-3bb3b260231e-config-volume podName:b8b58db1-b7c3-4712-b34d-3bb3b260231e nodeName:}" failed. No retries permitted until 2024-01-16 03:25:06.173631702 +0000 UTC m=+11.863159335 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/b8b58db1-b7c3-4712-b34d-3bb3b260231e-config-volume") pod "coredns-6d4b75cb6d-xmlxl" (UID: "b8b58db1-b7c3-4712-b34d-3bb3b260231e") : object "kube-system"/"coredns" not registered
	Jan 16 03:25:04 test-preload-281181 kubelet[1087]: I0116 03:25:04.644192    1087 scope.go:110] "RemoveContainer" containerID="ed56790f6c2343cd2de6f1e5d907fec31abf91bfb9374a7bcdf1646f3c16712e"
	
	
	==> storage-provisioner [37c30b77651418cd3deb88f7ca6bd15031c94048fac5312f7e740edc6f451646] <==
	I0116 03:25:04.891698       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0116 03:25:04.902448       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0116 03:25:04.902511       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	
	
	==> storage-provisioner [ed56790f6c2343cd2de6f1e5d907fec31abf91bfb9374a7bcdf1646f3c16712e] <==
	I0116 03:25:04.059360       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0116 03:25:04.061387       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-281181 -n test-preload-281181
helpers_test.go:261: (dbg) Run:  kubectl --context test-preload-281181 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-281181" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-281181
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-281181: (1.015811294s)
--- FAIL: TestPreload (200.07s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (140.2s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-615980 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p embed-certs-615980 --alsologtostderr -v=3: exit status 82 (2m1.677501124s)

                                                
                                                
-- stdout --
	* Stopping node "embed-certs-615980"  ...
	* Stopping node "embed-certs-615980"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0116 03:35:53.662978  505997 out.go:296] Setting OutFile to fd 1 ...
	I0116 03:35:53.663138  505997 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 03:35:53.663149  505997 out.go:309] Setting ErrFile to fd 2...
	I0116 03:35:53.663156  505997 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 03:35:53.663418  505997 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17965-468241/.minikube/bin
	I0116 03:35:53.663807  505997 out.go:303] Setting JSON to false
	I0116 03:35:53.663951  505997 mustload.go:65] Loading cluster: embed-certs-615980
	I0116 03:35:53.664450  505997 config.go:182] Loaded profile config "embed-certs-615980": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 03:35:53.664549  505997 profile.go:148] Saving config to /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/embed-certs-615980/config.json ...
	I0116 03:35:53.664717  505997 mustload.go:65] Loading cluster: embed-certs-615980
	I0116 03:35:53.664860  505997 config.go:182] Loaded profile config "embed-certs-615980": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 03:35:53.664896  505997 stop.go:39] StopHost: embed-certs-615980
	I0116 03:35:53.665400  505997 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:35:53.665496  505997 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:35:53.682385  505997 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43237
	I0116 03:35:53.683039  505997 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:35:53.683865  505997 main.go:141] libmachine: Using API Version  1
	I0116 03:35:53.683900  505997 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:35:53.684348  505997 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:35:53.687113  505997 out.go:177] * Stopping node "embed-certs-615980"  ...
	I0116 03:35:53.689121  505997 main.go:141] libmachine: Stopping "embed-certs-615980"...
	I0116 03:35:53.689144  505997 main.go:141] libmachine: (embed-certs-615980) Calling .GetState
	I0116 03:35:53.691329  505997 main.go:141] libmachine: (embed-certs-615980) Calling .Stop
	I0116 03:35:53.696142  505997 main.go:141] libmachine: (embed-certs-615980) Waiting for machine to stop 0/60
	I0116 03:35:54.697849  505997 main.go:141] libmachine: (embed-certs-615980) Waiting for machine to stop 1/60
	I0116 03:35:55.699774  505997 main.go:141] libmachine: (embed-certs-615980) Waiting for machine to stop 2/60
	I0116 03:35:56.701857  505997 main.go:141] libmachine: (embed-certs-615980) Waiting for machine to stop 3/60
	I0116 03:35:57.703764  505997 main.go:141] libmachine: (embed-certs-615980) Waiting for machine to stop 4/60
	I0116 03:35:58.705688  505997 main.go:141] libmachine: (embed-certs-615980) Waiting for machine to stop 5/60
	I0116 03:35:59.707015  505997 main.go:141] libmachine: (embed-certs-615980) Waiting for machine to stop 6/60
	I0116 03:36:00.708668  505997 main.go:141] libmachine: (embed-certs-615980) Waiting for machine to stop 7/60
	I0116 03:36:01.710737  505997 main.go:141] libmachine: (embed-certs-615980) Waiting for machine to stop 8/60
	I0116 03:36:02.712191  505997 main.go:141] libmachine: (embed-certs-615980) Waiting for machine to stop 9/60
	I0116 03:36:03.713759  505997 main.go:141] libmachine: (embed-certs-615980) Waiting for machine to stop 10/60
	I0116 03:36:04.715167  505997 main.go:141] libmachine: (embed-certs-615980) Waiting for machine to stop 11/60
	I0116 03:36:05.717369  505997 main.go:141] libmachine: (embed-certs-615980) Waiting for machine to stop 12/60
	I0116 03:36:06.719088  505997 main.go:141] libmachine: (embed-certs-615980) Waiting for machine to stop 13/60
	I0116 03:36:07.720560  505997 main.go:141] libmachine: (embed-certs-615980) Waiting for machine to stop 14/60
	I0116 03:36:08.722717  505997 main.go:141] libmachine: (embed-certs-615980) Waiting for machine to stop 15/60
	I0116 03:36:09.724336  505997 main.go:141] libmachine: (embed-certs-615980) Waiting for machine to stop 16/60
	I0116 03:36:10.725788  505997 main.go:141] libmachine: (embed-certs-615980) Waiting for machine to stop 17/60
	I0116 03:36:11.727395  505997 main.go:141] libmachine: (embed-certs-615980) Waiting for machine to stop 18/60
	I0116 03:36:12.729091  505997 main.go:141] libmachine: (embed-certs-615980) Waiting for machine to stop 19/60
	I0116 03:36:13.731382  505997 main.go:141] libmachine: (embed-certs-615980) Waiting for machine to stop 20/60
	I0116 03:36:14.732918  505997 main.go:141] libmachine: (embed-certs-615980) Waiting for machine to stop 21/60
	I0116 03:36:15.734911  505997 main.go:141] libmachine: (embed-certs-615980) Waiting for machine to stop 22/60
	I0116 03:36:16.736340  505997 main.go:141] libmachine: (embed-certs-615980) Waiting for machine to stop 23/60
	I0116 03:36:17.738019  505997 main.go:141] libmachine: (embed-certs-615980) Waiting for machine to stop 24/60
	I0116 03:36:18.740241  505997 main.go:141] libmachine: (embed-certs-615980) Waiting for machine to stop 25/60
	I0116 03:36:19.741937  505997 main.go:141] libmachine: (embed-certs-615980) Waiting for machine to stop 26/60
	I0116 03:36:20.743591  505997 main.go:141] libmachine: (embed-certs-615980) Waiting for machine to stop 27/60
	I0116 03:36:21.745659  505997 main.go:141] libmachine: (embed-certs-615980) Waiting for machine to stop 28/60
	I0116 03:36:22.747409  505997 main.go:141] libmachine: (embed-certs-615980) Waiting for machine to stop 29/60
	I0116 03:36:23.750095  505997 main.go:141] libmachine: (embed-certs-615980) Waiting for machine to stop 30/60
	I0116 03:36:24.751733  505997 main.go:141] libmachine: (embed-certs-615980) Waiting for machine to stop 31/60
	I0116 03:36:25.753354  505997 main.go:141] libmachine: (embed-certs-615980) Waiting for machine to stop 32/60
	I0116 03:36:26.754992  505997 main.go:141] libmachine: (embed-certs-615980) Waiting for machine to stop 33/60
	I0116 03:36:27.756647  505997 main.go:141] libmachine: (embed-certs-615980) Waiting for machine to stop 34/60
	I0116 03:36:28.758910  505997 main.go:141] libmachine: (embed-certs-615980) Waiting for machine to stop 35/60
	I0116 03:36:29.760786  505997 main.go:141] libmachine: (embed-certs-615980) Waiting for machine to stop 36/60
	I0116 03:36:30.762352  505997 main.go:141] libmachine: (embed-certs-615980) Waiting for machine to stop 37/60
	I0116 03:36:31.763861  505997 main.go:141] libmachine: (embed-certs-615980) Waiting for machine to stop 38/60
	I0116 03:36:32.765501  505997 main.go:141] libmachine: (embed-certs-615980) Waiting for machine to stop 39/60
	I0116 03:36:33.767974  505997 main.go:141] libmachine: (embed-certs-615980) Waiting for machine to stop 40/60
	I0116 03:36:34.769453  505997 main.go:141] libmachine: (embed-certs-615980) Waiting for machine to stop 41/60
	I0116 03:36:35.770946  505997 main.go:141] libmachine: (embed-certs-615980) Waiting for machine to stop 42/60
	I0116 03:36:36.772758  505997 main.go:141] libmachine: (embed-certs-615980) Waiting for machine to stop 43/60
	I0116 03:36:37.774545  505997 main.go:141] libmachine: (embed-certs-615980) Waiting for machine to stop 44/60
	I0116 03:36:38.777141  505997 main.go:141] libmachine: (embed-certs-615980) Waiting for machine to stop 45/60
	I0116 03:36:39.778822  505997 main.go:141] libmachine: (embed-certs-615980) Waiting for machine to stop 46/60
	I0116 03:36:40.780462  505997 main.go:141] libmachine: (embed-certs-615980) Waiting for machine to stop 47/60
	I0116 03:36:41.782855  505997 main.go:141] libmachine: (embed-certs-615980) Waiting for machine to stop 48/60
	I0116 03:36:42.784426  505997 main.go:141] libmachine: (embed-certs-615980) Waiting for machine to stop 49/60
	I0116 03:36:43.785998  505997 main.go:141] libmachine: (embed-certs-615980) Waiting for machine to stop 50/60
	I0116 03:36:44.787430  505997 main.go:141] libmachine: (embed-certs-615980) Waiting for machine to stop 51/60
	I0116 03:36:45.789124  505997 main.go:141] libmachine: (embed-certs-615980) Waiting for machine to stop 52/60
	I0116 03:36:46.790945  505997 main.go:141] libmachine: (embed-certs-615980) Waiting for machine to stop 53/60
	I0116 03:36:47.792607  505997 main.go:141] libmachine: (embed-certs-615980) Waiting for machine to stop 54/60
	I0116 03:36:48.795293  505997 main.go:141] libmachine: (embed-certs-615980) Waiting for machine to stop 55/60
	I0116 03:36:49.796991  505997 main.go:141] libmachine: (embed-certs-615980) Waiting for machine to stop 56/60
	I0116 03:36:50.798676  505997 main.go:141] libmachine: (embed-certs-615980) Waiting for machine to stop 57/60
	I0116 03:36:51.800327  505997 main.go:141] libmachine: (embed-certs-615980) Waiting for machine to stop 58/60
	I0116 03:36:52.802757  505997 main.go:141] libmachine: (embed-certs-615980) Waiting for machine to stop 59/60
	I0116 03:36:53.803985  505997 stop.go:59] stop err: unable to stop vm, current state "Running"
	W0116 03:36:53.804111  505997 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0116 03:36:53.804144  505997 retry.go:31] will retry after 1.315352512s: Temporary Error: stop: unable to stop vm, current state "Running"
	I0116 03:36:55.120608  505997 stop.go:39] StopHost: embed-certs-615980
	I0116 03:36:55.121055  505997 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:36:55.121114  505997 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:36:55.136386  505997 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39537
	I0116 03:36:55.136960  505997 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:36:55.137613  505997 main.go:141] libmachine: Using API Version  1
	I0116 03:36:55.137664  505997 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:36:55.138040  505997 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:36:55.140896  505997 out.go:177] * Stopping node "embed-certs-615980"  ...
	I0116 03:36:55.142681  505997 main.go:141] libmachine: Stopping "embed-certs-615980"...
	I0116 03:36:55.142704  505997 main.go:141] libmachine: (embed-certs-615980) Calling .GetState
	I0116 03:36:55.144606  505997 main.go:141] libmachine: (embed-certs-615980) Calling .Stop
	I0116 03:36:55.148419  505997 main.go:141] libmachine: (embed-certs-615980) Waiting for machine to stop 0/60
	I0116 03:36:56.151064  505997 main.go:141] libmachine: (embed-certs-615980) Waiting for machine to stop 1/60
	I0116 03:36:57.152925  505997 main.go:141] libmachine: (embed-certs-615980) Waiting for machine to stop 2/60
	I0116 03:36:58.154514  505997 main.go:141] libmachine: (embed-certs-615980) Waiting for machine to stop 3/60
	I0116 03:36:59.156321  505997 main.go:141] libmachine: (embed-certs-615980) Waiting for machine to stop 4/60
	I0116 03:37:00.157846  505997 main.go:141] libmachine: (embed-certs-615980) Waiting for machine to stop 5/60
	I0116 03:37:01.159258  505997 main.go:141] libmachine: (embed-certs-615980) Waiting for machine to stop 6/60
	I0116 03:37:02.160868  505997 main.go:141] libmachine: (embed-certs-615980) Waiting for machine to stop 7/60
	I0116 03:37:03.162434  505997 main.go:141] libmachine: (embed-certs-615980) Waiting for machine to stop 8/60
	I0116 03:37:04.164009  505997 main.go:141] libmachine: (embed-certs-615980) Waiting for machine to stop 9/60
	I0116 03:37:05.166224  505997 main.go:141] libmachine: (embed-certs-615980) Waiting for machine to stop 10/60
	I0116 03:37:06.167803  505997 main.go:141] libmachine: (embed-certs-615980) Waiting for machine to stop 11/60
	I0116 03:37:07.169358  505997 main.go:141] libmachine: (embed-certs-615980) Waiting for machine to stop 12/60
	I0116 03:37:08.171976  505997 main.go:141] libmachine: (embed-certs-615980) Waiting for machine to stop 13/60
	I0116 03:37:09.173426  505997 main.go:141] libmachine: (embed-certs-615980) Waiting for machine to stop 14/60
	I0116 03:37:10.175420  505997 main.go:141] libmachine: (embed-certs-615980) Waiting for machine to stop 15/60
	I0116 03:37:11.176976  505997 main.go:141] libmachine: (embed-certs-615980) Waiting for machine to stop 16/60
	I0116 03:37:12.178718  505997 main.go:141] libmachine: (embed-certs-615980) Waiting for machine to stop 17/60
	I0116 03:37:13.180330  505997 main.go:141] libmachine: (embed-certs-615980) Waiting for machine to stop 18/60
	I0116 03:37:14.182801  505997 main.go:141] libmachine: (embed-certs-615980) Waiting for machine to stop 19/60
	I0116 03:37:15.184978  505997 main.go:141] libmachine: (embed-certs-615980) Waiting for machine to stop 20/60
	I0116 03:37:16.186911  505997 main.go:141] libmachine: (embed-certs-615980) Waiting for machine to stop 21/60
	I0116 03:37:17.188388  505997 main.go:141] libmachine: (embed-certs-615980) Waiting for machine to stop 22/60
	I0116 03:37:18.189919  505997 main.go:141] libmachine: (embed-certs-615980) Waiting for machine to stop 23/60
	I0116 03:37:19.191180  505997 main.go:141] libmachine: (embed-certs-615980) Waiting for machine to stop 24/60
	I0116 03:37:20.193243  505997 main.go:141] libmachine: (embed-certs-615980) Waiting for machine to stop 25/60
	I0116 03:37:21.194922  505997 main.go:141] libmachine: (embed-certs-615980) Waiting for machine to stop 26/60
	I0116 03:37:22.196666  505997 main.go:141] libmachine: (embed-certs-615980) Waiting for machine to stop 27/60
	I0116 03:37:23.198736  505997 main.go:141] libmachine: (embed-certs-615980) Waiting for machine to stop 28/60
	I0116 03:37:24.200525  505997 main.go:141] libmachine: (embed-certs-615980) Waiting for machine to stop 29/60
	I0116 03:37:25.202270  505997 main.go:141] libmachine: (embed-certs-615980) Waiting for machine to stop 30/60
	I0116 03:37:26.204069  505997 main.go:141] libmachine: (embed-certs-615980) Waiting for machine to stop 31/60
	I0116 03:37:27.205700  505997 main.go:141] libmachine: (embed-certs-615980) Waiting for machine to stop 32/60
	I0116 03:37:28.207245  505997 main.go:141] libmachine: (embed-certs-615980) Waiting for machine to stop 33/60
	I0116 03:37:29.208701  505997 main.go:141] libmachine: (embed-certs-615980) Waiting for machine to stop 34/60
	I0116 03:37:30.211233  505997 main.go:141] libmachine: (embed-certs-615980) Waiting for machine to stop 35/60
	I0116 03:37:31.212994  505997 main.go:141] libmachine: (embed-certs-615980) Waiting for machine to stop 36/60
	I0116 03:37:32.214238  505997 main.go:141] libmachine: (embed-certs-615980) Waiting for machine to stop 37/60
	I0116 03:37:33.215760  505997 main.go:141] libmachine: (embed-certs-615980) Waiting for machine to stop 38/60
	I0116 03:37:34.217673  505997 main.go:141] libmachine: (embed-certs-615980) Waiting for machine to stop 39/60
	I0116 03:37:35.220297  505997 main.go:141] libmachine: (embed-certs-615980) Waiting for machine to stop 40/60
	I0116 03:37:36.223132  505997 main.go:141] libmachine: (embed-certs-615980) Waiting for machine to stop 41/60
	I0116 03:37:37.224759  505997 main.go:141] libmachine: (embed-certs-615980) Waiting for machine to stop 42/60
	I0116 03:37:38.226569  505997 main.go:141] libmachine: (embed-certs-615980) Waiting for machine to stop 43/60
	I0116 03:37:39.228119  505997 main.go:141] libmachine: (embed-certs-615980) Waiting for machine to stop 44/60
	I0116 03:37:40.230025  505997 main.go:141] libmachine: (embed-certs-615980) Waiting for machine to stop 45/60
	I0116 03:37:41.231604  505997 main.go:141] libmachine: (embed-certs-615980) Waiting for machine to stop 46/60
	I0116 03:37:42.233177  505997 main.go:141] libmachine: (embed-certs-615980) Waiting for machine to stop 47/60
	I0116 03:37:43.235297  505997 main.go:141] libmachine: (embed-certs-615980) Waiting for machine to stop 48/60
	I0116 03:37:44.236666  505997 main.go:141] libmachine: (embed-certs-615980) Waiting for machine to stop 49/60
	I0116 03:37:45.238983  505997 main.go:141] libmachine: (embed-certs-615980) Waiting for machine to stop 50/60
	I0116 03:37:46.240416  505997 main.go:141] libmachine: (embed-certs-615980) Waiting for machine to stop 51/60
	I0116 03:37:47.242180  505997 main.go:141] libmachine: (embed-certs-615980) Waiting for machine to stop 52/60
	I0116 03:37:48.244290  505997 main.go:141] libmachine: (embed-certs-615980) Waiting for machine to stop 53/60
	I0116 03:37:49.245881  505997 main.go:141] libmachine: (embed-certs-615980) Waiting for machine to stop 54/60
	I0116 03:37:50.248159  505997 main.go:141] libmachine: (embed-certs-615980) Waiting for machine to stop 55/60
	I0116 03:37:51.249844  505997 main.go:141] libmachine: (embed-certs-615980) Waiting for machine to stop 56/60
	I0116 03:37:52.251422  505997 main.go:141] libmachine: (embed-certs-615980) Waiting for machine to stop 57/60
	I0116 03:37:53.252926  505997 main.go:141] libmachine: (embed-certs-615980) Waiting for machine to stop 58/60
	I0116 03:37:54.254544  505997 main.go:141] libmachine: (embed-certs-615980) Waiting for machine to stop 59/60
	I0116 03:37:55.255515  505997 stop.go:59] stop err: unable to stop vm, current state "Running"
	W0116 03:37:55.255619  505997 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0116 03:37:55.258003  505997 out.go:177] 
	W0116 03:37:55.259707  505997 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0116 03:37:55.259728  505997 out.go:239] * 
	* 
	W0116 03:37:55.262968  505997 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0116 03:37:55.265003  505997 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p embed-certs-615980 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-615980 -n embed-certs-615980
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-615980 -n embed-certs-615980: exit status 3 (18.521470939s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0116 03:38:13.788450  506972 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.159:22: connect: no route to host
	E0116 03:38:13.788478  506972 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.159:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-615980" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (140.20s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (139.69s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-666547 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p no-preload-666547 --alsologtostderr -v=3: exit status 82 (2m1.135035081s)

                                                
                                                
-- stdout --
	* Stopping node "no-preload-666547"  ...
	* Stopping node "no-preload-666547"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0116 03:35:58.520104  506075 out.go:296] Setting OutFile to fd 1 ...
	I0116 03:35:58.520338  506075 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 03:35:58.520350  506075 out.go:309] Setting ErrFile to fd 2...
	I0116 03:35:58.520358  506075 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 03:35:58.520653  506075 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17965-468241/.minikube/bin
	I0116 03:35:58.521042  506075 out.go:303] Setting JSON to false
	I0116 03:35:58.521170  506075 mustload.go:65] Loading cluster: no-preload-666547
	I0116 03:35:58.521734  506075 config.go:182] Loaded profile config "no-preload-666547": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0116 03:35:58.521856  506075 profile.go:148] Saving config to /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/no-preload-666547/config.json ...
	I0116 03:35:58.522697  506075 mustload.go:65] Loading cluster: no-preload-666547
	I0116 03:35:58.522930  506075 config.go:182] Loaded profile config "no-preload-666547": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0116 03:35:58.522989  506075 stop.go:39] StopHost: no-preload-666547
	I0116 03:35:58.523686  506075 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:35:58.523784  506075 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:35:58.540864  506075 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45879
	I0116 03:35:58.541418  506075 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:35:58.542234  506075 main.go:141] libmachine: Using API Version  1
	I0116 03:35:58.542266  506075 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:35:58.542704  506075 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:35:58.546778  506075 out.go:177] * Stopping node "no-preload-666547"  ...
	I0116 03:35:58.548525  506075 main.go:141] libmachine: Stopping "no-preload-666547"...
	I0116 03:35:58.548558  506075 main.go:141] libmachine: (no-preload-666547) Calling .GetState
	I0116 03:35:58.551250  506075 main.go:141] libmachine: (no-preload-666547) Calling .Stop
	I0116 03:35:58.555368  506075 main.go:141] libmachine: (no-preload-666547) Waiting for machine to stop 0/60
	I0116 03:35:59.557225  506075 main.go:141] libmachine: (no-preload-666547) Waiting for machine to stop 1/60
	I0116 03:36:00.558961  506075 main.go:141] libmachine: (no-preload-666547) Waiting for machine to stop 2/60
	I0116 03:36:01.560589  506075 main.go:141] libmachine: (no-preload-666547) Waiting for machine to stop 3/60
	I0116 03:36:02.562746  506075 main.go:141] libmachine: (no-preload-666547) Waiting for machine to stop 4/60
	I0116 03:36:03.564678  506075 main.go:141] libmachine: (no-preload-666547) Waiting for machine to stop 5/60
	I0116 03:36:04.566095  506075 main.go:141] libmachine: (no-preload-666547) Waiting for machine to stop 6/60
	I0116 03:36:05.567885  506075 main.go:141] libmachine: (no-preload-666547) Waiting for machine to stop 7/60
	I0116 03:36:06.569331  506075 main.go:141] libmachine: (no-preload-666547) Waiting for machine to stop 8/60
	I0116 03:36:07.570829  506075 main.go:141] libmachine: (no-preload-666547) Waiting for machine to stop 9/60
	I0116 03:36:08.573069  506075 main.go:141] libmachine: (no-preload-666547) Waiting for machine to stop 10/60
	I0116 03:36:09.574599  506075 main.go:141] libmachine: (no-preload-666547) Waiting for machine to stop 11/60
	I0116 03:36:10.575910  506075 main.go:141] libmachine: (no-preload-666547) Waiting for machine to stop 12/60
	I0116 03:36:11.577526  506075 main.go:141] libmachine: (no-preload-666547) Waiting for machine to stop 13/60
	I0116 03:36:12.579599  506075 main.go:141] libmachine: (no-preload-666547) Waiting for machine to stop 14/60
	I0116 03:36:13.581355  506075 main.go:141] libmachine: (no-preload-666547) Waiting for machine to stop 15/60
	I0116 03:36:14.583404  506075 main.go:141] libmachine: (no-preload-666547) Waiting for machine to stop 16/60
	I0116 03:36:15.585080  506075 main.go:141] libmachine: (no-preload-666547) Waiting for machine to stop 17/60
	I0116 03:36:16.586713  506075 main.go:141] libmachine: (no-preload-666547) Waiting for machine to stop 18/60
	I0116 03:36:17.588503  506075 main.go:141] libmachine: (no-preload-666547) Waiting for machine to stop 19/60
	I0116 03:36:18.591367  506075 main.go:141] libmachine: (no-preload-666547) Waiting for machine to stop 20/60
	I0116 03:36:19.592960  506075 main.go:141] libmachine: (no-preload-666547) Waiting for machine to stop 21/60
	I0116 03:36:20.594632  506075 main.go:141] libmachine: (no-preload-666547) Waiting for machine to stop 22/60
	I0116 03:36:21.596143  506075 main.go:141] libmachine: (no-preload-666547) Waiting for machine to stop 23/60
	I0116 03:36:22.597617  506075 main.go:141] libmachine: (no-preload-666547) Waiting for machine to stop 24/60
	I0116 03:36:23.599861  506075 main.go:141] libmachine: (no-preload-666547) Waiting for machine to stop 25/60
	I0116 03:36:24.601646  506075 main.go:141] libmachine: (no-preload-666547) Waiting for machine to stop 26/60
	I0116 03:36:25.603236  506075 main.go:141] libmachine: (no-preload-666547) Waiting for machine to stop 27/60
	I0116 03:36:26.605975  506075 main.go:141] libmachine: (no-preload-666547) Waiting for machine to stop 28/60
	I0116 03:36:27.608359  506075 main.go:141] libmachine: (no-preload-666547) Waiting for machine to stop 29/60
	I0116 03:36:28.609909  506075 main.go:141] libmachine: (no-preload-666547) Waiting for machine to stop 30/60
	I0116 03:36:29.611921  506075 main.go:141] libmachine: (no-preload-666547) Waiting for machine to stop 31/60
	I0116 03:36:30.613644  506075 main.go:141] libmachine: (no-preload-666547) Waiting for machine to stop 32/60
	I0116 03:36:31.615251  506075 main.go:141] libmachine: (no-preload-666547) Waiting for machine to stop 33/60
	I0116 03:36:32.617005  506075 main.go:141] libmachine: (no-preload-666547) Waiting for machine to stop 34/60
	I0116 03:36:33.619193  506075 main.go:141] libmachine: (no-preload-666547) Waiting for machine to stop 35/60
	I0116 03:36:34.620674  506075 main.go:141] libmachine: (no-preload-666547) Waiting for machine to stop 36/60
	I0116 03:36:35.622334  506075 main.go:141] libmachine: (no-preload-666547) Waiting for machine to stop 37/60
	I0116 03:36:36.623819  506075 main.go:141] libmachine: (no-preload-666547) Waiting for machine to stop 38/60
	I0116 03:36:37.626344  506075 main.go:141] libmachine: (no-preload-666547) Waiting for machine to stop 39/60
	I0116 03:36:38.628132  506075 main.go:141] libmachine: (no-preload-666547) Waiting for machine to stop 40/60
	I0116 03:36:39.629664  506075 main.go:141] libmachine: (no-preload-666547) Waiting for machine to stop 41/60
	I0116 03:36:40.631340  506075 main.go:141] libmachine: (no-preload-666547) Waiting for machine to stop 42/60
	I0116 03:36:41.632958  506075 main.go:141] libmachine: (no-preload-666547) Waiting for machine to stop 43/60
	I0116 03:36:42.634552  506075 main.go:141] libmachine: (no-preload-666547) Waiting for machine to stop 44/60
	I0116 03:36:43.637027  506075 main.go:141] libmachine: (no-preload-666547) Waiting for machine to stop 45/60
	I0116 03:36:44.638808  506075 main.go:141] libmachine: (no-preload-666547) Waiting for machine to stop 46/60
	I0116 03:36:45.640344  506075 main.go:141] libmachine: (no-preload-666547) Waiting for machine to stop 47/60
	I0116 03:36:46.641947  506075 main.go:141] libmachine: (no-preload-666547) Waiting for machine to stop 48/60
	I0116 03:36:47.643816  506075 main.go:141] libmachine: (no-preload-666547) Waiting for machine to stop 49/60
	I0116 03:36:48.646559  506075 main.go:141] libmachine: (no-preload-666547) Waiting for machine to stop 50/60
	I0116 03:36:49.648195  506075 main.go:141] libmachine: (no-preload-666547) Waiting for machine to stop 51/60
	I0116 03:36:50.649899  506075 main.go:141] libmachine: (no-preload-666547) Waiting for machine to stop 52/60
	I0116 03:36:51.651494  506075 main.go:141] libmachine: (no-preload-666547) Waiting for machine to stop 53/60
	I0116 03:36:52.653214  506075 main.go:141] libmachine: (no-preload-666547) Waiting for machine to stop 54/60
	I0116 03:36:53.655442  506075 main.go:141] libmachine: (no-preload-666547) Waiting for machine to stop 55/60
	I0116 03:36:54.657111  506075 main.go:141] libmachine: (no-preload-666547) Waiting for machine to stop 56/60
	I0116 03:36:55.658914  506075 main.go:141] libmachine: (no-preload-666547) Waiting for machine to stop 57/60
	I0116 03:36:56.660799  506075 main.go:141] libmachine: (no-preload-666547) Waiting for machine to stop 58/60
	I0116 03:36:57.662787  506075 main.go:141] libmachine: (no-preload-666547) Waiting for machine to stop 59/60
	I0116 03:36:58.663447  506075 stop.go:59] stop err: unable to stop vm, current state "Running"
	W0116 03:36:58.663530  506075 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0116 03:36:58.663566  506075 retry.go:31] will retry after 777.320276ms: Temporary Error: stop: unable to stop vm, current state "Running"
	I0116 03:36:59.441460  506075 stop.go:39] StopHost: no-preload-666547
	I0116 03:36:59.441866  506075 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:36:59.441921  506075 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:36:59.457358  506075 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39333
	I0116 03:36:59.457894  506075 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:36:59.458460  506075 main.go:141] libmachine: Using API Version  1
	I0116 03:36:59.458495  506075 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:36:59.458819  506075 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:36:59.461235  506075 out.go:177] * Stopping node "no-preload-666547"  ...
	I0116 03:36:59.463020  506075 main.go:141] libmachine: Stopping "no-preload-666547"...
	I0116 03:36:59.463045  506075 main.go:141] libmachine: (no-preload-666547) Calling .GetState
	I0116 03:36:59.465035  506075 main.go:141] libmachine: (no-preload-666547) Calling .Stop
	I0116 03:36:59.469111  506075 main.go:141] libmachine: (no-preload-666547) Waiting for machine to stop 0/60
	I0116 03:37:00.470734  506075 main.go:141] libmachine: (no-preload-666547) Waiting for machine to stop 1/60
	I0116 03:37:01.473138  506075 main.go:141] libmachine: (no-preload-666547) Waiting for machine to stop 2/60
	I0116 03:37:02.474755  506075 main.go:141] libmachine: (no-preload-666547) Waiting for machine to stop 3/60
	I0116 03:37:03.476360  506075 main.go:141] libmachine: (no-preload-666547) Waiting for machine to stop 4/60
	I0116 03:37:04.478349  506075 main.go:141] libmachine: (no-preload-666547) Waiting for machine to stop 5/60
	I0116 03:37:05.480931  506075 main.go:141] libmachine: (no-preload-666547) Waiting for machine to stop 6/60
	I0116 03:37:06.482594  506075 main.go:141] libmachine: (no-preload-666547) Waiting for machine to stop 7/60
	I0116 03:37:07.484376  506075 main.go:141] libmachine: (no-preload-666547) Waiting for machine to stop 8/60
	I0116 03:37:08.486087  506075 main.go:141] libmachine: (no-preload-666547) Waiting for machine to stop 9/60
	I0116 03:37:09.488337  506075 main.go:141] libmachine: (no-preload-666547) Waiting for machine to stop 10/60
	I0116 03:37:10.490149  506075 main.go:141] libmachine: (no-preload-666547) Waiting for machine to stop 11/60
	I0116 03:37:11.491639  506075 main.go:141] libmachine: (no-preload-666547) Waiting for machine to stop 12/60
	I0116 03:37:12.493221  506075 main.go:141] libmachine: (no-preload-666547) Waiting for machine to stop 13/60
	I0116 03:37:13.494763  506075 main.go:141] libmachine: (no-preload-666547) Waiting for machine to stop 14/60
	I0116 03:37:14.496789  506075 main.go:141] libmachine: (no-preload-666547) Waiting for machine to stop 15/60
	I0116 03:37:15.498526  506075 main.go:141] libmachine: (no-preload-666547) Waiting for machine to stop 16/60
	I0116 03:37:16.499945  506075 main.go:141] libmachine: (no-preload-666547) Waiting for machine to stop 17/60
	I0116 03:37:17.501689  506075 main.go:141] libmachine: (no-preload-666547) Waiting for machine to stop 18/60
	I0116 03:37:18.503321  506075 main.go:141] libmachine: (no-preload-666547) Waiting for machine to stop 19/60
	I0116 03:37:19.505097  506075 main.go:141] libmachine: (no-preload-666547) Waiting for machine to stop 20/60
	I0116 03:37:20.506723  506075 main.go:141] libmachine: (no-preload-666547) Waiting for machine to stop 21/60
	I0116 03:37:21.508301  506075 main.go:141] libmachine: (no-preload-666547) Waiting for machine to stop 22/60
	I0116 03:37:22.509894  506075 main.go:141] libmachine: (no-preload-666547) Waiting for machine to stop 23/60
	I0116 03:37:23.511489  506075 main.go:141] libmachine: (no-preload-666547) Waiting for machine to stop 24/60
	I0116 03:37:24.513426  506075 main.go:141] libmachine: (no-preload-666547) Waiting for machine to stop 25/60
	I0116 03:37:25.514905  506075 main.go:141] libmachine: (no-preload-666547) Waiting for machine to stop 26/60
	I0116 03:37:26.516654  506075 main.go:141] libmachine: (no-preload-666547) Waiting for machine to stop 27/60
	I0116 03:37:27.519148  506075 main.go:141] libmachine: (no-preload-666547) Waiting for machine to stop 28/60
	I0116 03:37:28.520830  506075 main.go:141] libmachine: (no-preload-666547) Waiting for machine to stop 29/60
	I0116 03:37:29.522743  506075 main.go:141] libmachine: (no-preload-666547) Waiting for machine to stop 30/60
	I0116 03:37:30.524155  506075 main.go:141] libmachine: (no-preload-666547) Waiting for machine to stop 31/60
	I0116 03:37:31.525608  506075 main.go:141] libmachine: (no-preload-666547) Waiting for machine to stop 32/60
	I0116 03:37:32.527111  506075 main.go:141] libmachine: (no-preload-666547) Waiting for machine to stop 33/60
	I0116 03:37:33.528486  506075 main.go:141] libmachine: (no-preload-666547) Waiting for machine to stop 34/60
	I0116 03:37:34.530787  506075 main.go:141] libmachine: (no-preload-666547) Waiting for machine to stop 35/60
	I0116 03:37:35.532323  506075 main.go:141] libmachine: (no-preload-666547) Waiting for machine to stop 36/60
	I0116 03:37:36.533787  506075 main.go:141] libmachine: (no-preload-666547) Waiting for machine to stop 37/60
	I0116 03:37:37.535409  506075 main.go:141] libmachine: (no-preload-666547) Waiting for machine to stop 38/60
	I0116 03:37:38.536896  506075 main.go:141] libmachine: (no-preload-666547) Waiting for machine to stop 39/60
	I0116 03:37:39.539552  506075 main.go:141] libmachine: (no-preload-666547) Waiting for machine to stop 40/60
	I0116 03:37:40.541224  506075 main.go:141] libmachine: (no-preload-666547) Waiting for machine to stop 41/60
	I0116 03:37:41.543009  506075 main.go:141] libmachine: (no-preload-666547) Waiting for machine to stop 42/60
	I0116 03:37:42.544473  506075 main.go:141] libmachine: (no-preload-666547) Waiting for machine to stop 43/60
	I0116 03:37:43.546153  506075 main.go:141] libmachine: (no-preload-666547) Waiting for machine to stop 44/60
	I0116 03:37:44.548800  506075 main.go:141] libmachine: (no-preload-666547) Waiting for machine to stop 45/60
	I0116 03:37:45.550274  506075 main.go:141] libmachine: (no-preload-666547) Waiting for machine to stop 46/60
	I0116 03:37:46.551703  506075 main.go:141] libmachine: (no-preload-666547) Waiting for machine to stop 47/60
	I0116 03:37:47.553184  506075 main.go:141] libmachine: (no-preload-666547) Waiting for machine to stop 48/60
	I0116 03:37:48.555373  506075 main.go:141] libmachine: (no-preload-666547) Waiting for machine to stop 49/60
	I0116 03:37:49.557041  506075 main.go:141] libmachine: (no-preload-666547) Waiting for machine to stop 50/60
	I0116 03:37:50.558617  506075 main.go:141] libmachine: (no-preload-666547) Waiting for machine to stop 51/60
	I0116 03:37:51.560243  506075 main.go:141] libmachine: (no-preload-666547) Waiting for machine to stop 52/60
	I0116 03:37:52.561868  506075 main.go:141] libmachine: (no-preload-666547) Waiting for machine to stop 53/60
	I0116 03:37:53.563556  506075 main.go:141] libmachine: (no-preload-666547) Waiting for machine to stop 54/60
	I0116 03:37:54.565629  506075 main.go:141] libmachine: (no-preload-666547) Waiting for machine to stop 55/60
	I0116 03:37:55.567282  506075 main.go:141] libmachine: (no-preload-666547) Waiting for machine to stop 56/60
	I0116 03:37:56.568896  506075 main.go:141] libmachine: (no-preload-666547) Waiting for machine to stop 57/60
	I0116 03:37:57.570472  506075 main.go:141] libmachine: (no-preload-666547) Waiting for machine to stop 58/60
	I0116 03:37:58.571955  506075 main.go:141] libmachine: (no-preload-666547) Waiting for machine to stop 59/60
	I0116 03:37:59.573355  506075 stop.go:59] stop err: unable to stop vm, current state "Running"
	W0116 03:37:59.573414  506075 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0116 03:37:59.575731  506075 out.go:177] 
	W0116 03:37:59.577268  506075 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0116 03:37:59.577288  506075 out.go:239] * 
	* 
	W0116 03:37:59.580469  506075 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0116 03:37:59.582285  506075 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p no-preload-666547 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-666547 -n no-preload-666547
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-666547 -n no-preload-666547: exit status 3 (18.555501574s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0116 03:38:18.140489  507013 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.103:22: connect: no route to host
	E0116 03:38:18.140508  507013 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.103:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-666547" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (139.69s)
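
Each of the four Stop failures follows the sequence visible in full in the trace above: one Stop call to the kvm2 driver, a one-second poll of the machine state for up to 60 attempts, a single retry of the whole stop after a sub-second backoff, and finally exit status 82 with GUEST_STOP_TIMEOUT because the guest still reports "Running". The Go sketch below mirrors that flow against a hypothetical VM interface; it is an illustration of the logged behaviour, not minikube's libmachine code.

package main

import (
	"fmt"
	"time"
)

// VM is a stand-in for the driver handle; the real code goes through
// libmachine's RPC plugin (the ".Stop" / ".GetState" calls in the trace).
type VM interface {
	Stop() error            // ask the hypervisor to stop the guest
	State() (string, error) // e.g. "Running" or "Stopped"
}

// stopAndWait issues Stop and then polls once per second, matching the
// "Waiting for machine to stop i/60" lines in the trace above.
func stopAndWait(vm VM, attempts int) error {
	if err := vm.Stop(); err != nil {
		return err
	}
	for i := 0; i < attempts; i++ {
		state, err := vm.State()
		if err != nil {
			return err
		}
		if state != "Running" {
			return nil
		}
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, attempts)
		time.Sleep(time.Second)
	}
	return fmt.Errorf("unable to stop vm, current state %q", "Running")
}

// stopHost retries the stop once after a short backoff (the trace shows a
// ~0.6-0.9s retry delay) and then gives up with a GUEST_STOP_TIMEOUT-style
// error, which is what produces exit status 82.
func stopHost(vm VM, attempts int) error {
	if err := stopAndWait(vm, attempts); err == nil {
		return nil
	}
	time.Sleep(800 * time.Millisecond)
	if err := stopAndWait(vm, attempts); err != nil {
		return fmt.Errorf("GUEST_STOP_TIMEOUT: Unable to stop VM: %w", err)
	}
	return nil
}

// fakeVM never leaves "Running", like the failing guests in this run.
type fakeVM struct{}

func (fakeVM) Stop() error            { return nil }
func (fakeVM) State() (string, error) { return "Running", nil }

func main() {
	// 3 attempts keeps the demo quick; the logged flow uses 60.
	if err := stopHost(fakeVM{}, 3); err != nil {
		fmt.Println("X Exiting due to", err)
	}
}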

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (139.98s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-696770 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p old-k8s-version-696770 --alsologtostderr -v=3: exit status 82 (2m1.331555806s)

                                                
                                                
-- stdout --
	* Stopping node "old-k8s-version-696770"  ...
	* Stopping node "old-k8s-version-696770"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0116 03:36:17.180365  506203 out.go:296] Setting OutFile to fd 1 ...
	I0116 03:36:17.180640  506203 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 03:36:17.180651  506203 out.go:309] Setting ErrFile to fd 2...
	I0116 03:36:17.180656  506203 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 03:36:17.180891  506203 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17965-468241/.minikube/bin
	I0116 03:36:17.181204  506203 out.go:303] Setting JSON to false
	I0116 03:36:17.181312  506203 mustload.go:65] Loading cluster: old-k8s-version-696770
	I0116 03:36:17.181692  506203 config.go:182] Loaded profile config "old-k8s-version-696770": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0116 03:36:17.181778  506203 profile.go:148] Saving config to /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/old-k8s-version-696770/config.json ...
	I0116 03:36:17.181977  506203 mustload.go:65] Loading cluster: old-k8s-version-696770
	I0116 03:36:17.182121  506203 config.go:182] Loaded profile config "old-k8s-version-696770": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0116 03:36:17.182158  506203 stop.go:39] StopHost: old-k8s-version-696770
	I0116 03:36:17.182671  506203 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:36:17.182742  506203 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:36:17.198366  506203 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43803
	I0116 03:36:17.198897  506203 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:36:17.199608  506203 main.go:141] libmachine: Using API Version  1
	I0116 03:36:17.199639  506203 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:36:17.200014  506203 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:36:17.202731  506203 out.go:177] * Stopping node "old-k8s-version-696770"  ...
	I0116 03:36:17.204382  506203 main.go:141] libmachine: Stopping "old-k8s-version-696770"...
	I0116 03:36:17.204408  506203 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetState
	I0116 03:36:17.206418  506203 main.go:141] libmachine: (old-k8s-version-696770) Calling .Stop
	I0116 03:36:17.209903  506203 main.go:141] libmachine: (old-k8s-version-696770) Waiting for machine to stop 0/60
	I0116 03:36:18.211476  506203 main.go:141] libmachine: (old-k8s-version-696770) Waiting for machine to stop 1/60
	I0116 03:36:19.213090  506203 main.go:141] libmachine: (old-k8s-version-696770) Waiting for machine to stop 2/60
	I0116 03:36:20.214577  506203 main.go:141] libmachine: (old-k8s-version-696770) Waiting for machine to stop 3/60
	I0116 03:36:21.216484  506203 main.go:141] libmachine: (old-k8s-version-696770) Waiting for machine to stop 4/60
	I0116 03:36:22.218874  506203 main.go:141] libmachine: (old-k8s-version-696770) Waiting for machine to stop 5/60
	I0116 03:36:23.220563  506203 main.go:141] libmachine: (old-k8s-version-696770) Waiting for machine to stop 6/60
	I0116 03:36:24.222422  506203 main.go:141] libmachine: (old-k8s-version-696770) Waiting for machine to stop 7/60
	I0116 03:36:25.224118  506203 main.go:141] libmachine: (old-k8s-version-696770) Waiting for machine to stop 8/60
	I0116 03:36:26.225712  506203 main.go:141] libmachine: (old-k8s-version-696770) Waiting for machine to stop 9/60
	I0116 03:36:27.227556  506203 main.go:141] libmachine: (old-k8s-version-696770) Waiting for machine to stop 10/60
	I0116 03:36:28.555908  506203 main.go:141] libmachine: (old-k8s-version-696770) Waiting for machine to stop 11/60
	I0116 03:36:29.559150  506203 main.go:141] libmachine: (old-k8s-version-696770) Waiting for machine to stop 12/60
	I0116 03:36:30.560739  506203 main.go:141] libmachine: (old-k8s-version-696770) Waiting for machine to stop 13/60
	I0116 03:36:31.562711  506203 main.go:141] libmachine: (old-k8s-version-696770) Waiting for machine to stop 14/60
	I0116 03:36:32.565436  506203 main.go:141] libmachine: (old-k8s-version-696770) Waiting for machine to stop 15/60
	I0116 03:36:33.567001  506203 main.go:141] libmachine: (old-k8s-version-696770) Waiting for machine to stop 16/60
	I0116 03:36:34.568529  506203 main.go:141] libmachine: (old-k8s-version-696770) Waiting for machine to stop 17/60
	I0116 03:36:35.570163  506203 main.go:141] libmachine: (old-k8s-version-696770) Waiting for machine to stop 18/60
	I0116 03:36:36.571502  506203 main.go:141] libmachine: (old-k8s-version-696770) Waiting for machine to stop 19/60
	I0116 03:36:37.573103  506203 main.go:141] libmachine: (old-k8s-version-696770) Waiting for machine to stop 20/60
	I0116 03:36:38.575030  506203 main.go:141] libmachine: (old-k8s-version-696770) Waiting for machine to stop 21/60
	I0116 03:36:39.576887  506203 main.go:141] libmachine: (old-k8s-version-696770) Waiting for machine to stop 22/60
	I0116 03:36:40.578529  506203 main.go:141] libmachine: (old-k8s-version-696770) Waiting for machine to stop 23/60
	I0116 03:36:41.580361  506203 main.go:141] libmachine: (old-k8s-version-696770) Waiting for machine to stop 24/60
	I0116 03:36:42.582691  506203 main.go:141] libmachine: (old-k8s-version-696770) Waiting for machine to stop 25/60
	I0116 03:36:43.584385  506203 main.go:141] libmachine: (old-k8s-version-696770) Waiting for machine to stop 26/60
	I0116 03:36:44.587112  506203 main.go:141] libmachine: (old-k8s-version-696770) Waiting for machine to stop 27/60
	I0116 03:36:45.588680  506203 main.go:141] libmachine: (old-k8s-version-696770) Waiting for machine to stop 28/60
	I0116 03:36:46.590512  506203 main.go:141] libmachine: (old-k8s-version-696770) Waiting for machine to stop 29/60
	I0116 03:36:47.592996  506203 main.go:141] libmachine: (old-k8s-version-696770) Waiting for machine to stop 30/60
	I0116 03:36:48.594740  506203 main.go:141] libmachine: (old-k8s-version-696770) Waiting for machine to stop 31/60
	I0116 03:36:49.596723  506203 main.go:141] libmachine: (old-k8s-version-696770) Waiting for machine to stop 32/60
	I0116 03:36:50.598375  506203 main.go:141] libmachine: (old-k8s-version-696770) Waiting for machine to stop 33/60
	I0116 03:36:51.599918  506203 main.go:141] libmachine: (old-k8s-version-696770) Waiting for machine to stop 34/60
	I0116 03:36:52.601770  506203 main.go:141] libmachine: (old-k8s-version-696770) Waiting for machine to stop 35/60
	I0116 03:36:53.603021  506203 main.go:141] libmachine: (old-k8s-version-696770) Waiting for machine to stop 36/60
	I0116 03:36:54.604746  506203 main.go:141] libmachine: (old-k8s-version-696770) Waiting for machine to stop 37/60
	I0116 03:36:55.606807  506203 main.go:141] libmachine: (old-k8s-version-696770) Waiting for machine to stop 38/60
	I0116 03:36:56.608440  506203 main.go:141] libmachine: (old-k8s-version-696770) Waiting for machine to stop 39/60
	I0116 03:36:57.611064  506203 main.go:141] libmachine: (old-k8s-version-696770) Waiting for machine to stop 40/60
	I0116 03:36:58.612938  506203 main.go:141] libmachine: (old-k8s-version-696770) Waiting for machine to stop 41/60
	I0116 03:36:59.614696  506203 main.go:141] libmachine: (old-k8s-version-696770) Waiting for machine to stop 42/60
	I0116 03:37:00.616329  506203 main.go:141] libmachine: (old-k8s-version-696770) Waiting for machine to stop 43/60
	I0116 03:37:01.619007  506203 main.go:141] libmachine: (old-k8s-version-696770) Waiting for machine to stop 44/60
	I0116 03:37:02.621056  506203 main.go:141] libmachine: (old-k8s-version-696770) Waiting for machine to stop 45/60
	I0116 03:37:03.623093  506203 main.go:141] libmachine: (old-k8s-version-696770) Waiting for machine to stop 46/60
	I0116 03:37:04.625211  506203 main.go:141] libmachine: (old-k8s-version-696770) Waiting for machine to stop 47/60
	I0116 03:37:05.626771  506203 main.go:141] libmachine: (old-k8s-version-696770) Waiting for machine to stop 48/60
	I0116 03:37:06.628382  506203 main.go:141] libmachine: (old-k8s-version-696770) Waiting for machine to stop 49/60
	I0116 03:37:07.630936  506203 main.go:141] libmachine: (old-k8s-version-696770) Waiting for machine to stop 50/60
	I0116 03:37:08.632637  506203 main.go:141] libmachine: (old-k8s-version-696770) Waiting for machine to stop 51/60
	I0116 03:37:09.634154  506203 main.go:141] libmachine: (old-k8s-version-696770) Waiting for machine to stop 52/60
	I0116 03:37:10.635683  506203 main.go:141] libmachine: (old-k8s-version-696770) Waiting for machine to stop 53/60
	I0116 03:37:11.637371  506203 main.go:141] libmachine: (old-k8s-version-696770) Waiting for machine to stop 54/60
	I0116 03:37:12.639593  506203 main.go:141] libmachine: (old-k8s-version-696770) Waiting for machine to stop 55/60
	I0116 03:37:13.641587  506203 main.go:141] libmachine: (old-k8s-version-696770) Waiting for machine to stop 56/60
	I0116 03:37:14.643260  506203 main.go:141] libmachine: (old-k8s-version-696770) Waiting for machine to stop 57/60
	I0116 03:37:15.644578  506203 main.go:141] libmachine: (old-k8s-version-696770) Waiting for machine to stop 58/60
	I0116 03:37:16.646275  506203 main.go:141] libmachine: (old-k8s-version-696770) Waiting for machine to stop 59/60
	I0116 03:37:17.646851  506203 stop.go:59] stop err: unable to stop vm, current state "Running"
	W0116 03:37:17.646906  506203 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0116 03:37:17.646929  506203 retry.go:31] will retry after 646.213467ms: Temporary Error: stop: unable to stop vm, current state "Running"
	I0116 03:37:18.293769  506203 stop.go:39] StopHost: old-k8s-version-696770
	I0116 03:37:18.294195  506203 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:37:18.294258  506203 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:37:18.310060  506203 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34615
	I0116 03:37:18.310591  506203 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:37:18.311248  506203 main.go:141] libmachine: Using API Version  1
	I0116 03:37:18.311282  506203 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:37:18.311758  506203 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:37:18.314118  506203 out.go:177] * Stopping node "old-k8s-version-696770"  ...
	I0116 03:37:18.315456  506203 main.go:141] libmachine: Stopping "old-k8s-version-696770"...
	I0116 03:37:18.315472  506203 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetState
	I0116 03:37:18.317191  506203 main.go:141] libmachine: (old-k8s-version-696770) Calling .Stop
	I0116 03:37:18.320859  506203 main.go:141] libmachine: (old-k8s-version-696770) Waiting for machine to stop 0/60
	I0116 03:37:19.322286  506203 main.go:141] libmachine: (old-k8s-version-696770) Waiting for machine to stop 1/60
	I0116 03:37:20.323794  506203 main.go:141] libmachine: (old-k8s-version-696770) Waiting for machine to stop 2/60
	I0116 03:37:21.325504  506203 main.go:141] libmachine: (old-k8s-version-696770) Waiting for machine to stop 3/60
	I0116 03:37:22.327288  506203 main.go:141] libmachine: (old-k8s-version-696770) Waiting for machine to stop 4/60
	I0116 03:37:23.329120  506203 main.go:141] libmachine: (old-k8s-version-696770) Waiting for machine to stop 5/60
	I0116 03:37:24.330788  506203 main.go:141] libmachine: (old-k8s-version-696770) Waiting for machine to stop 6/60
	I0116 03:37:25.332874  506203 main.go:141] libmachine: (old-k8s-version-696770) Waiting for machine to stop 7/60
	I0116 03:37:26.334360  506203 main.go:141] libmachine: (old-k8s-version-696770) Waiting for machine to stop 8/60
	I0116 03:37:27.335681  506203 main.go:141] libmachine: (old-k8s-version-696770) Waiting for machine to stop 9/60
	I0116 03:37:28.337808  506203 main.go:141] libmachine: (old-k8s-version-696770) Waiting for machine to stop 10/60
	I0116 03:37:29.339728  506203 main.go:141] libmachine: (old-k8s-version-696770) Waiting for machine to stop 11/60
	I0116 03:37:30.341487  506203 main.go:141] libmachine: (old-k8s-version-696770) Waiting for machine to stop 12/60
	I0116 03:37:31.343137  506203 main.go:141] libmachine: (old-k8s-version-696770) Waiting for machine to stop 13/60
	I0116 03:37:32.344502  506203 main.go:141] libmachine: (old-k8s-version-696770) Waiting for machine to stop 14/60
	I0116 03:37:33.346606  506203 main.go:141] libmachine: (old-k8s-version-696770) Waiting for machine to stop 15/60
	I0116 03:37:34.348354  506203 main.go:141] libmachine: (old-k8s-version-696770) Waiting for machine to stop 16/60
	I0116 03:37:35.349790  506203 main.go:141] libmachine: (old-k8s-version-696770) Waiting for machine to stop 17/60
	I0116 03:37:36.351256  506203 main.go:141] libmachine: (old-k8s-version-696770) Waiting for machine to stop 18/60
	I0116 03:37:37.353600  506203 main.go:141] libmachine: (old-k8s-version-696770) Waiting for machine to stop 19/60
	I0116 03:37:38.355652  506203 main.go:141] libmachine: (old-k8s-version-696770) Waiting for machine to stop 20/60
	I0116 03:37:39.357108  506203 main.go:141] libmachine: (old-k8s-version-696770) Waiting for machine to stop 21/60
	I0116 03:37:40.358594  506203 main.go:141] libmachine: (old-k8s-version-696770) Waiting for machine to stop 22/60
	I0116 03:37:41.360093  506203 main.go:141] libmachine: (old-k8s-version-696770) Waiting for machine to stop 23/60
	I0116 03:37:42.361599  506203 main.go:141] libmachine: (old-k8s-version-696770) Waiting for machine to stop 24/60
	I0116 03:37:43.363616  506203 main.go:141] libmachine: (old-k8s-version-696770) Waiting for machine to stop 25/60
	I0116 03:37:44.365157  506203 main.go:141] libmachine: (old-k8s-version-696770) Waiting for machine to stop 26/60
	I0116 03:37:45.366797  506203 main.go:141] libmachine: (old-k8s-version-696770) Waiting for machine to stop 27/60
	I0116 03:37:46.368570  506203 main.go:141] libmachine: (old-k8s-version-696770) Waiting for machine to stop 28/60
	I0116 03:37:47.370041  506203 main.go:141] libmachine: (old-k8s-version-696770) Waiting for machine to stop 29/60
	I0116 03:37:48.372154  506203 main.go:141] libmachine: (old-k8s-version-696770) Waiting for machine to stop 30/60
	I0116 03:37:49.373619  506203 main.go:141] libmachine: (old-k8s-version-696770) Waiting for machine to stop 31/60
	I0116 03:37:50.375239  506203 main.go:141] libmachine: (old-k8s-version-696770) Waiting for machine to stop 32/60
	I0116 03:37:51.376811  506203 main.go:141] libmachine: (old-k8s-version-696770) Waiting for machine to stop 33/60
	I0116 03:37:52.378372  506203 main.go:141] libmachine: (old-k8s-version-696770) Waiting for machine to stop 34/60
	I0116 03:37:53.380340  506203 main.go:141] libmachine: (old-k8s-version-696770) Waiting for machine to stop 35/60
	I0116 03:37:54.382027  506203 main.go:141] libmachine: (old-k8s-version-696770) Waiting for machine to stop 36/60
	I0116 03:37:55.384141  506203 main.go:141] libmachine: (old-k8s-version-696770) Waiting for machine to stop 37/60
	I0116 03:37:56.385472  506203 main.go:141] libmachine: (old-k8s-version-696770) Waiting for machine to stop 38/60
	I0116 03:37:57.386922  506203 main.go:141] libmachine: (old-k8s-version-696770) Waiting for machine to stop 39/60
	I0116 03:37:58.389087  506203 main.go:141] libmachine: (old-k8s-version-696770) Waiting for machine to stop 40/60
	I0116 03:37:59.390489  506203 main.go:141] libmachine: (old-k8s-version-696770) Waiting for machine to stop 41/60
	I0116 03:38:00.392093  506203 main.go:141] libmachine: (old-k8s-version-696770) Waiting for machine to stop 42/60
	I0116 03:38:01.393507  506203 main.go:141] libmachine: (old-k8s-version-696770) Waiting for machine to stop 43/60
	I0116 03:38:02.395059  506203 main.go:141] libmachine: (old-k8s-version-696770) Waiting for machine to stop 44/60
	I0116 03:38:03.397053  506203 main.go:141] libmachine: (old-k8s-version-696770) Waiting for machine to stop 45/60
	I0116 03:38:04.398387  506203 main.go:141] libmachine: (old-k8s-version-696770) Waiting for machine to stop 46/60
	I0116 03:38:05.399821  506203 main.go:141] libmachine: (old-k8s-version-696770) Waiting for machine to stop 47/60
	I0116 03:38:06.401260  506203 main.go:141] libmachine: (old-k8s-version-696770) Waiting for machine to stop 48/60
	I0116 03:38:07.402869  506203 main.go:141] libmachine: (old-k8s-version-696770) Waiting for machine to stop 49/60
	I0116 03:38:08.404913  506203 main.go:141] libmachine: (old-k8s-version-696770) Waiting for machine to stop 50/60
	I0116 03:38:09.406423  506203 main.go:141] libmachine: (old-k8s-version-696770) Waiting for machine to stop 51/60
	I0116 03:38:10.407972  506203 main.go:141] libmachine: (old-k8s-version-696770) Waiting for machine to stop 52/60
	I0116 03:38:11.409593  506203 main.go:141] libmachine: (old-k8s-version-696770) Waiting for machine to stop 53/60
	I0116 03:38:12.411102  506203 main.go:141] libmachine: (old-k8s-version-696770) Waiting for machine to stop 54/60
	I0116 03:38:13.413165  506203 main.go:141] libmachine: (old-k8s-version-696770) Waiting for machine to stop 55/60
	I0116 03:38:14.414741  506203 main.go:141] libmachine: (old-k8s-version-696770) Waiting for machine to stop 56/60
	I0116 03:38:15.416324  506203 main.go:141] libmachine: (old-k8s-version-696770) Waiting for machine to stop 57/60
	I0116 03:38:16.418506  506203 main.go:141] libmachine: (old-k8s-version-696770) Waiting for machine to stop 58/60
	I0116 03:38:17.420071  506203 main.go:141] libmachine: (old-k8s-version-696770) Waiting for machine to stop 59/60
	I0116 03:38:18.421074  506203 stop.go:59] stop err: unable to stop vm, current state "Running"
	W0116 03:38:18.421134  506203 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0116 03:38:18.423350  506203 out.go:177] 
	W0116 03:38:18.424882  506203 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0116 03:38:18.424910  506203 out.go:239] * 
	* 
	W0116 03:38:18.428104  506203 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0116 03:38:18.430829  506203 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p old-k8s-version-696770 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-696770 -n old-k8s-version-696770
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-696770 -n old-k8s-version-696770: exit status 3 (18.649814271s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0116 03:38:37.084396  507169 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.167:22: connect: no route to host
	E0116 03:38:37.084418  507169 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.167:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "old-k8s-version-696770" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Stop (139.98s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (139.91s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-434445 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p default-k8s-diff-port-434445 --alsologtostderr -v=3: exit status 82 (2m1.233520833s)

                                                
                                                
-- stdout --
	* Stopping node "default-k8s-diff-port-434445"  ...
	* Stopping node "default-k8s-diff-port-434445"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0116 03:37:44.283634  506932 out.go:296] Setting OutFile to fd 1 ...
	I0116 03:37:44.283813  506932 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 03:37:44.283825  506932 out.go:309] Setting ErrFile to fd 2...
	I0116 03:37:44.283830  506932 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 03:37:44.284084  506932 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17965-468241/.minikube/bin
	I0116 03:37:44.284377  506932 out.go:303] Setting JSON to false
	I0116 03:37:44.284470  506932 mustload.go:65] Loading cluster: default-k8s-diff-port-434445
	I0116 03:37:44.284843  506932 config.go:182] Loaded profile config "default-k8s-diff-port-434445": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 03:37:44.284919  506932 profile.go:148] Saving config to /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/default-k8s-diff-port-434445/config.json ...
	I0116 03:37:44.285087  506932 mustload.go:65] Loading cluster: default-k8s-diff-port-434445
	I0116 03:37:44.285200  506932 config.go:182] Loaded profile config "default-k8s-diff-port-434445": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 03:37:44.285240  506932 stop.go:39] StopHost: default-k8s-diff-port-434445
	I0116 03:37:44.285745  506932 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:37:44.285802  506932 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:37:44.300522  506932 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44263
	I0116 03:37:44.301035  506932 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:37:44.301668  506932 main.go:141] libmachine: Using API Version  1
	I0116 03:37:44.301697  506932 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:37:44.302080  506932 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:37:44.305536  506932 out.go:177] * Stopping node "default-k8s-diff-port-434445"  ...
	I0116 03:37:44.307171  506932 main.go:141] libmachine: Stopping "default-k8s-diff-port-434445"...
	I0116 03:37:44.307195  506932 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetState
	I0116 03:37:44.309167  506932 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .Stop
	I0116 03:37:44.313142  506932 main.go:141] libmachine: (default-k8s-diff-port-434445) Waiting for machine to stop 0/60
	I0116 03:37:45.314998  506932 main.go:141] libmachine: (default-k8s-diff-port-434445) Waiting for machine to stop 1/60
	I0116 03:37:46.316553  506932 main.go:141] libmachine: (default-k8s-diff-port-434445) Waiting for machine to stop 2/60
	I0116 03:37:47.317997  506932 main.go:141] libmachine: (default-k8s-diff-port-434445) Waiting for machine to stop 3/60
	I0116 03:37:48.319390  506932 main.go:141] libmachine: (default-k8s-diff-port-434445) Waiting for machine to stop 4/60
	I0116 03:37:49.321763  506932 main.go:141] libmachine: (default-k8s-diff-port-434445) Waiting for machine to stop 5/60
	I0116 03:37:50.323325  506932 main.go:141] libmachine: (default-k8s-diff-port-434445) Waiting for machine to stop 6/60
	I0116 03:37:51.325209  506932 main.go:141] libmachine: (default-k8s-diff-port-434445) Waiting for machine to stop 7/60
	I0116 03:37:52.326545  506932 main.go:141] libmachine: (default-k8s-diff-port-434445) Waiting for machine to stop 8/60
	I0116 03:37:53.327864  506932 main.go:141] libmachine: (default-k8s-diff-port-434445) Waiting for machine to stop 9/60
	I0116 03:37:54.329566  506932 main.go:141] libmachine: (default-k8s-diff-port-434445) Waiting for machine to stop 10/60
	I0116 03:37:55.331163  506932 main.go:141] libmachine: (default-k8s-diff-port-434445) Waiting for machine to stop 11/60
	I0116 03:37:56.332775  506932 main.go:141] libmachine: (default-k8s-diff-port-434445) Waiting for machine to stop 12/60
	I0116 03:37:57.334255  506932 main.go:141] libmachine: (default-k8s-diff-port-434445) Waiting for machine to stop 13/60
	I0116 03:37:58.335698  506932 main.go:141] libmachine: (default-k8s-diff-port-434445) Waiting for machine to stop 14/60
	I0116 03:37:59.338173  506932 main.go:141] libmachine: (default-k8s-diff-port-434445) Waiting for machine to stop 15/60
	I0116 03:38:00.339875  506932 main.go:141] libmachine: (default-k8s-diff-port-434445) Waiting for machine to stop 16/60
	I0116 03:38:01.341623  506932 main.go:141] libmachine: (default-k8s-diff-port-434445) Waiting for machine to stop 17/60
	I0116 03:38:02.343327  506932 main.go:141] libmachine: (default-k8s-diff-port-434445) Waiting for machine to stop 18/60
	I0116 03:38:03.344970  506932 main.go:141] libmachine: (default-k8s-diff-port-434445) Waiting for machine to stop 19/60
	I0116 03:38:04.346507  506932 main.go:141] libmachine: (default-k8s-diff-port-434445) Waiting for machine to stop 20/60
	I0116 03:38:05.347912  506932 main.go:141] libmachine: (default-k8s-diff-port-434445) Waiting for machine to stop 21/60
	I0116 03:38:06.349707  506932 main.go:141] libmachine: (default-k8s-diff-port-434445) Waiting for machine to stop 22/60
	I0116 03:38:07.351132  506932 main.go:141] libmachine: (default-k8s-diff-port-434445) Waiting for machine to stop 23/60
	I0116 03:38:08.352715  506932 main.go:141] libmachine: (default-k8s-diff-port-434445) Waiting for machine to stop 24/60
	I0116 03:38:09.355031  506932 main.go:141] libmachine: (default-k8s-diff-port-434445) Waiting for machine to stop 25/60
	I0116 03:38:10.356480  506932 main.go:141] libmachine: (default-k8s-diff-port-434445) Waiting for machine to stop 26/60
	I0116 03:38:11.358340  506932 main.go:141] libmachine: (default-k8s-diff-port-434445) Waiting for machine to stop 27/60
	I0116 03:38:12.359902  506932 main.go:141] libmachine: (default-k8s-diff-port-434445) Waiting for machine to stop 28/60
	I0116 03:38:13.361401  506932 main.go:141] libmachine: (default-k8s-diff-port-434445) Waiting for machine to stop 29/60
	I0116 03:38:14.363372  506932 main.go:141] libmachine: (default-k8s-diff-port-434445) Waiting for machine to stop 30/60
	I0116 03:38:15.365171  506932 main.go:141] libmachine: (default-k8s-diff-port-434445) Waiting for machine to stop 31/60
	I0116 03:38:16.366886  506932 main.go:141] libmachine: (default-k8s-diff-port-434445) Waiting for machine to stop 32/60
	I0116 03:38:17.368356  506932 main.go:141] libmachine: (default-k8s-diff-port-434445) Waiting for machine to stop 33/60
	I0116 03:38:18.369727  506932 main.go:141] libmachine: (default-k8s-diff-port-434445) Waiting for machine to stop 34/60
	I0116 03:38:19.372158  506932 main.go:141] libmachine: (default-k8s-diff-port-434445) Waiting for machine to stop 35/60
	I0116 03:38:20.373512  506932 main.go:141] libmachine: (default-k8s-diff-port-434445) Waiting for machine to stop 36/60
	I0116 03:38:21.375402  506932 main.go:141] libmachine: (default-k8s-diff-port-434445) Waiting for machine to stop 37/60
	I0116 03:38:22.376969  506932 main.go:141] libmachine: (default-k8s-diff-port-434445) Waiting for machine to stop 38/60
	I0116 03:38:23.378427  506932 main.go:141] libmachine: (default-k8s-diff-port-434445) Waiting for machine to stop 39/60
	I0116 03:38:24.379786  506932 main.go:141] libmachine: (default-k8s-diff-port-434445) Waiting for machine to stop 40/60
	I0116 03:38:25.381172  506932 main.go:141] libmachine: (default-k8s-diff-port-434445) Waiting for machine to stop 41/60
	I0116 03:38:26.382893  506932 main.go:141] libmachine: (default-k8s-diff-port-434445) Waiting for machine to stop 42/60
	I0116 03:38:27.384484  506932 main.go:141] libmachine: (default-k8s-diff-port-434445) Waiting for machine to stop 43/60
	I0116 03:38:28.386079  506932 main.go:141] libmachine: (default-k8s-diff-port-434445) Waiting for machine to stop 44/60
	I0116 03:38:29.388778  506932 main.go:141] libmachine: (default-k8s-diff-port-434445) Waiting for machine to stop 45/60
	I0116 03:38:30.390285  506932 main.go:141] libmachine: (default-k8s-diff-port-434445) Waiting for machine to stop 46/60
	I0116 03:38:31.392099  506932 main.go:141] libmachine: (default-k8s-diff-port-434445) Waiting for machine to stop 47/60
	I0116 03:38:32.393704  506932 main.go:141] libmachine: (default-k8s-diff-port-434445) Waiting for machine to stop 48/60
	I0116 03:38:33.395278  506932 main.go:141] libmachine: (default-k8s-diff-port-434445) Waiting for machine to stop 49/60
	I0116 03:38:34.396757  506932 main.go:141] libmachine: (default-k8s-diff-port-434445) Waiting for machine to stop 50/60
	I0116 03:38:35.398793  506932 main.go:141] libmachine: (default-k8s-diff-port-434445) Waiting for machine to stop 51/60
	I0116 03:38:36.400295  506932 main.go:141] libmachine: (default-k8s-diff-port-434445) Waiting for machine to stop 52/60
	I0116 03:38:37.401832  506932 main.go:141] libmachine: (default-k8s-diff-port-434445) Waiting for machine to stop 53/60
	I0116 03:38:38.403355  506932 main.go:141] libmachine: (default-k8s-diff-port-434445) Waiting for machine to stop 54/60
	I0116 03:38:39.405406  506932 main.go:141] libmachine: (default-k8s-diff-port-434445) Waiting for machine to stop 55/60
	I0116 03:38:40.407046  506932 main.go:141] libmachine: (default-k8s-diff-port-434445) Waiting for machine to stop 56/60
	I0116 03:38:41.408550  506932 main.go:141] libmachine: (default-k8s-diff-port-434445) Waiting for machine to stop 57/60
	I0116 03:38:42.410204  506932 main.go:141] libmachine: (default-k8s-diff-port-434445) Waiting for machine to stop 58/60
	I0116 03:38:43.411916  506932 main.go:141] libmachine: (default-k8s-diff-port-434445) Waiting for machine to stop 59/60
	I0116 03:38:44.412622  506932 stop.go:59] stop err: unable to stop vm, current state "Running"
	W0116 03:38:44.412676  506932 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0116 03:38:44.412702  506932 retry.go:31] will retry after 899.838427ms: Temporary Error: stop: unable to stop vm, current state "Running"
	I0116 03:38:45.313224  506932 stop.go:39] StopHost: default-k8s-diff-port-434445
	I0116 03:38:45.313649  506932 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:38:45.313707  506932 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:38:45.328567  506932 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42291
	I0116 03:38:45.329081  506932 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:38:45.329621  506932 main.go:141] libmachine: Using API Version  1
	I0116 03:38:45.329651  506932 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:38:45.329969  506932 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:38:45.332440  506932 out.go:177] * Stopping node "default-k8s-diff-port-434445"  ...
	I0116 03:38:45.334327  506932 main.go:141] libmachine: Stopping "default-k8s-diff-port-434445"...
	I0116 03:38:45.334358  506932 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetState
	I0116 03:38:45.335953  506932 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .Stop
	I0116 03:38:45.339194  506932 main.go:141] libmachine: (default-k8s-diff-port-434445) Waiting for machine to stop 0/60
	I0116 03:38:46.340817  506932 main.go:141] libmachine: (default-k8s-diff-port-434445) Waiting for machine to stop 1/60
	I0116 03:38:47.342486  506932 main.go:141] libmachine: (default-k8s-diff-port-434445) Waiting for machine to stop 2/60
	I0116 03:38:48.344449  506932 main.go:141] libmachine: (default-k8s-diff-port-434445) Waiting for machine to stop 3/60
	I0116 03:38:49.346225  506932 main.go:141] libmachine: (default-k8s-diff-port-434445) Waiting for machine to stop 4/60
	I0116 03:38:50.348338  506932 main.go:141] libmachine: (default-k8s-diff-port-434445) Waiting for machine to stop 5/60
	I0116 03:38:51.349868  506932 main.go:141] libmachine: (default-k8s-diff-port-434445) Waiting for machine to stop 6/60
	I0116 03:38:52.351464  506932 main.go:141] libmachine: (default-k8s-diff-port-434445) Waiting for machine to stop 7/60
	I0116 03:38:53.353181  506932 main.go:141] libmachine: (default-k8s-diff-port-434445) Waiting for machine to stop 8/60
	I0116 03:38:54.354919  506932 main.go:141] libmachine: (default-k8s-diff-port-434445) Waiting for machine to stop 9/60
	I0116 03:38:55.357239  506932 main.go:141] libmachine: (default-k8s-diff-port-434445) Waiting for machine to stop 10/60
	I0116 03:38:56.359113  506932 main.go:141] libmachine: (default-k8s-diff-port-434445) Waiting for machine to stop 11/60
	I0116 03:38:57.360947  506932 main.go:141] libmachine: (default-k8s-diff-port-434445) Waiting for machine to stop 12/60
	I0116 03:38:58.363024  506932 main.go:141] libmachine: (default-k8s-diff-port-434445) Waiting for machine to stop 13/60
	I0116 03:38:59.364829  506932 main.go:141] libmachine: (default-k8s-diff-port-434445) Waiting for machine to stop 14/60
	I0116 03:39:00.367141  506932 main.go:141] libmachine: (default-k8s-diff-port-434445) Waiting for machine to stop 15/60
	I0116 03:39:01.369108  506932 main.go:141] libmachine: (default-k8s-diff-port-434445) Waiting for machine to stop 16/60
	I0116 03:39:02.371117  506932 main.go:141] libmachine: (default-k8s-diff-port-434445) Waiting for machine to stop 17/60
	I0116 03:39:03.372862  506932 main.go:141] libmachine: (default-k8s-diff-port-434445) Waiting for machine to stop 18/60
	I0116 03:39:04.374669  506932 main.go:141] libmachine: (default-k8s-diff-port-434445) Waiting for machine to stop 19/60
	I0116 03:39:05.376244  506932 main.go:141] libmachine: (default-k8s-diff-port-434445) Waiting for machine to stop 20/60
	I0116 03:39:06.377811  506932 main.go:141] libmachine: (default-k8s-diff-port-434445) Waiting for machine to stop 21/60
	I0116 03:39:07.379405  506932 main.go:141] libmachine: (default-k8s-diff-port-434445) Waiting for machine to stop 22/60
	I0116 03:39:08.381001  506932 main.go:141] libmachine: (default-k8s-diff-port-434445) Waiting for machine to stop 23/60
	I0116 03:39:09.382566  506932 main.go:141] libmachine: (default-k8s-diff-port-434445) Waiting for machine to stop 24/60
	I0116 03:39:10.384401  506932 main.go:141] libmachine: (default-k8s-diff-port-434445) Waiting for machine to stop 25/60
	I0116 03:39:11.385865  506932 main.go:141] libmachine: (default-k8s-diff-port-434445) Waiting for machine to stop 26/60
	I0116 03:39:12.387400  506932 main.go:141] libmachine: (default-k8s-diff-port-434445) Waiting for machine to stop 27/60
	I0116 03:39:13.388996  506932 main.go:141] libmachine: (default-k8s-diff-port-434445) Waiting for machine to stop 28/60
	I0116 03:39:14.390353  506932 main.go:141] libmachine: (default-k8s-diff-port-434445) Waiting for machine to stop 29/60
	I0116 03:39:15.392173  506932 main.go:141] libmachine: (default-k8s-diff-port-434445) Waiting for machine to stop 30/60
	I0116 03:39:16.393731  506932 main.go:141] libmachine: (default-k8s-diff-port-434445) Waiting for machine to stop 31/60
	I0116 03:39:17.395236  506932 main.go:141] libmachine: (default-k8s-diff-port-434445) Waiting for machine to stop 32/60
	I0116 03:39:18.396849  506932 main.go:141] libmachine: (default-k8s-diff-port-434445) Waiting for machine to stop 33/60
	I0116 03:39:19.398414  506932 main.go:141] libmachine: (default-k8s-diff-port-434445) Waiting for machine to stop 34/60
	I0116 03:39:20.400734  506932 main.go:141] libmachine: (default-k8s-diff-port-434445) Waiting for machine to stop 35/60
	I0116 03:39:21.402273  506932 main.go:141] libmachine: (default-k8s-diff-port-434445) Waiting for machine to stop 36/60
	I0116 03:39:22.403733  506932 main.go:141] libmachine: (default-k8s-diff-port-434445) Waiting for machine to stop 37/60
	I0116 03:39:23.405319  506932 main.go:141] libmachine: (default-k8s-diff-port-434445) Waiting for machine to stop 38/60
	I0116 03:39:24.406786  506932 main.go:141] libmachine: (default-k8s-diff-port-434445) Waiting for machine to stop 39/60
	I0116 03:39:25.408731  506932 main.go:141] libmachine: (default-k8s-diff-port-434445) Waiting for machine to stop 40/60
	I0116 03:39:26.410211  506932 main.go:141] libmachine: (default-k8s-diff-port-434445) Waiting for machine to stop 41/60
	I0116 03:39:27.411944  506932 main.go:141] libmachine: (default-k8s-diff-port-434445) Waiting for machine to stop 42/60
	I0116 03:39:28.413737  506932 main.go:141] libmachine: (default-k8s-diff-port-434445) Waiting for machine to stop 43/60
	I0116 03:39:29.415132  506932 main.go:141] libmachine: (default-k8s-diff-port-434445) Waiting for machine to stop 44/60
	I0116 03:39:30.416903  506932 main.go:141] libmachine: (default-k8s-diff-port-434445) Waiting for machine to stop 45/60
	I0116 03:39:31.418430  506932 main.go:141] libmachine: (default-k8s-diff-port-434445) Waiting for machine to stop 46/60
	I0116 03:39:32.419963  506932 main.go:141] libmachine: (default-k8s-diff-port-434445) Waiting for machine to stop 47/60
	I0116 03:39:33.421387  506932 main.go:141] libmachine: (default-k8s-diff-port-434445) Waiting for machine to stop 48/60
	I0116 03:39:34.422721  506932 main.go:141] libmachine: (default-k8s-diff-port-434445) Waiting for machine to stop 49/60
	I0116 03:39:35.424954  506932 main.go:141] libmachine: (default-k8s-diff-port-434445) Waiting for machine to stop 50/60
	I0116 03:39:36.426350  506932 main.go:141] libmachine: (default-k8s-diff-port-434445) Waiting for machine to stop 51/60
	I0116 03:39:37.427899  506932 main.go:141] libmachine: (default-k8s-diff-port-434445) Waiting for machine to stop 52/60
	I0116 03:39:38.429689  506932 main.go:141] libmachine: (default-k8s-diff-port-434445) Waiting for machine to stop 53/60
	I0116 03:39:39.431108  506932 main.go:141] libmachine: (default-k8s-diff-port-434445) Waiting for machine to stop 54/60
	I0116 03:39:40.433260  506932 main.go:141] libmachine: (default-k8s-diff-port-434445) Waiting for machine to stop 55/60
	I0116 03:39:41.434979  506932 main.go:141] libmachine: (default-k8s-diff-port-434445) Waiting for machine to stop 56/60
	I0116 03:39:42.436471  506932 main.go:141] libmachine: (default-k8s-diff-port-434445) Waiting for machine to stop 57/60
	I0116 03:39:43.438091  506932 main.go:141] libmachine: (default-k8s-diff-port-434445) Waiting for machine to stop 58/60
	I0116 03:39:44.439591  506932 main.go:141] libmachine: (default-k8s-diff-port-434445) Waiting for machine to stop 59/60
	I0116 03:39:45.440690  506932 stop.go:59] stop err: unable to stop vm, current state "Running"
	W0116 03:39:45.440744  506932 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0116 03:39:45.443142  506932 out.go:177] 
	W0116 03:39:45.444741  506932 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0116 03:39:45.444764  506932 out.go:239] * 
	* 
	W0116 03:39:45.447876  506932 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0116 03:39:45.450301  506932 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p default-k8s-diff-port-434445 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-434445 -n default-k8s-diff-port-434445
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-434445 -n default-k8s-diff-port-434445: exit status 3 (18.67271601s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0116 03:40:04.124498  507715 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.236:22: connect: no route to host
	E0116 03:40:04.124526  507715 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.236:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-434445" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Stop (139.91s)
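
The stop failure above traces a fixed pattern: libmachine asks the driver to stop, polls the VM state once per second for 60 attempts, retries the whole sequence after a roughly 900ms backoff, and finally exits with GUEST_STOP_TIMEOUT (exit status 82) when the machine never leaves "Running". The sketch below reproduces that control flow in plain Go; vmStop and vmState are hypothetical stand-ins for the kvm2 driver's Stop/GetState calls, not minikube's actual implementation.

```go
package main

import (
	"fmt"
	"time"
)

// vmStop and vmState stand in for the libmachine driver's Stop and GetState calls.
func vmStop() error   { return nil }
func vmState() string { return "Running" }

// stopWithTimeout issues a stop and then polls the machine state once per
// interval, mirroring the "Waiting for machine to stop i/60" lines above.
func stopWithTimeout(polls int, interval time.Duration) error {
	if err := vmStop(); err != nil {
		return err
	}
	for i := 0; i < polls; i++ {
		if vmState() != "Running" {
			return nil // machine reached a stopped state
		}
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, polls)
		time.Sleep(interval)
	}
	return fmt.Errorf("unable to stop vm, current state %q", vmState())
}

func main() {
	var err error
	for attempt := 0; attempt < 2; attempt++ { // the log shows two full rounds before giving up
		if err = stopWithTimeout(60, time.Second); err == nil {
			return
		}
		time.Sleep(900 * time.Millisecond) // roughly the "will retry after 899.838427ms" backoff
	}
	fmt.Println("X Exiting due to GUEST_STOP_TIMEOUT:", err)
}
```

With the stubbed state permanently "Running", the sketch times out after two rounds, just as the failing stop does in the log.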

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.42s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-615980 -n embed-certs-615980
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-615980 -n embed-certs-615980: exit status 3 (3.200125043s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0116 03:38:16.988441  507076 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.159:22: connect: no route to host
	E0116 03:38:16.988462  507076 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.159:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-615980 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p embed-certs-615980 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.15511916s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.72.159:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p embed-certs-615980 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-615980 -n embed-certs-615980
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-615980 -n embed-certs-615980: exit status 3 (3.06035733s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0116 03:38:26.204501  507227 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.159:22: connect: no route to host
	E0116 03:38:26.204525  507227 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.159:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-615980" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.42s)
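
Each EnableAddonAfterStop failure in this run reduces to the same root cause visible in the stderr blocks: the node's SSH endpoint is unreachable ("dial tcp <ip>:22: connect: no route to host"), so both `status` and `addons enable` fail before they can do any work. A minimal, illustrative probe (not part of the test suite) reproduces the same error with a plain TCP dial; the address below is the embed-certs node IP reported above and is only an example.

```go
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Dial the node's SSH port directly; while the host is unreachable this
	// fails with "connect: no route to host" (or a timeout), which is the
	// error every status/addon command above ultimately wraps.
	conn, err := net.DialTimeout("tcp", "192.168.72.159:22", 5*time.Second)
	if err != nil {
		fmt.Println("status check would fail:", err)
		return
	}
	conn.Close()
	fmt.Println("ssh port reachable; status check could proceed")
}
```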

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.42s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-666547 -n no-preload-666547
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-666547 -n no-preload-666547: exit status 3 (3.19988468s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0116 03:38:21.340474  507139 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.103:22: connect: no route to host
	E0116 03:38:21.340514  507139 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.103:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-666547 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p no-preload-666547 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.154829372s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.103:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p no-preload-666547 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-666547 -n no-preload-666547
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-666547 -n no-preload-666547: exit status 3 (3.061526442s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0116 03:38:30.556506  507298 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.103:22: connect: no route to host
	E0116 03:38:30.556533  507298 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.103:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-666547" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.42s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (12.42s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-696770 -n old-k8s-version-696770
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-696770 -n old-k8s-version-696770: exit status 3 (3.200682622s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0116 03:38:40.284543  507384 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.167:22: connect: no route to host
	E0116 03:38:40.284567  507384 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.167:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-696770 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-696770 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.1543875s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.61.167:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-696770 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-696770 -n old-k8s-version-696770
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-696770 -n old-k8s-version-696770: exit status 3 (3.061494867s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0116 03:38:49.500480  507460 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.167:22: connect: no route to host
	E0116 03:38:49.500512  507460 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.167:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "old-k8s-version-696770" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (12.42s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.42s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-434445 -n default-k8s-diff-port-434445
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-434445 -n default-k8s-diff-port-434445: exit status 3 (3.199430743s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0116 03:40:07.324536  507789 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.236:22: connect: no route to host
	E0116 03:40:07.324558  507789 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.236:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-434445 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-434445 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.154688025s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.50.236:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-434445 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-434445 -n default-k8s-diff-port-434445
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-434445 -n default-k8s-diff-port-434445: exit status 3 (3.061837882s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0116 03:40:16.540582  507859 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.236:22: connect: no route to host
	E0116 03:40:16.540605  507859 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.236:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-434445" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.42s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (543.56s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-666547 -n no-preload-666547
start_stop_delete_test.go:274: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-01-16 03:57:32.882797885 +0000 UTC m=+4996.821020505
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
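
The wait step that failed here polls the kubernetes-dashboard namespace for pods carrying the k8s-app=kubernetes-dashboard label until a 9-minute deadline expires. A rough client-go sketch of that behavior follows; it is an assumed illustration rather than the harness's own helper, and the kubeconfig path is an example value.

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Example kubeconfig path; substitute the profile's real kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Overall deadline matching the test's 9m0s wait.
	ctx, cancel := context.WithTimeout(context.Background(), 9*time.Minute)
	defer cancel()

	for {
		pods, err := client.CoreV1().Pods("kubernetes-dashboard").List(ctx,
			metav1.ListOptions{LabelSelector: "k8s-app=kubernetes-dashboard"})
		if err == nil {
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					fmt.Println("dashboard pod running:", p.Name)
					return
				}
			}
		}
		select {
		case <-ctx.Done():
			// Surfaces context.DeadlineExceeded, matching the
			// "failed to start within 9m0s: context deadline exceeded" message above.
			fmt.Println("wait failed:", ctx.Err())
			return
		case <-time.After(5 * time.Second):
		}
	}
}
```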
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-666547 -n no-preload-666547
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-666547 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-666547 logs -n 25: (1.827684875s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p no-preload-666547                                   | no-preload-666547            | jenkins | v1.32.0 | 16 Jan 24 03:33 UTC | 16 Jan 24 03:35 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| ssh     | cert-options-977008 ssh                                | cert-options-977008          | jenkins | v1.32.0 | 16 Jan 24 03:34 UTC | 16 Jan 24 03:34 UTC |
	|         | openssl x509 -text -noout -in                          |                              |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                  |                              |         |         |                     |                     |
	| ssh     | -p cert-options-977008 -- sudo                         | cert-options-977008          | jenkins | v1.32.0 | 16 Jan 24 03:34 UTC | 16 Jan 24 03:34 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                              |         |         |                     |                     |
	| delete  | -p cert-options-977008                                 | cert-options-977008          | jenkins | v1.32.0 | 16 Jan 24 03:34 UTC | 16 Jan 24 03:34 UTC |
	| start   | -p embed-certs-615980                                  | embed-certs-615980           | jenkins | v1.32.0 | 16 Jan 24 03:34 UTC | 16 Jan 24 03:35 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-690771                              | cert-expiration-690771       | jenkins | v1.32.0 | 16 Jan 24 03:35 UTC | 16 Jan 24 03:36 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-615980            | embed-certs-615980           | jenkins | v1.32.0 | 16 Jan 24 03:35 UTC | 16 Jan 24 03:35 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-615980                                  | embed-certs-615980           | jenkins | v1.32.0 | 16 Jan 24 03:35 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-666547             | no-preload-666547            | jenkins | v1.32.0 | 16 Jan 24 03:35 UTC | 16 Jan 24 03:35 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-666547                                   | no-preload-666547            | jenkins | v1.32.0 | 16 Jan 24 03:35 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-696770        | old-k8s-version-696770       | jenkins | v1.32.0 | 16 Jan 24 03:36 UTC | 16 Jan 24 03:36 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-696770                              | old-k8s-version-696770       | jenkins | v1.32.0 | 16 Jan 24 03:36 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-690771                              | cert-expiration-690771       | jenkins | v1.32.0 | 16 Jan 24 03:36 UTC | 16 Jan 24 03:36 UTC |
	| delete  | -p                                                     | disable-driver-mounts-673948 | jenkins | v1.32.0 | 16 Jan 24 03:36 UTC | 16 Jan 24 03:36 UTC |
	|         | disable-driver-mounts-673948                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-434445 | jenkins | v1.32.0 | 16 Jan 24 03:36 UTC | 16 Jan 24 03:37 UTC |
	|         | default-k8s-diff-port-434445                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-434445  | default-k8s-diff-port-434445 | jenkins | v1.32.0 | 16 Jan 24 03:37 UTC | 16 Jan 24 03:37 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-434445 | jenkins | v1.32.0 | 16 Jan 24 03:37 UTC |                     |
	|         | default-k8s-diff-port-434445                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-615980                 | embed-certs-615980           | jenkins | v1.32.0 | 16 Jan 24 03:38 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-666547                  | no-preload-666547            | jenkins | v1.32.0 | 16 Jan 24 03:38 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-615980                                  | embed-certs-615980           | jenkins | v1.32.0 | 16 Jan 24 03:38 UTC | 16 Jan 24 03:49 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| start   | -p no-preload-666547                                   | no-preload-666547            | jenkins | v1.32.0 | 16 Jan 24 03:38 UTC | 16 Jan 24 03:48 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-696770             | old-k8s-version-696770       | jenkins | v1.32.0 | 16 Jan 24 03:38 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-696770                              | old-k8s-version-696770       | jenkins | v1.32.0 | 16 Jan 24 03:38 UTC | 16 Jan 24 03:51 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-434445       | default-k8s-diff-port-434445 | jenkins | v1.32.0 | 16 Jan 24 03:40 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-434445 | jenkins | v1.32.0 | 16 Jan 24 03:40 UTC | 16 Jan 24 03:49 UTC |
	|         | default-k8s-diff-port-434445                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/16 03:40:16
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0116 03:40:16.605622  507889 out.go:296] Setting OutFile to fd 1 ...
	I0116 03:40:16.605883  507889 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 03:40:16.605892  507889 out.go:309] Setting ErrFile to fd 2...
	I0116 03:40:16.605897  507889 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 03:40:16.606102  507889 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17965-468241/.minikube/bin
	I0116 03:40:16.606721  507889 out.go:303] Setting JSON to false
	I0116 03:40:16.607781  507889 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":15769,"bootTime":1705360648,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1048-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0116 03:40:16.607865  507889 start.go:138] virtualization: kvm guest
	I0116 03:40:16.610269  507889 out.go:177] * [default-k8s-diff-port-434445] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0116 03:40:16.611862  507889 out.go:177]   - MINIKUBE_LOCATION=17965
	I0116 03:40:16.611954  507889 notify.go:220] Checking for updates...
	I0116 03:40:16.613306  507889 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0116 03:40:16.615094  507889 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17965-468241/kubeconfig
	I0116 03:40:16.617044  507889 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17965-468241/.minikube
	I0116 03:40:16.618932  507889 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0116 03:40:16.621159  507889 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0116 03:40:16.623616  507889 config.go:182] Loaded profile config "default-k8s-diff-port-434445": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 03:40:16.624273  507889 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:40:16.624363  507889 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:40:16.640065  507889 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45633
	I0116 03:40:16.640642  507889 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:40:16.641273  507889 main.go:141] libmachine: Using API Version  1
	I0116 03:40:16.641301  507889 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:40:16.641696  507889 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:40:16.641901  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .DriverName
	I0116 03:40:16.642227  507889 driver.go:392] Setting default libvirt URI to qemu:///system
	I0116 03:40:16.642599  507889 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:40:16.642684  507889 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:40:16.658198  507889 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41877
	I0116 03:40:16.658657  507889 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:40:16.659207  507889 main.go:141] libmachine: Using API Version  1
	I0116 03:40:16.659233  507889 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:40:16.659588  507889 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:40:16.659844  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .DriverName
	I0116 03:40:16.698770  507889 out.go:177] * Using the kvm2 driver based on existing profile
	I0116 03:40:16.700307  507889 start.go:298] selected driver: kvm2
	I0116 03:40:16.700330  507889 start.go:902] validating driver "kvm2" against &{Name:default-k8s-diff-port-434445 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuber
netesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-434445 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.50.236 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRe
quested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 03:40:16.700478  507889 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0116 03:40:16.701296  507889 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0116 03:40:16.701389  507889 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17965-468241/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0116 03:40:16.717988  507889 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0116 03:40:16.718426  507889 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0116 03:40:16.718515  507889 cni.go:84] Creating CNI manager for ""
	I0116 03:40:16.718532  507889 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 03:40:16.718547  507889 start_flags.go:321] config:
	{Name:default-k8s-diff-port-434445 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-434445 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.50.236 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 03:40:16.718765  507889 iso.go:125] acquiring lock: {Name:mk2f3231f0eeeb23816ac363851489181398d1c6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0116 03:40:16.721292  507889 out.go:177] * Starting control plane node default-k8s-diff-port-434445 in cluster default-k8s-diff-port-434445
	I0116 03:40:16.722858  507889 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0116 03:40:16.722928  507889 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17965-468241/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0116 03:40:16.722942  507889 cache.go:56] Caching tarball of preloaded images
	I0116 03:40:16.723044  507889 preload.go:174] Found /home/jenkins/minikube-integration/17965-468241/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0116 03:40:16.723057  507889 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0116 03:40:16.723243  507889 profile.go:148] Saving config to /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/default-k8s-diff-port-434445/config.json ...
	I0116 03:40:16.723502  507889 start.go:365] acquiring machines lock for default-k8s-diff-port-434445: {Name:mk901e5fae8c1a578d989b520053c709bdbdcf06 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0116 03:40:22.140399  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:40:25.212385  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:40:31.292386  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:40:34.364375  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:40:40.444398  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:40:43.516372  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:40:49.596388  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:40:52.668394  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:40:58.748342  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:41:01.820436  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:41:07.900338  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:41:10.972410  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:41:17.052384  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:41:20.124427  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:41:26.204371  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:41:29.276361  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:41:35.356391  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:41:38.428383  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:41:44.508324  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:41:47.580377  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:41:53.660360  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:41:56.732377  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:42:02.812345  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:42:05.884406  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:42:11.964398  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:42:15.036469  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:42:21.116391  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:42:24.188397  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:42:30.268400  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:42:33.340416  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:42:39.420405  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:42:42.492396  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:42:48.572396  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:42:51.644367  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:42:57.724419  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:43:00.796427  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:43:03.800669  507339 start.go:369] acquired machines lock for "no-preload-666547" in 4m33.073406767s
	I0116 03:43:03.800732  507339 start.go:96] Skipping create...Using existing machine configuration
	I0116 03:43:03.800744  507339 fix.go:54] fixHost starting: 
	I0116 03:43:03.801330  507339 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:43:03.801381  507339 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:43:03.817309  507339 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46189
	I0116 03:43:03.817841  507339 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:43:03.818376  507339 main.go:141] libmachine: Using API Version  1
	I0116 03:43:03.818403  507339 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:43:03.818801  507339 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:43:03.819049  507339 main.go:141] libmachine: (no-preload-666547) Calling .DriverName
	I0116 03:43:03.819206  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetState
	I0116 03:43:03.821006  507339 fix.go:102] recreateIfNeeded on no-preload-666547: state=Stopped err=<nil>
	I0116 03:43:03.821031  507339 main.go:141] libmachine: (no-preload-666547) Calling .DriverName
	W0116 03:43:03.821210  507339 fix.go:128] unexpected machine state, will restart: <nil>
	I0116 03:43:03.823341  507339 out.go:177] * Restarting existing kvm2 VM for "no-preload-666547" ...
	I0116 03:43:03.824887  507339 main.go:141] libmachine: (no-preload-666547) Calling .Start
	I0116 03:43:03.825070  507339 main.go:141] libmachine: (no-preload-666547) Ensuring networks are active...
	I0116 03:43:03.825806  507339 main.go:141] libmachine: (no-preload-666547) Ensuring network default is active
	I0116 03:43:03.826151  507339 main.go:141] libmachine: (no-preload-666547) Ensuring network mk-no-preload-666547 is active
	I0116 03:43:03.826549  507339 main.go:141] libmachine: (no-preload-666547) Getting domain xml...
	I0116 03:43:03.827209  507339 main.go:141] libmachine: (no-preload-666547) Creating domain...
	I0116 03:43:04.166757  507339 main.go:141] libmachine: (no-preload-666547) Waiting to get IP...
	I0116 03:43:04.167846  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:04.168294  507339 main.go:141] libmachine: (no-preload-666547) DBG | unable to find current IP address of domain no-preload-666547 in network mk-no-preload-666547
	I0116 03:43:04.168400  507339 main.go:141] libmachine: (no-preload-666547) DBG | I0116 03:43:04.168281  508330 retry.go:31] will retry after 236.684347ms: waiting for machine to come up
	I0116 03:43:04.407068  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:04.407590  507339 main.go:141] libmachine: (no-preload-666547) DBG | unable to find current IP address of domain no-preload-666547 in network mk-no-preload-666547
	I0116 03:43:04.407626  507339 main.go:141] libmachine: (no-preload-666547) DBG | I0116 03:43:04.407520  508330 retry.go:31] will retry after 273.512454ms: waiting for machine to come up
	I0116 03:43:04.683173  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:04.683724  507339 main.go:141] libmachine: (no-preload-666547) DBG | unable to find current IP address of domain no-preload-666547 in network mk-no-preload-666547
	I0116 03:43:04.683759  507339 main.go:141] libmachine: (no-preload-666547) DBG | I0116 03:43:04.683652  508330 retry.go:31] will retry after 404.396132ms: waiting for machine to come up
	I0116 03:43:05.089306  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:05.089659  507339 main.go:141] libmachine: (no-preload-666547) DBG | unable to find current IP address of domain no-preload-666547 in network mk-no-preload-666547
	I0116 03:43:05.089687  507339 main.go:141] libmachine: (no-preload-666547) DBG | I0116 03:43:05.089612  508330 retry.go:31] will retry after 373.291662ms: waiting for machine to come up
	I0116 03:43:05.464216  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:05.464745  507339 main.go:141] libmachine: (no-preload-666547) DBG | unable to find current IP address of domain no-preload-666547 in network mk-no-preload-666547
	I0116 03:43:05.464772  507339 main.go:141] libmachine: (no-preload-666547) DBG | I0116 03:43:05.464696  508330 retry.go:31] will retry after 509.048348ms: waiting for machine to come up
	I0116 03:43:03.798483  507257 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0116 03:43:03.798553  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHHostname
	I0116 03:43:03.800507  507257 machine.go:91] provisioned docker machine in 4m37.39429533s
	I0116 03:43:03.800559  507257 fix.go:56] fixHost completed within 4m37.41769564s
	I0116 03:43:03.800568  507257 start.go:83] releasing machines lock for "embed-certs-615980", held for 4m37.417718822s
	W0116 03:43:03.800599  507257 start.go:694] error starting host: provision: host is not running
	W0116 03:43:03.800747  507257 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0116 03:43:03.800759  507257 start.go:709] Will try again in 5 seconds ...
	I0116 03:43:05.975369  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:05.975831  507339 main.go:141] libmachine: (no-preload-666547) DBG | unable to find current IP address of domain no-preload-666547 in network mk-no-preload-666547
	I0116 03:43:05.975864  507339 main.go:141] libmachine: (no-preload-666547) DBG | I0116 03:43:05.975776  508330 retry.go:31] will retry after 631.077965ms: waiting for machine to come up
	I0116 03:43:06.608722  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:06.609133  507339 main.go:141] libmachine: (no-preload-666547) DBG | unable to find current IP address of domain no-preload-666547 in network mk-no-preload-666547
	I0116 03:43:06.609162  507339 main.go:141] libmachine: (no-preload-666547) DBG | I0116 03:43:06.609074  508330 retry.go:31] will retry after 1.047586363s: waiting for machine to come up
	I0116 03:43:07.658264  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:07.658645  507339 main.go:141] libmachine: (no-preload-666547) DBG | unable to find current IP address of domain no-preload-666547 in network mk-no-preload-666547
	I0116 03:43:07.658696  507339 main.go:141] libmachine: (no-preload-666547) DBG | I0116 03:43:07.658591  508330 retry.go:31] will retry after 1.038644854s: waiting for machine to come up
	I0116 03:43:08.698946  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:08.699384  507339 main.go:141] libmachine: (no-preload-666547) DBG | unable to find current IP address of domain no-preload-666547 in network mk-no-preload-666547
	I0116 03:43:08.699411  507339 main.go:141] libmachine: (no-preload-666547) DBG | I0116 03:43:08.699347  508330 retry.go:31] will retry after 1.362032973s: waiting for machine to come up
	I0116 03:43:10.063269  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:10.063764  507339 main.go:141] libmachine: (no-preload-666547) DBG | unable to find current IP address of domain no-preload-666547 in network mk-no-preload-666547
	I0116 03:43:10.063792  507339 main.go:141] libmachine: (no-preload-666547) DBG | I0116 03:43:10.063714  508330 retry.go:31] will retry after 1.432317286s: waiting for machine to come up
	I0116 03:43:08.802803  507257 start.go:365] acquiring machines lock for embed-certs-615980: {Name:mk901e5fae8c1a578d989b520053c709bdbdcf06 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0116 03:43:11.498235  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:11.498714  507339 main.go:141] libmachine: (no-preload-666547) DBG | unable to find current IP address of domain no-preload-666547 in network mk-no-preload-666547
	I0116 03:43:11.498748  507339 main.go:141] libmachine: (no-preload-666547) DBG | I0116 03:43:11.498650  508330 retry.go:31] will retry after 2.490630326s: waiting for machine to come up
	I0116 03:43:13.991256  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:13.991717  507339 main.go:141] libmachine: (no-preload-666547) DBG | unable to find current IP address of domain no-preload-666547 in network mk-no-preload-666547
	I0116 03:43:13.991752  507339 main.go:141] libmachine: (no-preload-666547) DBG | I0116 03:43:13.991662  508330 retry.go:31] will retry after 3.569049736s: waiting for machine to come up
	I0116 03:43:17.565524  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:17.565893  507339 main.go:141] libmachine: (no-preload-666547) DBG | unable to find current IP address of domain no-preload-666547 in network mk-no-preload-666547
	I0116 03:43:17.565916  507339 main.go:141] libmachine: (no-preload-666547) DBG | I0116 03:43:17.565850  508330 retry.go:31] will retry after 2.875259098s: waiting for machine to come up
	I0116 03:43:20.443998  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:20.444472  507339 main.go:141] libmachine: (no-preload-666547) DBG | unable to find current IP address of domain no-preload-666547 in network mk-no-preload-666547
	I0116 03:43:20.444495  507339 main.go:141] libmachine: (no-preload-666547) DBG | I0116 03:43:20.444438  508330 retry.go:31] will retry after 4.319647454s: waiting for machine to come up
	I0116 03:43:24.765311  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:24.765836  507339 main.go:141] libmachine: (no-preload-666547) Found IP for machine: 192.168.39.103
	I0116 03:43:24.765862  507339 main.go:141] libmachine: (no-preload-666547) Reserving static IP address...
	I0116 03:43:24.765879  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has current primary IP address 192.168.39.103 and MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:24.766413  507339 main.go:141] libmachine: (no-preload-666547) DBG | found host DHCP lease matching {name: "no-preload-666547", mac: "52:54:00:4e:5f:03", ip: "192.168.39.103"} in network mk-no-preload-666547: {Iface:virbr2 ExpiryTime:2024-01-16 04:43:15 +0000 UTC Type:0 Mac:52:54:00:4e:5f:03 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:no-preload-666547 Clientid:01:52:54:00:4e:5f:03}
	I0116 03:43:24.766543  507339 main.go:141] libmachine: (no-preload-666547) Reserved static IP address: 192.168.39.103
	I0116 03:43:24.766575  507339 main.go:141] libmachine: (no-preload-666547) DBG | skip adding static IP to network mk-no-preload-666547 - found existing host DHCP lease matching {name: "no-preload-666547", mac: "52:54:00:4e:5f:03", ip: "192.168.39.103"}
	I0116 03:43:24.766593  507339 main.go:141] libmachine: (no-preload-666547) DBG | Getting to WaitForSSH function...
	I0116 03:43:24.766607  507339 main.go:141] libmachine: (no-preload-666547) Waiting for SSH to be available...
	I0116 03:43:24.768801  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:24.769145  507339 main.go:141] libmachine: (no-preload-666547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:5f:03", ip: ""} in network mk-no-preload-666547: {Iface:virbr2 ExpiryTime:2024-01-16 04:43:15 +0000 UTC Type:0 Mac:52:54:00:4e:5f:03 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:no-preload-666547 Clientid:01:52:54:00:4e:5f:03}
	I0116 03:43:24.769180  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined IP address 192.168.39.103 and MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:24.769392  507339 main.go:141] libmachine: (no-preload-666547) DBG | Using SSH client type: external
	I0116 03:43:24.769446  507339 main.go:141] libmachine: (no-preload-666547) DBG | Using SSH private key: /home/jenkins/minikube-integration/17965-468241/.minikube/machines/no-preload-666547/id_rsa (-rw-------)
	I0116 03:43:24.769490  507339 main.go:141] libmachine: (no-preload-666547) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.103 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17965-468241/.minikube/machines/no-preload-666547/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0116 03:43:24.769539  507339 main.go:141] libmachine: (no-preload-666547) DBG | About to run SSH command:
	I0116 03:43:24.769557  507339 main.go:141] libmachine: (no-preload-666547) DBG | exit 0
	I0116 03:43:24.860928  507339 main.go:141] libmachine: (no-preload-666547) DBG | SSH cmd err, output: <nil>: 
	I0116 03:43:24.861324  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetConfigRaw
	I0116 03:43:24.862217  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetIP
	I0116 03:43:24.865100  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:24.865468  507339 main.go:141] libmachine: (no-preload-666547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:5f:03", ip: ""} in network mk-no-preload-666547: {Iface:virbr2 ExpiryTime:2024-01-16 04:43:15 +0000 UTC Type:0 Mac:52:54:00:4e:5f:03 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:no-preload-666547 Clientid:01:52:54:00:4e:5f:03}
	I0116 03:43:24.865503  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined IP address 192.168.39.103 and MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:24.865804  507339 profile.go:148] Saving config to /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/no-preload-666547/config.json ...
	I0116 03:43:24.866064  507339 machine.go:88] provisioning docker machine ...
	I0116 03:43:24.866091  507339 main.go:141] libmachine: (no-preload-666547) Calling .DriverName
	I0116 03:43:24.866374  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetMachineName
	I0116 03:43:24.866590  507339 buildroot.go:166] provisioning hostname "no-preload-666547"
	I0116 03:43:24.866613  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetMachineName
	I0116 03:43:24.866795  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHHostname
	I0116 03:43:24.869231  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:24.869587  507339 main.go:141] libmachine: (no-preload-666547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:5f:03", ip: ""} in network mk-no-preload-666547: {Iface:virbr2 ExpiryTime:2024-01-16 04:43:15 +0000 UTC Type:0 Mac:52:54:00:4e:5f:03 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:no-preload-666547 Clientid:01:52:54:00:4e:5f:03}
	I0116 03:43:24.869623  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined IP address 192.168.39.103 and MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:24.869778  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHPort
	I0116 03:43:24.870002  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHKeyPath
	I0116 03:43:24.870168  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHKeyPath
	I0116 03:43:24.870304  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHUsername
	I0116 03:43:24.870455  507339 main.go:141] libmachine: Using SSH client type: native
	I0116 03:43:24.870929  507339 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.103 22 <nil> <nil>}
	I0116 03:43:24.870949  507339 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-666547 && echo "no-preload-666547" | sudo tee /etc/hostname
	I0116 03:43:25.005390  507339 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-666547
	
	I0116 03:43:25.005425  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHHostname
	I0116 03:43:25.008410  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:25.008801  507339 main.go:141] libmachine: (no-preload-666547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:5f:03", ip: ""} in network mk-no-preload-666547: {Iface:virbr2 ExpiryTime:2024-01-16 04:43:15 +0000 UTC Type:0 Mac:52:54:00:4e:5f:03 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:no-preload-666547 Clientid:01:52:54:00:4e:5f:03}
	I0116 03:43:25.008836  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined IP address 192.168.39.103 and MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:25.009007  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHPort
	I0116 03:43:25.009269  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHKeyPath
	I0116 03:43:25.009432  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHKeyPath
	I0116 03:43:25.009561  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHUsername
	I0116 03:43:25.009722  507339 main.go:141] libmachine: Using SSH client type: native
	I0116 03:43:25.010051  507339 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.103 22 <nil> <nil>}
	I0116 03:43:25.010071  507339 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-666547' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-666547/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-666547' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0116 03:43:25.142889  507339 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0116 03:43:25.142928  507339 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17965-468241/.minikube CaCertPath:/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17965-468241/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17965-468241/.minikube}
	I0116 03:43:25.142950  507339 buildroot.go:174] setting up certificates
	I0116 03:43:25.142963  507339 provision.go:83] configureAuth start
	I0116 03:43:25.142973  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetMachineName
	I0116 03:43:25.143294  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetIP
	I0116 03:43:25.146355  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:25.146746  507339 main.go:141] libmachine: (no-preload-666547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:5f:03", ip: ""} in network mk-no-preload-666547: {Iface:virbr2 ExpiryTime:2024-01-16 04:43:15 +0000 UTC Type:0 Mac:52:54:00:4e:5f:03 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:no-preload-666547 Clientid:01:52:54:00:4e:5f:03}
	I0116 03:43:25.146767  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined IP address 192.168.39.103 and MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:25.147063  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHHostname
	I0116 03:43:25.149867  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:25.150231  507339 main.go:141] libmachine: (no-preload-666547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:5f:03", ip: ""} in network mk-no-preload-666547: {Iface:virbr2 ExpiryTime:2024-01-16 04:43:15 +0000 UTC Type:0 Mac:52:54:00:4e:5f:03 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:no-preload-666547 Clientid:01:52:54:00:4e:5f:03}
	I0116 03:43:25.150260  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined IP address 192.168.39.103 and MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:25.150448  507339 provision.go:138] copyHostCerts
	I0116 03:43:25.150531  507339 exec_runner.go:144] found /home/jenkins/minikube-integration/17965-468241/.minikube/cert.pem, removing ...
	I0116 03:43:25.150543  507339 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17965-468241/.minikube/cert.pem
	I0116 03:43:25.150618  507339 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17965-468241/.minikube/cert.pem (1123 bytes)
	I0116 03:43:25.150719  507339 exec_runner.go:144] found /home/jenkins/minikube-integration/17965-468241/.minikube/key.pem, removing ...
	I0116 03:43:25.150729  507339 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17965-468241/.minikube/key.pem
	I0116 03:43:25.150755  507339 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17965-468241/.minikube/key.pem (1679 bytes)
	I0116 03:43:25.150815  507339 exec_runner.go:144] found /home/jenkins/minikube-integration/17965-468241/.minikube/ca.pem, removing ...
	I0116 03:43:25.150823  507339 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17965-468241/.minikube/ca.pem
	I0116 03:43:25.150843  507339 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17965-468241/.minikube/ca.pem (1078 bytes)
	I0116 03:43:25.150888  507339 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17965-468241/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca-key.pem org=jenkins.no-preload-666547 san=[192.168.39.103 192.168.39.103 localhost 127.0.0.1 minikube no-preload-666547]
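A quick way to confirm the SANs baked into that generated server.pem (an illustrative check only, not something the test run performs; the path is the one logged above) is to read the certificate back with openssl:

    # print the certificate and show its Subject Alternative Name entries
    openssl x509 -in /home/jenkins/minikube-integration/17965-468241/.minikube/machines/server.pem -noout -text | grep -A1 'Subject Alternative Name'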
	I0116 03:43:25.417982  507339 provision.go:172] copyRemoteCerts
	I0116 03:43:25.418059  507339 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0116 03:43:25.418088  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHHostname
	I0116 03:43:25.420908  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:25.421196  507339 main.go:141] libmachine: (no-preload-666547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:5f:03", ip: ""} in network mk-no-preload-666547: {Iface:virbr2 ExpiryTime:2024-01-16 04:43:15 +0000 UTC Type:0 Mac:52:54:00:4e:5f:03 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:no-preload-666547 Clientid:01:52:54:00:4e:5f:03}
	I0116 03:43:25.421235  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined IP address 192.168.39.103 and MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:25.421372  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHPort
	I0116 03:43:25.421609  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHKeyPath
	I0116 03:43:25.421782  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHUsername
	I0116 03:43:25.421952  507339 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/no-preload-666547/id_rsa Username:docker}
	I0116 03:43:25.509876  507339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0116 03:43:25.534885  507339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0116 03:43:25.562593  507339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0116 03:43:25.590106  507339 provision.go:86] duration metric: configureAuth took 447.124389ms
	I0116 03:43:25.590145  507339 buildroot.go:189] setting minikube options for container-runtime
	I0116 03:43:25.590386  507339 config.go:182] Loaded profile config "no-preload-666547": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0116 03:43:25.590475  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHHostname
	I0116 03:43:25.593695  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:25.594125  507339 main.go:141] libmachine: (no-preload-666547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:5f:03", ip: ""} in network mk-no-preload-666547: {Iface:virbr2 ExpiryTime:2024-01-16 04:43:15 +0000 UTC Type:0 Mac:52:54:00:4e:5f:03 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:no-preload-666547 Clientid:01:52:54:00:4e:5f:03}
	I0116 03:43:25.594182  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined IP address 192.168.39.103 and MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:25.594407  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHPort
	I0116 03:43:25.594661  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHKeyPath
	I0116 03:43:25.594851  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHKeyPath
	I0116 03:43:25.595124  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHUsername
	I0116 03:43:25.595362  507339 main.go:141] libmachine: Using SSH client type: native
	I0116 03:43:25.595735  507339 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.103 22 <nil> <nil>}
	I0116 03:43:25.595753  507339 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0116 03:43:26.177541  507510 start.go:369] acquired machines lock for "old-k8s-version-696770" in 4m36.503560035s
	I0116 03:43:26.177612  507510 start.go:96] Skipping create...Using existing machine configuration
	I0116 03:43:26.177621  507510 fix.go:54] fixHost starting: 
	I0116 03:43:26.178073  507510 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:43:26.178115  507510 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:43:26.194930  507510 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35119
	I0116 03:43:26.195420  507510 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:43:26.195898  507510 main.go:141] libmachine: Using API Version  1
	I0116 03:43:26.195925  507510 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:43:26.196303  507510 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:43:26.196517  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .DriverName
	I0116 03:43:26.196797  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetState
	I0116 03:43:26.198728  507510 fix.go:102] recreateIfNeeded on old-k8s-version-696770: state=Stopped err=<nil>
	I0116 03:43:26.198759  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .DriverName
	W0116 03:43:26.198959  507510 fix.go:128] unexpected machine state, will restart: <nil>
	I0116 03:43:26.201929  507510 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-696770" ...
	I0116 03:43:25.916953  507339 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0116 03:43:25.916987  507339 machine.go:91] provisioned docker machine in 1.05090319s
	I0116 03:43:25.917013  507339 start.go:300] post-start starting for "no-preload-666547" (driver="kvm2")
	I0116 03:43:25.917045  507339 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0116 03:43:25.917070  507339 main.go:141] libmachine: (no-preload-666547) Calling .DriverName
	I0116 03:43:25.917472  507339 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0116 03:43:25.917510  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHHostname
	I0116 03:43:25.920700  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:25.921097  507339 main.go:141] libmachine: (no-preload-666547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:5f:03", ip: ""} in network mk-no-preload-666547: {Iface:virbr2 ExpiryTime:2024-01-16 04:43:15 +0000 UTC Type:0 Mac:52:54:00:4e:5f:03 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:no-preload-666547 Clientid:01:52:54:00:4e:5f:03}
	I0116 03:43:25.921132  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined IP address 192.168.39.103 and MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:25.921386  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHPort
	I0116 03:43:25.921663  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHKeyPath
	I0116 03:43:25.921877  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHUsername
	I0116 03:43:25.922086  507339 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/no-preload-666547/id_rsa Username:docker}
	I0116 03:43:26.011987  507339 ssh_runner.go:195] Run: cat /etc/os-release
	I0116 03:43:26.016777  507339 info.go:137] Remote host: Buildroot 2021.02.12
	I0116 03:43:26.016813  507339 filesync.go:126] Scanning /home/jenkins/minikube-integration/17965-468241/.minikube/addons for local assets ...
	I0116 03:43:26.016889  507339 filesync.go:126] Scanning /home/jenkins/minikube-integration/17965-468241/.minikube/files for local assets ...
	I0116 03:43:26.016985  507339 filesync.go:149] local asset: /home/jenkins/minikube-integration/17965-468241/.minikube/files/etc/ssl/certs/4754782.pem -> 4754782.pem in /etc/ssl/certs
	I0116 03:43:26.017109  507339 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0116 03:43:26.027303  507339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/files/etc/ssl/certs/4754782.pem --> /etc/ssl/certs/4754782.pem (1708 bytes)
	I0116 03:43:26.051806  507339 start.go:303] post-start completed in 134.758948ms
	I0116 03:43:26.051849  507339 fix.go:56] fixHost completed within 22.25110408s
	I0116 03:43:26.051881  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHHostname
	I0116 03:43:26.055165  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:26.055568  507339 main.go:141] libmachine: (no-preload-666547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:5f:03", ip: ""} in network mk-no-preload-666547: {Iface:virbr2 ExpiryTime:2024-01-16 04:43:15 +0000 UTC Type:0 Mac:52:54:00:4e:5f:03 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:no-preload-666547 Clientid:01:52:54:00:4e:5f:03}
	I0116 03:43:26.055605  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined IP address 192.168.39.103 and MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:26.055763  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHPort
	I0116 03:43:26.055983  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHKeyPath
	I0116 03:43:26.056222  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHKeyPath
	I0116 03:43:26.056407  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHUsername
	I0116 03:43:26.056579  507339 main.go:141] libmachine: Using SSH client type: native
	I0116 03:43:26.056930  507339 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.103 22 <nil> <nil>}
	I0116 03:43:26.056948  507339 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0116 03:43:26.177329  507339 main.go:141] libmachine: SSH cmd err, output: <nil>: 1705376606.122912048
	
	I0116 03:43:26.177360  507339 fix.go:206] guest clock: 1705376606.122912048
	I0116 03:43:26.177367  507339 fix.go:219] Guest: 2024-01-16 03:43:26.122912048 +0000 UTC Remote: 2024-01-16 03:43:26.051855053 +0000 UTC m=+295.486361610 (delta=71.056995ms)
	I0116 03:43:26.177424  507339 fix.go:190] guest clock delta is within tolerance: 71.056995ms
	I0116 03:43:26.177430  507339 start.go:83] releasing machines lock for "no-preload-666547", held for 22.376720152s
	I0116 03:43:26.177461  507339 main.go:141] libmachine: (no-preload-666547) Calling .DriverName
	I0116 03:43:26.177761  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetIP
	I0116 03:43:26.180783  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:26.181087  507339 main.go:141] libmachine: (no-preload-666547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:5f:03", ip: ""} in network mk-no-preload-666547: {Iface:virbr2 ExpiryTime:2024-01-16 04:43:15 +0000 UTC Type:0 Mac:52:54:00:4e:5f:03 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:no-preload-666547 Clientid:01:52:54:00:4e:5f:03}
	I0116 03:43:26.181117  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined IP address 192.168.39.103 and MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:26.181281  507339 main.go:141] libmachine: (no-preload-666547) Calling .DriverName
	I0116 03:43:26.181876  507339 main.go:141] libmachine: (no-preload-666547) Calling .DriverName
	I0116 03:43:26.182068  507339 main.go:141] libmachine: (no-preload-666547) Calling .DriverName
	I0116 03:43:26.182154  507339 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0116 03:43:26.182203  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHHostname
	I0116 03:43:26.182337  507339 ssh_runner.go:195] Run: cat /version.json
	I0116 03:43:26.182366  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHHostname
	I0116 03:43:26.185253  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:26.185403  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:26.185625  507339 main.go:141] libmachine: (no-preload-666547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:5f:03", ip: ""} in network mk-no-preload-666547: {Iface:virbr2 ExpiryTime:2024-01-16 04:43:15 +0000 UTC Type:0 Mac:52:54:00:4e:5f:03 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:no-preload-666547 Clientid:01:52:54:00:4e:5f:03}
	I0116 03:43:26.185655  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined IP address 192.168.39.103 and MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:26.185807  507339 main.go:141] libmachine: (no-preload-666547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:5f:03", ip: ""} in network mk-no-preload-666547: {Iface:virbr2 ExpiryTime:2024-01-16 04:43:15 +0000 UTC Type:0 Mac:52:54:00:4e:5f:03 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:no-preload-666547 Clientid:01:52:54:00:4e:5f:03}
	I0116 03:43:26.185816  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHPort
	I0116 03:43:26.185855  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined IP address 192.168.39.103 and MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:26.185966  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHPort
	I0116 03:43:26.186041  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHKeyPath
	I0116 03:43:26.186137  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHKeyPath
	I0116 03:43:26.186220  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHUsername
	I0116 03:43:26.186306  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHUsername
	I0116 03:43:26.186383  507339 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/no-preload-666547/id_rsa Username:docker}
	I0116 03:43:26.186428  507339 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/no-preload-666547/id_rsa Username:docker}
	I0116 03:43:26.312441  507339 ssh_runner.go:195] Run: systemctl --version
	I0116 03:43:26.319016  507339 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0116 03:43:26.469427  507339 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0116 03:43:26.475759  507339 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0116 03:43:26.475896  507339 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0116 03:43:26.491920  507339 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0116 03:43:26.491952  507339 start.go:475] detecting cgroup driver to use...
	I0116 03:43:26.492112  507339 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0116 03:43:26.508122  507339 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0116 03:43:26.523664  507339 docker.go:217] disabling cri-docker service (if available) ...
	I0116 03:43:26.523754  507339 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0116 03:43:26.540173  507339 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0116 03:43:26.557370  507339 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0116 03:43:26.685134  507339 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0116 03:43:26.806555  507339 docker.go:233] disabling docker service ...
	I0116 03:43:26.806640  507339 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0116 03:43:26.821910  507339 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0116 03:43:26.836619  507339 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0116 03:43:26.950601  507339 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0116 03:43:27.077586  507339 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0116 03:43:27.091892  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0116 03:43:27.111772  507339 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0116 03:43:27.111856  507339 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 03:43:27.122183  507339 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0116 03:43:27.122261  507339 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 03:43:27.132861  507339 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 03:43:27.144003  507339 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
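The three sed edits above amount to a small override of the CRI-O drop-in; a sketch of how to confirm the result (keys as rewritten by the commands above; the file's TOML section headers are omitted here) would be:

    # expected effect of the sed edits on the CRI-O drop-in config
    grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
    # pause_image = "registry.k8s.io/pause:3.9"
    # cgroup_manager = "cgroupfs"
    # conmon_cgroup = "pod"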
	I0116 03:43:27.154747  507339 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0116 03:43:27.166236  507339 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0116 03:43:27.175337  507339 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0116 03:43:27.175410  507339 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0116 03:43:27.190891  507339 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0116 03:43:27.201216  507339 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0116 03:43:27.322701  507339 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0116 03:43:27.504197  507339 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0116 03:43:27.504292  507339 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0116 03:43:27.509879  507339 start.go:543] Will wait 60s for crictl version
	I0116 03:43:27.509972  507339 ssh_runner.go:195] Run: which crictl
	I0116 03:43:27.514555  507339 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0116 03:43:27.556338  507339 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0116 03:43:27.556444  507339 ssh_runner.go:195] Run: crio --version
	I0116 03:43:27.615814  507339 ssh_runner.go:195] Run: crio --version
	I0116 03:43:27.666262  507339 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.24.1 ...
	I0116 03:43:26.203694  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .Start
	I0116 03:43:26.203950  507510 main.go:141] libmachine: (old-k8s-version-696770) Ensuring networks are active...
	I0116 03:43:26.204831  507510 main.go:141] libmachine: (old-k8s-version-696770) Ensuring network default is active
	I0116 03:43:26.205251  507510 main.go:141] libmachine: (old-k8s-version-696770) Ensuring network mk-old-k8s-version-696770 is active
	I0116 03:43:26.205763  507510 main.go:141] libmachine: (old-k8s-version-696770) Getting domain xml...
	I0116 03:43:26.206485  507510 main.go:141] libmachine: (old-k8s-version-696770) Creating domain...
	I0116 03:43:26.558284  507510 main.go:141] libmachine: (old-k8s-version-696770) Waiting to get IP...
	I0116 03:43:26.559270  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:26.559701  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | unable to find current IP address of domain old-k8s-version-696770 in network mk-old-k8s-version-696770
	I0116 03:43:26.559793  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | I0116 03:43:26.559692  508427 retry.go:31] will retry after 243.799089ms: waiting for machine to come up
	I0116 03:43:26.805411  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:26.805914  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | unable to find current IP address of domain old-k8s-version-696770 in network mk-old-k8s-version-696770
	I0116 03:43:26.805948  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | I0116 03:43:26.805846  508427 retry.go:31] will retry after 346.727587ms: waiting for machine to come up
	I0116 03:43:27.154528  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:27.155074  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | unable to find current IP address of domain old-k8s-version-696770 in network mk-old-k8s-version-696770
	I0116 03:43:27.155107  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | I0116 03:43:27.155023  508427 retry.go:31] will retry after 357.633471ms: waiting for machine to come up
	I0116 03:43:27.514870  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:27.515490  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | unable to find current IP address of domain old-k8s-version-696770 in network mk-old-k8s-version-696770
	I0116 03:43:27.515517  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | I0116 03:43:27.515452  508427 retry.go:31] will retry after 582.001218ms: waiting for machine to come up
	I0116 03:43:28.099271  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:28.099783  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | unable to find current IP address of domain old-k8s-version-696770 in network mk-old-k8s-version-696770
	I0116 03:43:28.099817  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | I0116 03:43:28.099735  508427 retry.go:31] will retry after 747.661188ms: waiting for machine to come up
	I0116 03:43:28.849318  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:28.849836  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | unable to find current IP address of domain old-k8s-version-696770 in network mk-old-k8s-version-696770
	I0116 03:43:28.849872  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | I0116 03:43:28.849799  508427 retry.go:31] will retry after 953.610286ms: waiting for machine to come up
	I0116 03:43:27.667889  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetIP
	I0116 03:43:27.671385  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:27.671804  507339 main.go:141] libmachine: (no-preload-666547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:5f:03", ip: ""} in network mk-no-preload-666547: {Iface:virbr2 ExpiryTime:2024-01-16 04:43:15 +0000 UTC Type:0 Mac:52:54:00:4e:5f:03 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:no-preload-666547 Clientid:01:52:54:00:4e:5f:03}
	I0116 03:43:27.671840  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined IP address 192.168.39.103 and MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:27.672113  507339 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0116 03:43:27.676693  507339 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 03:43:27.690701  507339 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0116 03:43:27.690748  507339 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 03:43:27.731189  507339 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I0116 03:43:27.731219  507339 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.2 registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 registry.k8s.io/kube-scheduler:v1.29.0-rc.2 registry.k8s.io/kube-proxy:v1.29.0-rc.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0116 03:43:27.731321  507339 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 03:43:27.731358  507339 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I0116 03:43:27.731370  507339 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0116 03:43:27.731404  507339 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0116 03:43:27.731441  507339 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0116 03:43:27.731352  507339 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0116 03:43:27.731322  507339 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0116 03:43:27.731322  507339 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0116 03:43:27.733105  507339 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0116 03:43:27.733119  507339 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 03:43:27.733171  507339 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I0116 03:43:27.733171  507339 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0116 03:43:27.733110  507339 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0116 03:43:27.733118  507339 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0116 03:43:27.733113  507339 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0116 03:43:27.733270  507339 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0116 03:43:27.900005  507339 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I0116 03:43:27.901232  507339 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0116 03:43:27.903964  507339 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0116 03:43:27.907543  507339 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0116 03:43:27.908417  507339 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0116 03:43:27.909137  507339 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0116 03:43:27.953586  507339 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0116 03:43:28.024252  507339 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I0116 03:43:28.024310  507339 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I0116 03:43:28.024366  507339 ssh_runner.go:195] Run: which crictl
	I0116 03:43:28.042716  507339 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 03:43:28.078379  507339 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" does not exist at hash "d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d" in container runtime
	I0116 03:43:28.078438  507339 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0116 03:43:28.078503  507339 ssh_runner.go:195] Run: which crictl
	I0116 03:43:28.179590  507339 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" does not exist at hash "4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210" in container runtime
	I0116 03:43:28.179612  507339 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" does not exist at hash "bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f" in container runtime
	I0116 03:43:28.179661  507339 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0116 03:43:28.179661  507339 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0116 03:43:28.179720  507339 ssh_runner.go:195] Run: which crictl
	I0116 03:43:28.179722  507339 ssh_runner.go:195] Run: which crictl
	I0116 03:43:28.179729  507339 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0116 03:43:28.179750  507339 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0116 03:43:28.179785  507339 ssh_runner.go:195] Run: which crictl
	I0116 03:43:28.179804  507339 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.2" does not exist at hash "cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834" in container runtime
	I0116 03:43:28.179865  507339 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0116 03:43:28.179906  507339 ssh_runner.go:195] Run: which crictl
	I0116 03:43:28.179812  507339 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I0116 03:43:28.179950  507339 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0116 03:43:28.179977  507339 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 03:43:28.180011  507339 ssh_runner.go:195] Run: which crictl
	I0116 03:43:28.180009  507339 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0116 03:43:28.196999  507339 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0116 03:43:28.197021  507339 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0116 03:43:28.197157  507339 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0116 03:43:28.305002  507339 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0116 03:43:28.305117  507339 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17965-468241/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I0116 03:43:28.305044  507339 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 03:43:28.305231  507339 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.10-0
	I0116 03:43:28.317016  507339 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17965-468241/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2
	I0116 03:43:28.317149  507339 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0116 03:43:28.346291  507339 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17965-468241/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2
	I0116 03:43:28.346393  507339 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17965-468241/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2
	I0116 03:43:28.346434  507339 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0116 03:43:28.346518  507339 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0116 03:43:28.346547  507339 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17965-468241/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0116 03:43:28.346598  507339 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.10-0 (exists)
	I0116 03:43:28.346618  507339 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.10-0
	I0116 03:43:28.346631  507339 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0116 03:43:28.346650  507339 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
	I0116 03:43:28.425129  507339 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17965-468241/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0116 03:43:28.425217  507339 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2 (exists)
	I0116 03:43:28.425319  507339 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0116 03:43:28.425317  507339 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2 (exists)
	I0116 03:43:28.425377  507339 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17965-468241/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2
	I0116 03:43:28.425391  507339 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2 (exists)
	I0116 03:43:28.425441  507339 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0116 03:43:29.805277  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:29.805642  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | unable to find current IP address of domain old-k8s-version-696770 in network mk-old-k8s-version-696770
	I0116 03:43:29.805677  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | I0116 03:43:29.805586  508427 retry.go:31] will retry after 734.396993ms: waiting for machine to come up
	I0116 03:43:30.541337  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:30.541794  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | unable to find current IP address of domain old-k8s-version-696770 in network mk-old-k8s-version-696770
	I0116 03:43:30.541828  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | I0116 03:43:30.541741  508427 retry.go:31] will retry after 1.035836118s: waiting for machine to come up
	I0116 03:43:31.579576  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:31.580093  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | unable to find current IP address of domain old-k8s-version-696770 in network mk-old-k8s-version-696770
	I0116 03:43:31.580118  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | I0116 03:43:31.580070  508427 retry.go:31] will retry after 1.723172168s: waiting for machine to come up
	I0116 03:43:33.305247  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:33.305726  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | unable to find current IP address of domain old-k8s-version-696770 in network mk-old-k8s-version-696770
	I0116 03:43:33.305759  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | I0116 03:43:33.305669  508427 retry.go:31] will retry after 1.465747661s: waiting for machine to come up
	I0116 03:43:32.396858  507339 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1: (4.050189724s)
	I0116 03:43:32.396913  507339 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0116 03:43:32.396956  507339 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (3.971489155s)
	I0116 03:43:32.397006  507339 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2 (exists)
	I0116 03:43:32.397028  507339 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (3.971686012s)
	I0116 03:43:32.397043  507339 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0116 03:43:32.397050  507339 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0: (4.050383438s)
	I0116 03:43:32.397063  507339 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17965-468241/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 from cache
	I0116 03:43:32.397093  507339 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0116 03:43:32.397172  507339 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0116 03:43:35.381615  507339 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (2.98440652s)
	I0116 03:43:35.381660  507339 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17965-468241/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 from cache
	I0116 03:43:35.381699  507339 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0116 03:43:35.381759  507339 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0116 03:43:34.773552  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:34.774149  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | unable to find current IP address of domain old-k8s-version-696770 in network mk-old-k8s-version-696770
	I0116 03:43:34.774182  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | I0116 03:43:34.774084  508427 retry.go:31] will retry after 1.94747868s: waiting for machine to come up
	I0116 03:43:36.722855  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:36.723416  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | unable to find current IP address of domain old-k8s-version-696770 in network mk-old-k8s-version-696770
	I0116 03:43:36.723448  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | I0116 03:43:36.723365  508427 retry.go:31] will retry after 2.550966562s: waiting for machine to come up
	I0116 03:43:39.276082  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:39.276671  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | unable to find current IP address of domain old-k8s-version-696770 in network mk-old-k8s-version-696770
	I0116 03:43:39.276710  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | I0116 03:43:39.276608  508427 retry.go:31] will retry after 3.317854993s: waiting for machine to come up
	I0116 03:43:38.162725  507339 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (2.780935577s)
	I0116 03:43:38.162760  507339 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17965-468241/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 from cache
	I0116 03:43:38.162792  507339 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0116 03:43:38.162843  507339 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0116 03:43:39.527575  507339 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (1.36469752s)
	I0116 03:43:39.527612  507339 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17965-468241/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 from cache
	I0116 03:43:39.527639  507339 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0116 03:43:39.527696  507339 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0116 03:43:42.595994  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:42.596424  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | unable to find current IP address of domain old-k8s-version-696770 in network mk-old-k8s-version-696770
	I0116 03:43:42.596458  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | I0116 03:43:42.596364  508427 retry.go:31] will retry after 4.913808783s: waiting for machine to come up
	I0116 03:43:41.690968  507339 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.16323953s)
	I0116 03:43:41.691007  507339 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17965-468241/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0116 03:43:41.691045  507339 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0116 03:43:41.691100  507339 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0116 03:43:43.849988  507339 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (2.158855886s)
	I0116 03:43:43.850023  507339 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17965-468241/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 from cache
	I0116 03:43:43.850052  507339 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0116 03:43:43.850107  507339 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0116 03:43:44.597660  507339 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17965-468241/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0116 03:43:44.597710  507339 cache_images.go:123] Successfully loaded all cached images
	I0116 03:43:44.597715  507339 cache_images.go:92] LoadImages completed in 16.866481277s
	I0116 03:43:44.597788  507339 ssh_runner.go:195] Run: crio config
	I0116 03:43:44.658055  507339 cni.go:84] Creating CNI manager for ""
	I0116 03:43:44.658081  507339 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 03:43:44.658104  507339 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0116 03:43:44.658124  507339 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.103 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-666547 NodeName:no-preload-666547 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.103"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.103 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0116 03:43:44.658290  507339 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.103
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-666547"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.103
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.103"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0116 03:43:44.658371  507339 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-666547 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.103
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-666547 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0116 03:43:44.658431  507339 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0116 03:43:44.668859  507339 binaries.go:44] Found k8s binaries, skipping transfer
	I0116 03:43:44.668934  507339 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0116 03:43:44.678543  507339 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (382 bytes)
	I0116 03:43:44.694998  507339 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0116 03:43:44.711256  507339 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2109 bytes)
	I0116 03:43:44.728203  507339 ssh_runner.go:195] Run: grep 192.168.39.103	control-plane.minikube.internal$ /etc/hosts
	I0116 03:43:44.732219  507339 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.103	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 03:43:44.744687  507339 certs.go:56] Setting up /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/no-preload-666547 for IP: 192.168.39.103
	I0116 03:43:44.744730  507339 certs.go:190] acquiring lock for shared ca certs: {Name:mkab5aeea3e0ed69f3592dc8f418e07c6075b130 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:43:44.744957  507339 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17965-468241/.minikube/ca.key
	I0116 03:43:44.745014  507339 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17965-468241/.minikube/proxy-client-ca.key
	I0116 03:43:44.745133  507339 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/no-preload-666547/client.key
	I0116 03:43:44.745226  507339 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/no-preload-666547/apiserver.key.f0189397
	I0116 03:43:44.745293  507339 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/no-preload-666547/proxy-client.key
	I0116 03:43:44.745431  507339 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/475478.pem (1338 bytes)
	W0116 03:43:44.745471  507339 certs.go:433] ignoring /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/475478_empty.pem, impossibly tiny 0 bytes
	I0116 03:43:44.745488  507339 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca-key.pem (1679 bytes)
	I0116 03:43:44.745541  507339 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem (1078 bytes)
	I0116 03:43:44.745582  507339 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/cert.pem (1123 bytes)
	I0116 03:43:44.745620  507339 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/key.pem (1679 bytes)
	I0116 03:43:44.745687  507339 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17965-468241/.minikube/files/etc/ssl/certs/4754782.pem (1708 bytes)
	I0116 03:43:44.746558  507339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/no-preload-666547/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0116 03:43:44.770889  507339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/no-preload-666547/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0116 03:43:44.795150  507339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/no-preload-666547/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0116 03:43:44.818047  507339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/no-preload-666547/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0116 03:43:44.842003  507339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0116 03:43:44.866125  507339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0116 03:43:44.890235  507339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0116 03:43:44.913732  507339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0116 03:43:44.937249  507339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/certs/475478.pem --> /usr/share/ca-certificates/475478.pem (1338 bytes)
	I0116 03:43:44.961628  507339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/files/etc/ssl/certs/4754782.pem --> /usr/share/ca-certificates/4754782.pem (1708 bytes)
	I0116 03:43:44.986672  507339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0116 03:43:45.010735  507339 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0116 03:43:45.028537  507339 ssh_runner.go:195] Run: openssl version
	I0116 03:43:45.034910  507339 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/475478.pem && ln -fs /usr/share/ca-certificates/475478.pem /etc/ssl/certs/475478.pem"
	I0116 03:43:45.046034  507339 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/475478.pem
	I0116 03:43:45.050965  507339 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 16 02:44 /usr/share/ca-certificates/475478.pem
	I0116 03:43:45.051059  507339 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/475478.pem
	I0116 03:43:45.057465  507339 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/475478.pem /etc/ssl/certs/51391683.0"
	I0116 03:43:45.068400  507339 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4754782.pem && ln -fs /usr/share/ca-certificates/4754782.pem /etc/ssl/certs/4754782.pem"
	I0116 03:43:45.079619  507339 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4754782.pem
	I0116 03:43:45.084545  507339 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 16 02:44 /usr/share/ca-certificates/4754782.pem
	I0116 03:43:45.084622  507339 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4754782.pem
	I0116 03:43:45.090638  507339 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4754782.pem /etc/ssl/certs/3ec20f2e.0"
	I0116 03:43:45.101658  507339 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0116 03:43:45.113091  507339 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0116 03:43:45.118085  507339 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 16 02:35 /usr/share/ca-certificates/minikubeCA.pem
	I0116 03:43:45.118153  507339 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0116 03:43:45.124100  507339 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0116 03:43:45.135338  507339 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0116 03:43:45.140230  507339 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0116 03:43:45.146566  507339 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0116 03:43:45.152839  507339 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0116 03:43:45.158917  507339 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0116 03:43:45.164984  507339 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0116 03:43:45.171049  507339 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0116 03:43:45.177547  507339 kubeadm.go:404] StartCluster: {Name:no-preload-666547 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-666547 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.103 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 03:43:45.177657  507339 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0116 03:43:45.177719  507339 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0116 03:43:45.221757  507339 cri.go:89] found id: ""
	I0116 03:43:45.221848  507339 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0116 03:43:45.233811  507339 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0116 03:43:45.233838  507339 kubeadm.go:636] restartCluster start
	I0116 03:43:45.233906  507339 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0116 03:43:45.244810  507339 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:43:45.245999  507339 kubeconfig.go:92] found "no-preload-666547" server: "https://192.168.39.103:8443"
	I0116 03:43:45.248711  507339 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0116 03:43:45.260979  507339 api_server.go:166] Checking apiserver status ...
	I0116 03:43:45.261066  507339 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:43:45.276682  507339 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:43:48.709239  507889 start.go:369] acquired machines lock for "default-k8s-diff-port-434445" in 3m31.985691976s
	I0116 03:43:48.709311  507889 start.go:96] Skipping create...Using existing machine configuration
	I0116 03:43:48.709333  507889 fix.go:54] fixHost starting: 
	I0116 03:43:48.709815  507889 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:43:48.709867  507889 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:43:48.726637  507889 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45373
	I0116 03:43:48.727122  507889 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:43:48.727702  507889 main.go:141] libmachine: Using API Version  1
	I0116 03:43:48.727737  507889 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:43:48.728104  507889 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:43:48.728310  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .DriverName
	I0116 03:43:48.728475  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetState
	I0116 03:43:48.730338  507889 fix.go:102] recreateIfNeeded on default-k8s-diff-port-434445: state=Stopped err=<nil>
	I0116 03:43:48.730361  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .DriverName
	W0116 03:43:48.730545  507889 fix.go:128] unexpected machine state, will restart: <nil>
	I0116 03:43:48.733848  507889 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-434445" ...
	I0116 03:43:47.512288  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:47.512755  507510 main.go:141] libmachine: (old-k8s-version-696770) Found IP for machine: 192.168.61.167
	I0116 03:43:47.512793  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has current primary IP address 192.168.61.167 and MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:47.512804  507510 main.go:141] libmachine: (old-k8s-version-696770) Reserving static IP address...
	I0116 03:43:47.513157  507510 main.go:141] libmachine: (old-k8s-version-696770) Reserved static IP address: 192.168.61.167
	I0116 03:43:47.513194  507510 main.go:141] libmachine: (old-k8s-version-696770) Waiting for SSH to be available...
	I0116 03:43:47.513218  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | found host DHCP lease matching {name: "old-k8s-version-696770", mac: "52:54:00:37:20:1a", ip: "192.168.61.167"} in network mk-old-k8s-version-696770: {Iface:virbr3 ExpiryTime:2024-01-16 04:43:38 +0000 UTC Type:0 Mac:52:54:00:37:20:1a Iaid: IPaddr:192.168.61.167 Prefix:24 Hostname:old-k8s-version-696770 Clientid:01:52:54:00:37:20:1a}
	I0116 03:43:47.513242  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | skip adding static IP to network mk-old-k8s-version-696770 - found existing host DHCP lease matching {name: "old-k8s-version-696770", mac: "52:54:00:37:20:1a", ip: "192.168.61.167"}
	I0116 03:43:47.513259  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | Getting to WaitForSSH function...
	I0116 03:43:47.515438  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:47.515887  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:20:1a", ip: ""} in network mk-old-k8s-version-696770: {Iface:virbr3 ExpiryTime:2024-01-16 04:43:38 +0000 UTC Type:0 Mac:52:54:00:37:20:1a Iaid: IPaddr:192.168.61.167 Prefix:24 Hostname:old-k8s-version-696770 Clientid:01:52:54:00:37:20:1a}
	I0116 03:43:47.515928  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined IP address 192.168.61.167 and MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:47.516089  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | Using SSH client type: external
	I0116 03:43:47.516124  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | Using SSH private key: /home/jenkins/minikube-integration/17965-468241/.minikube/machines/old-k8s-version-696770/id_rsa (-rw-------)
	I0116 03:43:47.516160  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.167 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17965-468241/.minikube/machines/old-k8s-version-696770/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0116 03:43:47.516182  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | About to run SSH command:
	I0116 03:43:47.516203  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | exit 0
	I0116 03:43:47.608193  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | SSH cmd err, output: <nil>: 
	I0116 03:43:47.608599  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetConfigRaw
	I0116 03:43:47.609195  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetIP
	I0116 03:43:47.611633  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:47.612018  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:20:1a", ip: ""} in network mk-old-k8s-version-696770: {Iface:virbr3 ExpiryTime:2024-01-16 04:43:38 +0000 UTC Type:0 Mac:52:54:00:37:20:1a Iaid: IPaddr:192.168.61.167 Prefix:24 Hostname:old-k8s-version-696770 Clientid:01:52:54:00:37:20:1a}
	I0116 03:43:47.612068  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined IP address 192.168.61.167 and MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:47.612355  507510 profile.go:148] Saving config to /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/old-k8s-version-696770/config.json ...
	I0116 03:43:47.612601  507510 machine.go:88] provisioning docker machine ...
	I0116 03:43:47.612628  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .DriverName
	I0116 03:43:47.612872  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetMachineName
	I0116 03:43:47.613047  507510 buildroot.go:166] provisioning hostname "old-k8s-version-696770"
	I0116 03:43:47.613068  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetMachineName
	I0116 03:43:47.613195  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHHostname
	I0116 03:43:47.615457  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:47.615901  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:20:1a", ip: ""} in network mk-old-k8s-version-696770: {Iface:virbr3 ExpiryTime:2024-01-16 04:43:38 +0000 UTC Type:0 Mac:52:54:00:37:20:1a Iaid: IPaddr:192.168.61.167 Prefix:24 Hostname:old-k8s-version-696770 Clientid:01:52:54:00:37:20:1a}
	I0116 03:43:47.615928  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined IP address 192.168.61.167 and MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:47.616111  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHPort
	I0116 03:43:47.616292  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHKeyPath
	I0116 03:43:47.616489  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHKeyPath
	I0116 03:43:47.616687  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHUsername
	I0116 03:43:47.616889  507510 main.go:141] libmachine: Using SSH client type: native
	I0116 03:43:47.617280  507510 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.61.167 22 <nil> <nil>}
	I0116 03:43:47.617297  507510 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-696770 && echo "old-k8s-version-696770" | sudo tee /etc/hostname
	I0116 03:43:47.745448  507510 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-696770
	
	I0116 03:43:47.745482  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHHostname
	I0116 03:43:47.748812  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:47.749135  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:20:1a", ip: ""} in network mk-old-k8s-version-696770: {Iface:virbr3 ExpiryTime:2024-01-16 04:43:38 +0000 UTC Type:0 Mac:52:54:00:37:20:1a Iaid: IPaddr:192.168.61.167 Prefix:24 Hostname:old-k8s-version-696770 Clientid:01:52:54:00:37:20:1a}
	I0116 03:43:47.749171  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined IP address 192.168.61.167 and MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:47.749296  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHPort
	I0116 03:43:47.749525  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHKeyPath
	I0116 03:43:47.749715  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHKeyPath
	I0116 03:43:47.749872  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHUsername
	I0116 03:43:47.750019  507510 main.go:141] libmachine: Using SSH client type: native
	I0116 03:43:47.750339  507510 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.61.167 22 <nil> <nil>}
	I0116 03:43:47.750357  507510 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-696770' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-696770/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-696770' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0116 03:43:47.876917  507510 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0116 03:43:47.876957  507510 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17965-468241/.minikube CaCertPath:/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17965-468241/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17965-468241/.minikube}
	I0116 03:43:47.877011  507510 buildroot.go:174] setting up certificates
	I0116 03:43:47.877026  507510 provision.go:83] configureAuth start
	I0116 03:43:47.877041  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetMachineName
	I0116 03:43:47.877378  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetIP
	I0116 03:43:47.880453  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:47.880836  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:20:1a", ip: ""} in network mk-old-k8s-version-696770: {Iface:virbr3 ExpiryTime:2024-01-16 04:43:38 +0000 UTC Type:0 Mac:52:54:00:37:20:1a Iaid: IPaddr:192.168.61.167 Prefix:24 Hostname:old-k8s-version-696770 Clientid:01:52:54:00:37:20:1a}
	I0116 03:43:47.880869  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined IP address 192.168.61.167 and MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:47.881010  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHHostname
	I0116 03:43:47.883053  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:47.883415  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:20:1a", ip: ""} in network mk-old-k8s-version-696770: {Iface:virbr3 ExpiryTime:2024-01-16 04:43:38 +0000 UTC Type:0 Mac:52:54:00:37:20:1a Iaid: IPaddr:192.168.61.167 Prefix:24 Hostname:old-k8s-version-696770 Clientid:01:52:54:00:37:20:1a}
	I0116 03:43:47.883448  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined IP address 192.168.61.167 and MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:47.883635  507510 provision.go:138] copyHostCerts
	I0116 03:43:47.883706  507510 exec_runner.go:144] found /home/jenkins/minikube-integration/17965-468241/.minikube/ca.pem, removing ...
	I0116 03:43:47.883717  507510 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17965-468241/.minikube/ca.pem
	I0116 03:43:47.883778  507510 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17965-468241/.minikube/ca.pem (1078 bytes)
	I0116 03:43:47.883864  507510 exec_runner.go:144] found /home/jenkins/minikube-integration/17965-468241/.minikube/cert.pem, removing ...
	I0116 03:43:47.883871  507510 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17965-468241/.minikube/cert.pem
	I0116 03:43:47.883893  507510 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17965-468241/.minikube/cert.pem (1123 bytes)
	I0116 03:43:47.883943  507510 exec_runner.go:144] found /home/jenkins/minikube-integration/17965-468241/.minikube/key.pem, removing ...
	I0116 03:43:47.883950  507510 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17965-468241/.minikube/key.pem
	I0116 03:43:47.883965  507510 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17965-468241/.minikube/key.pem (1679 bytes)
	I0116 03:43:47.884010  507510 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17965-468241/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-696770 san=[192.168.61.167 192.168.61.167 localhost 127.0.0.1 minikube old-k8s-version-696770]
	I0116 03:43:47.946258  507510 provision.go:172] copyRemoteCerts
	I0116 03:43:47.946327  507510 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0116 03:43:47.946354  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHHostname
	I0116 03:43:47.949417  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:47.949750  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:20:1a", ip: ""} in network mk-old-k8s-version-696770: {Iface:virbr3 ExpiryTime:2024-01-16 04:43:38 +0000 UTC Type:0 Mac:52:54:00:37:20:1a Iaid: IPaddr:192.168.61.167 Prefix:24 Hostname:old-k8s-version-696770 Clientid:01:52:54:00:37:20:1a}
	I0116 03:43:47.949784  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined IP address 192.168.61.167 and MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:47.949941  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHPort
	I0116 03:43:47.950180  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHKeyPath
	I0116 03:43:47.950333  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHUsername
	I0116 03:43:47.950478  507510 sshutil.go:53] new ssh client: &{IP:192.168.61.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/old-k8s-version-696770/id_rsa Username:docker}
	I0116 03:43:48.042564  507510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0116 03:43:48.066519  507510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0116 03:43:48.090127  507510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0116 03:43:48.113387  507510 provision.go:86] duration metric: configureAuth took 236.343393ms
	I0116 03:43:48.113428  507510 buildroot.go:189] setting minikube options for container-runtime
	I0116 03:43:48.113662  507510 config.go:182] Loaded profile config "old-k8s-version-696770": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0116 03:43:48.113758  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHHostname
	I0116 03:43:48.116735  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:48.117144  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:20:1a", ip: ""} in network mk-old-k8s-version-696770: {Iface:virbr3 ExpiryTime:2024-01-16 04:43:38 +0000 UTC Type:0 Mac:52:54:00:37:20:1a Iaid: IPaddr:192.168.61.167 Prefix:24 Hostname:old-k8s-version-696770 Clientid:01:52:54:00:37:20:1a}
	I0116 03:43:48.117187  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined IP address 192.168.61.167 and MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:48.117328  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHPort
	I0116 03:43:48.117529  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHKeyPath
	I0116 03:43:48.117725  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHKeyPath
	I0116 03:43:48.117892  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHUsername
	I0116 03:43:48.118118  507510 main.go:141] libmachine: Using SSH client type: native
	I0116 03:43:48.118427  507510 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.61.167 22 <nil> <nil>}
	I0116 03:43:48.118450  507510 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0116 03:43:48.458094  507510 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0116 03:43:48.458129  507510 machine.go:91] provisioned docker machine in 845.51167ms
	I0116 03:43:48.458141  507510 start.go:300] post-start starting for "old-k8s-version-696770" (driver="kvm2")
	I0116 03:43:48.458153  507510 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0116 03:43:48.458172  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .DriverName
	I0116 03:43:48.458616  507510 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0116 03:43:48.458650  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHHostname
	I0116 03:43:48.461476  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:48.461858  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:20:1a", ip: ""} in network mk-old-k8s-version-696770: {Iface:virbr3 ExpiryTime:2024-01-16 04:43:38 +0000 UTC Type:0 Mac:52:54:00:37:20:1a Iaid: IPaddr:192.168.61.167 Prefix:24 Hostname:old-k8s-version-696770 Clientid:01:52:54:00:37:20:1a}
	I0116 03:43:48.461908  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined IP address 192.168.61.167 and MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:48.462029  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHPort
	I0116 03:43:48.462272  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHKeyPath
	I0116 03:43:48.462460  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHUsername
	I0116 03:43:48.462643  507510 sshutil.go:53] new ssh client: &{IP:192.168.61.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/old-k8s-version-696770/id_rsa Username:docker}
	I0116 03:43:48.550436  507510 ssh_runner.go:195] Run: cat /etc/os-release
	I0116 03:43:48.555225  507510 info.go:137] Remote host: Buildroot 2021.02.12
	I0116 03:43:48.555261  507510 filesync.go:126] Scanning /home/jenkins/minikube-integration/17965-468241/.minikube/addons for local assets ...
	I0116 03:43:48.555349  507510 filesync.go:126] Scanning /home/jenkins/minikube-integration/17965-468241/.minikube/files for local assets ...
	I0116 03:43:48.555434  507510 filesync.go:149] local asset: /home/jenkins/minikube-integration/17965-468241/.minikube/files/etc/ssl/certs/4754782.pem -> 4754782.pem in /etc/ssl/certs
	I0116 03:43:48.555560  507510 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0116 03:43:48.565598  507510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/files/etc/ssl/certs/4754782.pem --> /etc/ssl/certs/4754782.pem (1708 bytes)
	I0116 03:43:48.588611  507510 start.go:303] post-start completed in 130.45305ms
	I0116 03:43:48.588642  507510 fix.go:56] fixHost completed within 22.411021213s
	I0116 03:43:48.588675  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHHostname
	I0116 03:43:48.591220  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:48.591582  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:20:1a", ip: ""} in network mk-old-k8s-version-696770: {Iface:virbr3 ExpiryTime:2024-01-16 04:43:38 +0000 UTC Type:0 Mac:52:54:00:37:20:1a Iaid: IPaddr:192.168.61.167 Prefix:24 Hostname:old-k8s-version-696770 Clientid:01:52:54:00:37:20:1a}
	I0116 03:43:48.591618  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined IP address 192.168.61.167 and MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:48.591779  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHPort
	I0116 03:43:48.592014  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHKeyPath
	I0116 03:43:48.592216  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHKeyPath
	I0116 03:43:48.592412  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHUsername
	I0116 03:43:48.592567  507510 main.go:141] libmachine: Using SSH client type: native
	I0116 03:43:48.592933  507510 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.61.167 22 <nil> <nil>}
	I0116 03:43:48.592950  507510 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0116 03:43:48.709079  507510 main.go:141] libmachine: SSH cmd err, output: <nil>: 1705376628.651647278
	
	I0116 03:43:48.709103  507510 fix.go:206] guest clock: 1705376628.651647278
	I0116 03:43:48.709111  507510 fix.go:219] Guest: 2024-01-16 03:43:48.651647278 +0000 UTC Remote: 2024-01-16 03:43:48.588648172 +0000 UTC m=+299.078902394 (delta=62.999106ms)
	I0116 03:43:48.709134  507510 fix.go:190] guest clock delta is within tolerance: 62.999106ms
	I0116 03:43:48.709140  507510 start.go:83] releasing machines lock for "old-k8s-version-696770", held for 22.531556099s
	I0116 03:43:48.709169  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .DriverName
	I0116 03:43:48.709519  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetIP
	I0116 03:43:48.712438  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:48.712770  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:20:1a", ip: ""} in network mk-old-k8s-version-696770: {Iface:virbr3 ExpiryTime:2024-01-16 04:43:38 +0000 UTC Type:0 Mac:52:54:00:37:20:1a Iaid: IPaddr:192.168.61.167 Prefix:24 Hostname:old-k8s-version-696770 Clientid:01:52:54:00:37:20:1a}
	I0116 03:43:48.712825  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined IP address 192.168.61.167 and MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:48.712921  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .DriverName
	I0116 03:43:48.713501  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .DriverName
	I0116 03:43:48.713677  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .DriverName
	I0116 03:43:48.713768  507510 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0116 03:43:48.713816  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHHostname
	I0116 03:43:48.713920  507510 ssh_runner.go:195] Run: cat /version.json
	I0116 03:43:48.713951  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHHostname
	I0116 03:43:48.716415  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:48.716697  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:48.716820  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:20:1a", ip: ""} in network mk-old-k8s-version-696770: {Iface:virbr3 ExpiryTime:2024-01-16 04:43:38 +0000 UTC Type:0 Mac:52:54:00:37:20:1a Iaid: IPaddr:192.168.61.167 Prefix:24 Hostname:old-k8s-version-696770 Clientid:01:52:54:00:37:20:1a}
	I0116 03:43:48.716846  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined IP address 192.168.61.167 and MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:48.716995  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHPort
	I0116 03:43:48.717093  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:20:1a", ip: ""} in network mk-old-k8s-version-696770: {Iface:virbr3 ExpiryTime:2024-01-16 04:43:38 +0000 UTC Type:0 Mac:52:54:00:37:20:1a Iaid: IPaddr:192.168.61.167 Prefix:24 Hostname:old-k8s-version-696770 Clientid:01:52:54:00:37:20:1a}
	I0116 03:43:48.717123  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined IP address 192.168.61.167 and MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:48.717394  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHPort
	I0116 03:43:48.717402  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHKeyPath
	I0116 03:43:48.717638  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHKeyPath
	I0116 03:43:48.717650  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHUsername
	I0116 03:43:48.717791  507510 sshutil.go:53] new ssh client: &{IP:192.168.61.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/old-k8s-version-696770/id_rsa Username:docker}
	I0116 03:43:48.717824  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHUsername
	I0116 03:43:48.717956  507510 sshutil.go:53] new ssh client: &{IP:192.168.61.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/old-k8s-version-696770/id_rsa Username:docker}
	I0116 03:43:48.838506  507510 ssh_runner.go:195] Run: systemctl --version
	I0116 03:43:48.845152  507510 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0116 03:43:49.001791  507510 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0116 03:43:49.008474  507510 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0116 03:43:49.008558  507510 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0116 03:43:49.024030  507510 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0116 03:43:49.024087  507510 start.go:475] detecting cgroup driver to use...
	I0116 03:43:49.024164  507510 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0116 03:43:49.038853  507510 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0116 03:43:49.056228  507510 docker.go:217] disabling cri-docker service (if available) ...
	I0116 03:43:49.056308  507510 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0116 03:43:49.071266  507510 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0116 03:43:49.085793  507510 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0116 03:43:49.211294  507510 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0116 03:43:49.338893  507510 docker.go:233] disabling docker service ...
	I0116 03:43:49.338971  507510 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0116 03:43:49.354423  507510 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0116 03:43:49.367355  507510 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0116 03:43:49.483277  507510 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0116 03:43:49.593977  507510 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0116 03:43:49.607374  507510 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0116 03:43:49.626781  507510 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I0116 03:43:49.626846  507510 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 03:43:49.637809  507510 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0116 03:43:49.637892  507510 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 03:43:49.648162  507510 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 03:43:49.658305  507510 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 03:43:49.669557  507510 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0116 03:43:49.680190  507510 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0116 03:43:49.689125  507510 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0116 03:43:49.689199  507510 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0116 03:43:49.703247  507510 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0116 03:43:49.713826  507510 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0116 03:43:49.829677  507510 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0116 03:43:50.009393  507510 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0116 03:43:50.009489  507510 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0116 03:43:50.016443  507510 start.go:543] Will wait 60s for crictl version
	I0116 03:43:50.016521  507510 ssh_runner.go:195] Run: which crictl
	I0116 03:43:50.020560  507510 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0116 03:43:50.056652  507510 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0116 03:43:50.056733  507510 ssh_runner.go:195] Run: crio --version
	I0116 03:43:50.104202  507510 ssh_runner.go:195] Run: crio --version
	I0116 03:43:50.150215  507510 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
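The CRI-O preparation logged above is a fixed sequence of shell edits pushed over SSH: pin the pause image, force the cgroupfs cgroup manager, reset conmon_cgroup, enable IP forwarding, then restart crio. A rough Go sketch of replaying that same sequence through a hypothetical command runner (not minikube's actual ssh_runner API) could look like this:

package main

import "fmt"

// configureCRIO replays the CRI-O setup commands visible in the log above.
// The run parameter is a hypothetical stand-in for a command runner; in the
// real test these commands are executed on the guest VM over SSH.
func configureCRIO(run func(cmd string) error) error {
	cmds := []string{
		`sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"`,
		`sudo systemctl daemon-reload`,
		`sudo systemctl restart crio`,
	}
	for _, c := range cmds {
		if err := run(c); err != nil {
			return fmt.Errorf("%q failed: %w", c, err)
		}
	}
	return nil
}

func main() {
	// Dry run: print what would be executed on the guest.
	_ = configureCRIO(func(cmd string) error { fmt.Println(cmd); return nil })
}
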
	I0116 03:43:45.761989  507339 api_server.go:166] Checking apiserver status ...
	I0116 03:43:45.762077  507339 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:43:45.776377  507339 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:43:46.262107  507339 api_server.go:166] Checking apiserver status ...
	I0116 03:43:46.262205  507339 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:43:46.274748  507339 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:43:46.761344  507339 api_server.go:166] Checking apiserver status ...
	I0116 03:43:46.761434  507339 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:43:46.773509  507339 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:43:47.261093  507339 api_server.go:166] Checking apiserver status ...
	I0116 03:43:47.261222  507339 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:43:47.272584  507339 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:43:47.761119  507339 api_server.go:166] Checking apiserver status ...
	I0116 03:43:47.761204  507339 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:43:47.773674  507339 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:43:48.261288  507339 api_server.go:166] Checking apiserver status ...
	I0116 03:43:48.261448  507339 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:43:48.273461  507339 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:43:48.762071  507339 api_server.go:166] Checking apiserver status ...
	I0116 03:43:48.762205  507339 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:43:48.778093  507339 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:43:49.261032  507339 api_server.go:166] Checking apiserver status ...
	I0116 03:43:49.261139  507339 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:43:49.273090  507339 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:43:49.761233  507339 api_server.go:166] Checking apiserver status ...
	I0116 03:43:49.761348  507339 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:43:49.773529  507339 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:43:50.261720  507339 api_server.go:166] Checking apiserver status ...
	I0116 03:43:50.261822  507339 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:43:50.277403  507339 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:43:48.735627  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .Start
	I0116 03:43:48.735865  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Ensuring networks are active...
	I0116 03:43:48.736708  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Ensuring network default is active
	I0116 03:43:48.737105  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Ensuring network mk-default-k8s-diff-port-434445 is active
	I0116 03:43:48.737445  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Getting domain xml...
	I0116 03:43:48.738086  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Creating domain...
	I0116 03:43:49.085479  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Waiting to get IP...
	I0116 03:43:49.086513  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:43:49.086907  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | unable to find current IP address of domain default-k8s-diff-port-434445 in network mk-default-k8s-diff-port-434445
	I0116 03:43:49.086993  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | I0116 03:43:49.086879  508579 retry.go:31] will retry after 251.682416ms: waiting for machine to come up
	I0116 03:43:49.340560  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:43:49.341196  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | unable to find current IP address of domain default-k8s-diff-port-434445 in network mk-default-k8s-diff-port-434445
	I0116 03:43:49.341235  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | I0116 03:43:49.341140  508579 retry.go:31] will retry after 288.322607ms: waiting for machine to come up
	I0116 03:43:49.630920  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:43:49.631449  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | unable to find current IP address of domain default-k8s-diff-port-434445 in network mk-default-k8s-diff-port-434445
	I0116 03:43:49.631478  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | I0116 03:43:49.631404  508579 retry.go:31] will retry after 305.730946ms: waiting for machine to come up
	I0116 03:43:49.938846  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:43:49.939357  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | unable to find current IP address of domain default-k8s-diff-port-434445 in network mk-default-k8s-diff-port-434445
	I0116 03:43:49.939381  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | I0116 03:43:49.939307  508579 retry.go:31] will retry after 431.952943ms: waiting for machine to come up
	I0116 03:43:50.372921  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:43:50.373426  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | unable to find current IP address of domain default-k8s-diff-port-434445 in network mk-default-k8s-diff-port-434445
	I0116 03:43:50.373453  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | I0116 03:43:50.373368  508579 retry.go:31] will retry after 557.336026ms: waiting for machine to come up
	I0116 03:43:50.932300  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:43:50.932902  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | unable to find current IP address of domain default-k8s-diff-port-434445 in network mk-default-k8s-diff-port-434445
	I0116 03:43:50.932933  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | I0116 03:43:50.932837  508579 retry.go:31] will retry after 652.034162ms: waiting for machine to come up
	I0116 03:43:51.586765  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:43:51.587332  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | unable to find current IP address of domain default-k8s-diff-port-434445 in network mk-default-k8s-diff-port-434445
	I0116 03:43:51.587365  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | I0116 03:43:51.587290  508579 retry.go:31] will retry after 1.078418867s: waiting for machine to come up
	I0116 03:43:50.151763  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetIP
	I0116 03:43:50.154861  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:50.155283  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:20:1a", ip: ""} in network mk-old-k8s-version-696770: {Iface:virbr3 ExpiryTime:2024-01-16 04:43:38 +0000 UTC Type:0 Mac:52:54:00:37:20:1a Iaid: IPaddr:192.168.61.167 Prefix:24 Hostname:old-k8s-version-696770 Clientid:01:52:54:00:37:20:1a}
	I0116 03:43:50.155331  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined IP address 192.168.61.167 and MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:50.155536  507510 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0116 03:43:50.160159  507510 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 03:43:50.173354  507510 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0116 03:43:50.173416  507510 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 03:43:50.227220  507510 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0116 03:43:50.227308  507510 ssh_runner.go:195] Run: which lz4
	I0116 03:43:50.231565  507510 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0116 03:43:50.236133  507510 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0116 03:43:50.236169  507510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I0116 03:43:52.243584  507510 crio.go:444] Took 2.012049 seconds to copy over tarball
	I0116 03:43:52.243686  507510 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0116 03:43:50.761232  507339 api_server.go:166] Checking apiserver status ...
	I0116 03:43:50.761323  507339 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:43:50.777877  507339 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:43:51.261357  507339 api_server.go:166] Checking apiserver status ...
	I0116 03:43:51.261444  507339 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:43:51.280624  507339 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:43:51.761117  507339 api_server.go:166] Checking apiserver status ...
	I0116 03:43:51.761225  507339 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:43:51.775076  507339 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:43:52.261857  507339 api_server.go:166] Checking apiserver status ...
	I0116 03:43:52.261948  507339 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:43:52.279844  507339 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:43:52.761400  507339 api_server.go:166] Checking apiserver status ...
	I0116 03:43:52.761493  507339 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:43:52.773869  507339 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:43:53.261155  507339 api_server.go:166] Checking apiserver status ...
	I0116 03:43:53.261263  507339 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:43:53.273774  507339 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:43:53.761370  507339 api_server.go:166] Checking apiserver status ...
	I0116 03:43:53.761500  507339 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:43:53.773900  507339 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:43:54.262012  507339 api_server.go:166] Checking apiserver status ...
	I0116 03:43:54.262134  507339 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:43:54.277928  507339 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:43:54.761492  507339 api_server.go:166] Checking apiserver status ...
	I0116 03:43:54.761642  507339 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:43:54.774531  507339 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:43:55.261302  507339 api_server.go:166] Checking apiserver status ...
	I0116 03:43:55.261395  507339 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:43:55.274178  507339 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:43:55.274226  507339 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0116 03:43:55.274272  507339 kubeadm.go:1135] stopping kube-system containers ...
	I0116 03:43:55.274293  507339 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0116 03:43:55.274360  507339 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0116 03:43:55.321847  507339 cri.go:89] found id: ""
	I0116 03:43:55.321943  507339 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0116 03:43:55.339190  507339 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0116 03:43:55.348548  507339 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0116 03:43:55.348637  507339 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0116 03:43:55.358316  507339 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0116 03:43:55.358345  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:43:55.492932  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
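The long run of "Checking apiserver status" / pgrep failures above is a fixed-interval poll (roughly every 500ms, per the timestamps) that gives up when its context deadline expires; that is what produces the "needs reconfigure: apiserver error: context deadline exceeded" line and the kubeadm reconfiguration that follows. A minimal Go sketch of that poll-until-deadline pattern (runCommand is a hypothetical stand-in for the SSH runner, not minikube's real API) looks roughly like this:

package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

// pollAPIServer retries the pgrep lookup seen in the log until it succeeds
// or the context deadline expires, mirroring the ~500ms retry cadence above.
func pollAPIServer(ctx context.Context, runCommand func(string) (string, error)) error {
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	for {
		if pid, err := runCommand("sudo pgrep -xnf kube-apiserver.*minikube.*"); err == nil {
			fmt.Println("apiserver pid:", pid)
			return nil
		}
		select {
		case <-ctx.Done():
			// This branch corresponds to the "context deadline exceeded" log line.
			return fmt.Errorf("apiserver error: %w", ctx.Err())
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
	defer cancel()
	// Fake runner that always fails, as it would against a stopped apiserver.
	alwaysDown := func(string) (string, error) { return "", errors.New("exit status 1") }
	fmt.Println(pollAPIServer(ctx, alwaysDown))
}
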
	I0116 03:43:52.667882  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:43:52.668380  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | unable to find current IP address of domain default-k8s-diff-port-434445 in network mk-default-k8s-diff-port-434445
	I0116 03:43:52.668415  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | I0116 03:43:52.668311  508579 retry.go:31] will retry after 1.052441827s: waiting for machine to come up
	I0116 03:43:53.722859  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:43:53.723473  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | unable to find current IP address of domain default-k8s-diff-port-434445 in network mk-default-k8s-diff-port-434445
	I0116 03:43:53.723503  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | I0116 03:43:53.723429  508579 retry.go:31] will retry after 1.233090848s: waiting for machine to come up
	I0116 03:43:54.958519  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:43:54.958990  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | unable to find current IP address of domain default-k8s-diff-port-434445 in network mk-default-k8s-diff-port-434445
	I0116 03:43:54.959014  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | I0116 03:43:54.958934  508579 retry.go:31] will retry after 2.038449182s: waiting for machine to come up
	I0116 03:43:55.109598  507510 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.865872133s)
	I0116 03:43:55.109637  507510 crio.go:451] Took 2.866019 seconds to extract the tarball
	I0116 03:43:55.109652  507510 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0116 03:43:55.150763  507510 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 03:43:55.206497  507510 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0116 03:43:55.206525  507510 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0116 03:43:55.206597  507510 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 03:43:55.206619  507510 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0116 03:43:55.206660  507510 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0116 03:43:55.206682  507510 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0116 03:43:55.206601  507510 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0116 03:43:55.206622  507510 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0116 03:43:55.206790  507510 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0116 03:43:55.206801  507510 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0116 03:43:55.208228  507510 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 03:43:55.208246  507510 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0116 03:43:55.208245  507510 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0116 03:43:55.208247  507510 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0116 03:43:55.208291  507510 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0116 03:43:55.208295  507510 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0116 03:43:55.208291  507510 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0116 03:43:55.208610  507510 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0116 03:43:55.364082  507510 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0116 03:43:55.364096  507510 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0116 03:43:55.367820  507510 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0116 03:43:55.371639  507510 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0116 03:43:55.379423  507510 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0116 03:43:55.383569  507510 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0116 03:43:55.385854  507510 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0116 03:43:55.522241  507510 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 03:43:55.539971  507510 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0116 03:43:55.540031  507510 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0116 03:43:55.540113  507510 ssh_runner.go:195] Run: which crictl
	I0116 03:43:55.542332  507510 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0116 03:43:55.542389  507510 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0116 03:43:55.542441  507510 ssh_runner.go:195] Run: which crictl
	I0116 03:43:55.565552  507510 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0116 03:43:55.565679  507510 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0116 03:43:55.565761  507510 ssh_runner.go:195] Run: which crictl
	I0116 03:43:55.583839  507510 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0116 03:43:55.583890  507510 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0116 03:43:55.583942  507510 ssh_runner.go:195] Run: which crictl
	I0116 03:43:55.583847  507510 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0116 03:43:55.584023  507510 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0116 03:43:55.584073  507510 ssh_runner.go:195] Run: which crictl
	I0116 03:43:55.596487  507510 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0116 03:43:55.596555  507510 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I0116 03:43:55.596619  507510 ssh_runner.go:195] Run: which crictl
	I0116 03:43:55.605042  507510 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0116 03:43:55.605105  507510 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I0116 03:43:55.605162  507510 ssh_runner.go:195] Run: which crictl
	I0116 03:43:55.740186  507510 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0116 03:43:55.740225  507510 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I0116 03:43:55.740283  507510 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0116 03:43:55.740334  507510 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0116 03:43:55.740384  507510 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I0116 03:43:55.740441  507510 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I0116 03:43:55.740450  507510 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I0116 03:43:55.900542  507510 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17965-468241/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0116 03:43:55.906506  507510 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17965-468241/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0116 03:43:55.914158  507510 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17965-468241/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0116 03:43:55.914171  507510 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17965-468241/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0116 03:43:55.926953  507510 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17965-468241/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0116 03:43:55.927034  507510 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17965-468241/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0116 03:43:55.927137  507510 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17965-468241/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0116 03:43:55.927186  507510 cache_images.go:92] LoadImages completed in 720.646435ms
	W0116 03:43:55.927280  507510 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17965-468241/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0: no such file or directory
	I0116 03:43:55.927362  507510 ssh_runner.go:195] Run: crio config
	I0116 03:43:55.989408  507510 cni.go:84] Creating CNI manager for ""
	I0116 03:43:55.989440  507510 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 03:43:55.989468  507510 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0116 03:43:55.989495  507510 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.167 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-696770 NodeName:old-k8s-version-696770 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.167"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.167 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0116 03:43:55.989657  507510 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.167
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-696770"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.167
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.167"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-696770
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.61.167:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0116 03:43:55.989757  507510 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-696770 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.167
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-696770 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0116 03:43:55.989819  507510 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0116 03:43:55.999676  507510 binaries.go:44] Found k8s binaries, skipping transfer
	I0116 03:43:55.999766  507510 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0116 03:43:56.009179  507510 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0116 03:43:56.028479  507510 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0116 03:43:56.045979  507510 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2181 bytes)
	I0116 03:43:56.067179  507510 ssh_runner.go:195] Run: grep 192.168.61.167	control-plane.minikube.internal$ /etc/hosts
	I0116 03:43:56.071532  507510 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.167	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 03:43:56.085960  507510 certs.go:56] Setting up /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/old-k8s-version-696770 for IP: 192.168.61.167
	I0116 03:43:56.086006  507510 certs.go:190] acquiring lock for shared ca certs: {Name:mkab5aeea3e0ed69f3592dc8f418e07c6075b130 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:43:56.086216  507510 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17965-468241/.minikube/ca.key
	I0116 03:43:56.086293  507510 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17965-468241/.minikube/proxy-client-ca.key
	I0116 03:43:56.086385  507510 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/old-k8s-version-696770/client.key
	I0116 03:43:56.086447  507510 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/old-k8s-version-696770/apiserver.key.1a2d2382
	I0116 03:43:56.086480  507510 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/old-k8s-version-696770/proxy-client.key
	I0116 03:43:56.086668  507510 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/475478.pem (1338 bytes)
	W0116 03:43:56.086711  507510 certs.go:433] ignoring /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/475478_empty.pem, impossibly tiny 0 bytes
	I0116 03:43:56.086721  507510 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca-key.pem (1679 bytes)
	I0116 03:43:56.086746  507510 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem (1078 bytes)
	I0116 03:43:56.086772  507510 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/cert.pem (1123 bytes)
	I0116 03:43:56.086795  507510 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/key.pem (1679 bytes)
	I0116 03:43:56.086833  507510 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17965-468241/.minikube/files/etc/ssl/certs/4754782.pem (1708 bytes)
	I0116 03:43:56.087557  507510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/old-k8s-version-696770/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0116 03:43:56.118148  507510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/old-k8s-version-696770/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0116 03:43:56.146632  507510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/old-k8s-version-696770/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0116 03:43:56.177146  507510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/old-k8s-version-696770/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0116 03:43:56.208800  507510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0116 03:43:56.237097  507510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0116 03:43:56.264559  507510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0116 03:43:56.294383  507510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0116 03:43:56.323966  507510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/certs/475478.pem --> /usr/share/ca-certificates/475478.pem (1338 bytes)
	I0116 03:43:56.350120  507510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/files/etc/ssl/certs/4754782.pem --> /usr/share/ca-certificates/4754782.pem (1708 bytes)
	I0116 03:43:56.379523  507510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0116 03:43:56.406312  507510 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0116 03:43:56.426149  507510 ssh_runner.go:195] Run: openssl version
	I0116 03:43:56.432150  507510 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0116 03:43:56.443200  507510 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0116 03:43:56.448268  507510 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 16 02:35 /usr/share/ca-certificates/minikubeCA.pem
	I0116 03:43:56.448343  507510 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0116 03:43:56.454227  507510 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0116 03:43:56.464467  507510 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/475478.pem && ln -fs /usr/share/ca-certificates/475478.pem /etc/ssl/certs/475478.pem"
	I0116 03:43:56.474769  507510 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/475478.pem
	I0116 03:43:56.480143  507510 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 16 02:44 /usr/share/ca-certificates/475478.pem
	I0116 03:43:56.480228  507510 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/475478.pem
	I0116 03:43:56.487996  507510 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/475478.pem /etc/ssl/certs/51391683.0"
	I0116 03:43:56.501097  507510 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4754782.pem && ln -fs /usr/share/ca-certificates/4754782.pem /etc/ssl/certs/4754782.pem"
	I0116 03:43:56.513266  507510 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4754782.pem
	I0116 03:43:56.518806  507510 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 16 02:44 /usr/share/ca-certificates/4754782.pem
	I0116 03:43:56.518891  507510 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4754782.pem
	I0116 03:43:56.527891  507510 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4754782.pem /etc/ssl/certs/3ec20f2e.0"
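For reference, the symlink names created above (b5213941.0, 51391683.0, 3ec20f2e.0) come from the `openssl x509 -hash -noout` output: OpenSSL looks up CA files in /etc/ssl/certs by subject-hash symlinks. A minimal Go sketch of that pattern, shelling out to the openssl binary; linkCertByHash is a hypothetical helper, not minikube's actual implementation:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // linkCertByHash computes the OpenSSL subject hash of a CA certificate and
    // symlinks it as <hash>.0 under /etc/ssl/certs so OpenSSL-based clients can
    // find it during verification (the same pattern the log above shows).
    func linkCertByHash(certPath string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		return fmt.Errorf("hashing %s: %w", certPath, err)
    	}
    	hash := strings.TrimSpace(string(out))
    	link := filepath.Join("/etc/ssl/certs", hash+".0")
    	// Replace any stale link, then point <hash>.0 at the certificate.
    	_ = os.Remove(link)
    	return os.Symlink(certPath, link)
    }

    func main() {
    	if err := linkCertByHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }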
	I0116 03:43:56.538719  507510 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0116 03:43:56.544298  507510 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0116 03:43:56.551048  507510 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0116 03:43:56.557847  507510 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0116 03:43:56.567757  507510 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0116 03:43:56.575977  507510 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0116 03:43:56.584514  507510 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
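The `-checkend 86400` runs above ask openssl whether each certificate expires within the next 24 hours. An equivalent standalone check in Go using crypto/x509 (a sketch for illustration, not minikube's code; the path below is one of the certs from the log):

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the PEM-encoded certificate at path expires
    // within d, the same condition `openssl x509 -checkend <seconds>` tests.
    func expiresWithin(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("%s: no PEM block found", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresWithin("/var/lib/minikube/certs/front-proxy-client.crt", 24*time.Hour)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		return
    	}
    	fmt.Println("expires within 24h:", soon)
    }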
	I0116 03:43:56.593191  507510 kubeadm.go:404] StartCluster: {Name:old-k8s-version-696770 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-696770 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.167 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 03:43:56.593333  507510 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0116 03:43:56.593408  507510 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0116 03:43:56.653791  507510 cri.go:89] found id: ""
	I0116 03:43:56.653899  507510 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0116 03:43:56.667037  507510 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0116 03:43:56.667078  507510 kubeadm.go:636] restartCluster start
	I0116 03:43:56.667164  507510 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0116 03:43:56.679734  507510 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:43:56.681241  507510 kubeconfig.go:92] found "old-k8s-version-696770" server: "https://192.168.61.167:8443"
	I0116 03:43:56.683942  507510 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0116 03:43:56.696409  507510 api_server.go:166] Checking apiserver status ...
	I0116 03:43:56.696507  507510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:43:56.713120  507510 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:43:57.196652  507510 api_server.go:166] Checking apiserver status ...
	I0116 03:43:57.196826  507510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:43:57.213992  507510 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:43:57.697096  507510 api_server.go:166] Checking apiserver status ...
	I0116 03:43:57.697197  507510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:43:57.709671  507510 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:43:58.197291  507510 api_server.go:166] Checking apiserver status ...
	I0116 03:43:58.197401  507510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:43:58.214351  507510 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:43:58.696893  507510 api_server.go:166] Checking apiserver status ...
	I0116 03:43:58.697036  507510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:43:58.714549  507510 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:43:59.197173  507510 api_server.go:166] Checking apiserver status ...
	I0116 03:43:59.197304  507510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:43:59.213885  507510 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
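The repeated "Checking apiserver status" entries above are a roughly 500ms polling loop: `pgrep -xnf kube-apiserver.*minikube.*` keeps exiting with status 1 until the apiserver process reappears after the restart. A minimal Go sketch of that loop (hypothetical helper name, not minikube's implementation):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    // waitForAPIServerPID polls pgrep at the same ~500ms cadence the log shows,
    // until a kube-apiserver PID is found or the timeout elapses.
    func waitForAPIServerPID(timeout time.Duration) (string, error) {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
    		if err == nil && len(strings.TrimSpace(string(out))) > 0 {
    			return strings.TrimSpace(string(out)), nil
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return "", fmt.Errorf("no kube-apiserver process appeared within %s", timeout)
    }

    func main() {
    	pid, err := waitForAPIServerPID(2 * time.Minute)
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	fmt.Println("kube-apiserver pid:", pid)
    }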
	I0116 03:43:56.773238  507339 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.280261968s)
	I0116 03:43:56.773267  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:43:57.046716  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:43:57.123831  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:43:57.221179  507339 api_server.go:52] waiting for apiserver process to appear ...
	I0116 03:43:57.221300  507339 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:43:57.721940  507339 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:43:58.222437  507339 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:43:58.722256  507339 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:43:59.222191  507339 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:43:59.721451  507339 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:43:59.753520  507339 api_server.go:72] duration metric: took 2.532341035s to wait for apiserver process to appear ...
	I0116 03:43:59.753556  507339 api_server.go:88] waiting for apiserver healthz status ...
	I0116 03:43:59.753601  507339 api_server.go:253] Checking apiserver healthz at https://192.168.39.103:8443/healthz ...
	I0116 03:43:59.754176  507339 api_server.go:269] stopped: https://192.168.39.103:8443/healthz: Get "https://192.168.39.103:8443/healthz": dial tcp 192.168.39.103:8443: connect: connection refused
	I0116 03:44:00.253773  507339 api_server.go:253] Checking apiserver healthz at https://192.168.39.103:8443/healthz ...
	I0116 03:43:57.000501  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:43:57.070966  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | unable to find current IP address of domain default-k8s-diff-port-434445 in network mk-default-k8s-diff-port-434445
	I0116 03:43:57.071015  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | I0116 03:43:57.000987  508579 retry.go:31] will retry after 1.963105502s: waiting for machine to come up
	I0116 03:43:58.966528  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:43:58.967131  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | unable to find current IP address of domain default-k8s-diff-port-434445 in network mk-default-k8s-diff-port-434445
	I0116 03:43:58.967173  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | I0116 03:43:58.967069  508579 retry.go:31] will retry after 2.871455928s: waiting for machine to come up
	I0116 03:43:59.697215  507510 api_server.go:166] Checking apiserver status ...
	I0116 03:43:59.697303  507510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:43:59.713992  507510 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:00.196535  507510 api_server.go:166] Checking apiserver status ...
	I0116 03:44:00.196649  507510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:00.212663  507510 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:00.697276  507510 api_server.go:166] Checking apiserver status ...
	I0116 03:44:00.697390  507510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:00.714622  507510 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:01.197125  507510 api_server.go:166] Checking apiserver status ...
	I0116 03:44:01.197242  507510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:01.214976  507510 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:01.696506  507510 api_server.go:166] Checking apiserver status ...
	I0116 03:44:01.696612  507510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:01.708204  507510 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:02.197402  507510 api_server.go:166] Checking apiserver status ...
	I0116 03:44:02.197519  507510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:02.211062  507510 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:02.697230  507510 api_server.go:166] Checking apiserver status ...
	I0116 03:44:02.697358  507510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:02.710340  507510 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:03.196949  507510 api_server.go:166] Checking apiserver status ...
	I0116 03:44:03.197047  507510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:03.213169  507510 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:03.696657  507510 api_server.go:166] Checking apiserver status ...
	I0116 03:44:03.696793  507510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:03.709422  507510 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:04.196970  507510 api_server.go:166] Checking apiserver status ...
	I0116 03:44:04.197083  507510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:04.209280  507510 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:03.473725  507339 api_server.go:279] https://192.168.39.103:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0116 03:44:03.473764  507339 api_server.go:103] status: https://192.168.39.103:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0116 03:44:03.473784  507339 api_server.go:253] Checking apiserver healthz at https://192.168.39.103:8443/healthz ...
	I0116 03:44:03.531825  507339 api_server.go:279] https://192.168.39.103:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0116 03:44:03.531873  507339 api_server.go:103] status: https://192.168.39.103:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0116 03:44:03.754148  507339 api_server.go:253] Checking apiserver healthz at https://192.168.39.103:8443/healthz ...
	I0116 03:44:03.759138  507339 api_server.go:279] https://192.168.39.103:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0116 03:44:03.759171  507339 api_server.go:103] status: https://192.168.39.103:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0116 03:44:04.254321  507339 api_server.go:253] Checking apiserver healthz at https://192.168.39.103:8443/healthz ...
	I0116 03:44:04.259317  507339 api_server.go:279] https://192.168.39.103:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0116 03:44:04.259350  507339 api_server.go:103] status: https://192.168.39.103:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0116 03:44:04.753890  507339 api_server.go:253] Checking apiserver healthz at https://192.168.39.103:8443/healthz ...
	I0116 03:44:04.759714  507339 api_server.go:279] https://192.168.39.103:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0116 03:44:04.759747  507339 api_server.go:103] status: https://192.168.39.103:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0116 03:44:05.254582  507339 api_server.go:253] Checking apiserver healthz at https://192.168.39.103:8443/healthz ...
	I0116 03:44:05.264904  507339 api_server.go:279] https://192.168.39.103:8443/healthz returned 200:
	ok
	I0116 03:44:05.283700  507339 api_server.go:141] control plane version: v1.29.0-rc.2
	I0116 03:44:05.283737  507339 api_server.go:131] duration metric: took 5.53017208s to wait for apiserver health ...
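The healthz probing above shows the typical restart progression: 403 while RBAC bootstrap has not yet run, 500 while some poststarthooks are still failing, and finally 200 "ok". A sketch of such a probe in Go; TLS verification is skipped here purely for brevity (assume a real client would trust the cluster CA instead):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    // probeHealthz issues a single GET against the apiserver /healthz endpoint
    // and returns the status code and body, like the checks in the log above.
    func probeHealthz(url string) (int, string, error) {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		// Insecure for brevity only; the real client should use the cluster CA.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	resp, err := client.Get(url)
    	if err != nil {
    		return 0, "", err
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	return resp.StatusCode, string(body), nil
    }

    func main() {
    	for i := 0; i < 240; i++ { // ~2 minutes at 500ms intervals
    		code, body, err := probeHealthz("https://192.168.39.103:8443/healthz")
    		if err == nil && code == http.StatusOK {
    			fmt.Println("apiserver healthy:", body)
    			return
    		}
    		fmt.Printf("not healthy yet (code=%d, err=%v), retrying...\n", code, err)
    		time.Sleep(500 * time.Millisecond)
    	}
    	fmt.Println("timed out waiting for apiserver health")
    }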
	I0116 03:44:05.283749  507339 cni.go:84] Creating CNI manager for ""
	I0116 03:44:05.283757  507339 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 03:44:05.285715  507339 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0116 03:44:05.287393  507339 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0116 03:44:05.327883  507339 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
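The 457-byte file written to /etc/cni/net.d/1-k8s.conflist above is the bridge CNI configuration. Its exact contents are not shown in the log; the following is only an illustrative example of what a bridge conflist generally looks like, written out from Go:

    package main

    import (
    	"fmt"
    	"os"
    )

    // An illustrative bridge CNI conflist in the spirit of the file the log
    // writes; the exact bytes minikube generates may differ.
    const bridgeConflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge0",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        },
        {
          "type": "portmap",
          "capabilities": { "portMappings": true }
        }
      ]
    }
    `

    func main() {
    	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		return
    	}
    	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }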
	I0116 03:44:05.371856  507339 system_pods.go:43] waiting for kube-system pods to appear ...
	I0116 03:44:05.382614  507339 system_pods.go:59] 8 kube-system pods found
	I0116 03:44:05.382656  507339 system_pods.go:61] "coredns-76f75df574-lr95b" [15dc0b11-f7ec-4729-bbfa-79b9649fbad6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0116 03:44:05.382666  507339 system_pods.go:61] "etcd-no-preload-666547" [f98dcc57-5869-4a35-b443-2e6c9beddd88] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0116 03:44:05.382682  507339 system_pods.go:61] "kube-apiserver-no-preload-666547" [3aceae9d-224a-4e8c-a02d-ad4199d8d558] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0116 03:44:05.382699  507339 system_pods.go:61] "kube-controller-manager-no-preload-666547" [af39c89c-3b41-4d6f-b075-16de04a3ecc0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0116 03:44:05.382706  507339 system_pods.go:61] "kube-proxy-dcmrn" [1e91c96f-cbc5-424d-a09e-06e34bf7a2e2] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0116 03:44:05.382714  507339 system_pods.go:61] "kube-scheduler-no-preload-666547" [0f2aa713-aebc-4858-a348-32169021235e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0116 03:44:05.382723  507339 system_pods.go:61] "metrics-server-57f55c9bc5-78vfj" [dbd2d3b2-ec0f-4253-8549-7c4299522c37] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:44:05.382735  507339 system_pods.go:61] "storage-provisioner" [f4e1ba45-217d-41d5-b583-2f60044879bc] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0116 03:44:05.382749  507339 system_pods.go:74] duration metric: took 10.858851ms to wait for pod list to return data ...
	I0116 03:44:05.382760  507339 node_conditions.go:102] verifying NodePressure condition ...
	I0116 03:44:05.391050  507339 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 03:44:05.391112  507339 node_conditions.go:123] node cpu capacity is 2
	I0116 03:44:05.391128  507339 node_conditions.go:105] duration metric: took 8.361426ms to run NodePressure ...
	I0116 03:44:05.391152  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:44:01.840907  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:01.841317  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | unable to find current IP address of domain default-k8s-diff-port-434445 in network mk-default-k8s-diff-port-434445
	I0116 03:44:01.841361  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | I0116 03:44:01.841259  508579 retry.go:31] will retry after 3.769759015s: waiting for machine to come up
	I0116 03:44:05.613594  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:05.614119  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | unable to find current IP address of domain default-k8s-diff-port-434445 in network mk-default-k8s-diff-port-434445
	I0116 03:44:05.614149  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | I0116 03:44:05.614062  508579 retry.go:31] will retry after 3.5833584s: waiting for machine to come up
	I0116 03:44:05.740205  507339 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0116 03:44:05.745269  507339 kubeadm.go:787] kubelet initialised
	I0116 03:44:05.745297  507339 kubeadm.go:788] duration metric: took 5.059802ms waiting for restarted kubelet to initialise ...
	I0116 03:44:05.745306  507339 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 03:44:05.751403  507339 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-lr95b" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:05.761740  507339 pod_ready.go:97] node "no-preload-666547" hosting pod "coredns-76f75df574-lr95b" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-666547" has status "Ready":"False"
	I0116 03:44:05.761784  507339 pod_ready.go:81] duration metric: took 10.344994ms waiting for pod "coredns-76f75df574-lr95b" in "kube-system" namespace to be "Ready" ...
	E0116 03:44:05.761796  507339 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-666547" hosting pod "coredns-76f75df574-lr95b" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-666547" has status "Ready":"False"
	I0116 03:44:05.761812  507339 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-666547" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:05.767627  507339 pod_ready.go:97] node "no-preload-666547" hosting pod "etcd-no-preload-666547" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-666547" has status "Ready":"False"
	I0116 03:44:05.767657  507339 pod_ready.go:81] duration metric: took 5.831478ms waiting for pod "etcd-no-preload-666547" in "kube-system" namespace to be "Ready" ...
	E0116 03:44:05.767669  507339 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-666547" hosting pod "etcd-no-preload-666547" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-666547" has status "Ready":"False"
	I0116 03:44:05.767677  507339 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-666547" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:05.772833  507339 pod_ready.go:97] node "no-preload-666547" hosting pod "kube-apiserver-no-preload-666547" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-666547" has status "Ready":"False"
	I0116 03:44:05.772863  507339 pod_ready.go:81] duration metric: took 5.17797ms waiting for pod "kube-apiserver-no-preload-666547" in "kube-system" namespace to be "Ready" ...
	E0116 03:44:05.772876  507339 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-666547" hosting pod "kube-apiserver-no-preload-666547" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-666547" has status "Ready":"False"
	I0116 03:44:05.772884  507339 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-666547" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:05.779234  507339 pod_ready.go:97] node "no-preload-666547" hosting pod "kube-controller-manager-no-preload-666547" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-666547" has status "Ready":"False"
	I0116 03:44:05.779259  507339 pod_ready.go:81] duration metric: took 6.362264ms waiting for pod "kube-controller-manager-no-preload-666547" in "kube-system" namespace to be "Ready" ...
	E0116 03:44:05.779270  507339 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-666547" hosting pod "kube-controller-manager-no-preload-666547" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-666547" has status "Ready":"False"
	I0116 03:44:05.779277  507339 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-dcmrn" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:06.175807  507339 pod_ready.go:97] node "no-preload-666547" hosting pod "kube-proxy-dcmrn" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-666547" has status "Ready":"False"
	I0116 03:44:06.175846  507339 pod_ready.go:81] duration metric: took 396.551923ms waiting for pod "kube-proxy-dcmrn" in "kube-system" namespace to be "Ready" ...
	E0116 03:44:06.175859  507339 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-666547" hosting pod "kube-proxy-dcmrn" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-666547" has status "Ready":"False"
	I0116 03:44:06.175867  507339 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-666547" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:06.580068  507339 pod_ready.go:97] node "no-preload-666547" hosting pod "kube-scheduler-no-preload-666547" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-666547" has status "Ready":"False"
	I0116 03:44:06.580102  507339 pod_ready.go:81] duration metric: took 404.226447ms waiting for pod "kube-scheduler-no-preload-666547" in "kube-system" namespace to be "Ready" ...
	E0116 03:44:06.580119  507339 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-666547" hosting pod "kube-scheduler-no-preload-666547" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-666547" has status "Ready":"False"
	I0116 03:44:06.580128  507339 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:06.976542  507339 pod_ready.go:97] node "no-preload-666547" hosting pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-666547" has status "Ready":"False"
	I0116 03:44:06.976573  507339 pod_ready.go:81] duration metric: took 396.432925ms waiting for pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace to be "Ready" ...
	E0116 03:44:06.976590  507339 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-666547" hosting pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-666547" has status "Ready":"False"
	I0116 03:44:06.976596  507339 pod_ready.go:38] duration metric: took 1.231281598s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
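The pod_ready.go lines above poll each system-critical pod for its Ready condition, skipping pods whose node is not yet Ready. An equivalent sketch for a single pod using client-go (an assumption for illustration; minikube uses its own helpers), with the kubeconfig path taken from the log:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // waitPodReady polls a pod's Ready condition, the same condition the
    // pod_ready.go lines above check, until it is true or the timeout expires.
    func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
    		if err == nil {
    			for _, c := range pod.Status.Conditions {
    				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
    					return nil
    				}
    			}
    		}
    		time.Sleep(2 * time.Second)
    	}
    	return fmt.Errorf("pod %s/%s not Ready within %s", ns, name, timeout)
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/17965-468241/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(waitPodReady(cs, "kube-system", "coredns-76f75df574-lr95b", 4*time.Minute))
    }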
	I0116 03:44:06.976621  507339 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0116 03:44:06.988884  507339 ops.go:34] apiserver oom_adj: -16
	I0116 03:44:06.988916  507339 kubeadm.go:640] restartCluster took 21.755069193s
	I0116 03:44:06.988940  507339 kubeadm.go:406] StartCluster complete in 21.811388098s
	I0116 03:44:06.988970  507339 settings.go:142] acquiring lock: {Name:mk3ded99c06d285b174e76622bb0741c8dd2d8fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:44:06.989066  507339 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17965-468241/kubeconfig
	I0116 03:44:06.990912  507339 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17965-468241/kubeconfig: {Name:mkac3c63bbcba9b4fa1bea3480797757d703fb9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:44:06.991191  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0116 03:44:06.991241  507339 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0116 03:44:06.991341  507339 addons.go:69] Setting storage-provisioner=true in profile "no-preload-666547"
	I0116 03:44:06.991362  507339 addons.go:234] Setting addon storage-provisioner=true in "no-preload-666547"
	I0116 03:44:06.991364  507339 addons.go:69] Setting default-storageclass=true in profile "no-preload-666547"
	W0116 03:44:06.991370  507339 addons.go:243] addon storage-provisioner should already be in state true
	I0116 03:44:06.991388  507339 addons.go:69] Setting metrics-server=true in profile "no-preload-666547"
	I0116 03:44:06.991397  507339 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-666547"
	I0116 03:44:06.991404  507339 addons.go:234] Setting addon metrics-server=true in "no-preload-666547"
	W0116 03:44:06.991412  507339 addons.go:243] addon metrics-server should already be in state true
	I0116 03:44:06.991438  507339 host.go:66] Checking if "no-preload-666547" exists ...
	I0116 03:44:06.991451  507339 host.go:66] Checking if "no-preload-666547" exists ...
	I0116 03:44:06.991460  507339 config.go:182] Loaded profile config "no-preload-666547": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0116 03:44:06.991855  507339 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:44:06.991855  507339 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:44:06.991893  507339 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:44:06.991858  507339 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:44:06.991940  507339 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:44:06.991976  507339 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:44:06.998037  507339 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-666547" context rescaled to 1 replicas
	I0116 03:44:06.998086  507339 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.103 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0116 03:44:07.000312  507339 out.go:177] * Verifying Kubernetes components...
	I0116 03:44:07.001889  507339 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 03:44:07.009057  507339 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41459
	I0116 03:44:07.009097  507339 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38693
	I0116 03:44:07.009596  507339 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:44:07.009735  507339 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:44:07.010178  507339 main.go:141] libmachine: Using API Version  1
	I0116 03:44:07.010195  507339 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:44:07.010368  507339 main.go:141] libmachine: Using API Version  1
	I0116 03:44:07.010392  507339 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:44:07.010412  507339 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33905
	I0116 03:44:07.010763  507339 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:44:07.010822  507339 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:44:07.010829  507339 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:44:07.010945  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetState
	I0116 03:44:07.011314  507339 main.go:141] libmachine: Using API Version  1
	I0116 03:44:07.011346  507339 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:44:07.011955  507339 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:44:07.011956  507339 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:44:07.012052  507339 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:44:07.012511  507339 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:44:07.012547  507339 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:44:07.015214  507339 addons.go:234] Setting addon default-storageclass=true in "no-preload-666547"
	W0116 03:44:07.015237  507339 addons.go:243] addon default-storageclass should already be in state true
	I0116 03:44:07.015269  507339 host.go:66] Checking if "no-preload-666547" exists ...
	I0116 03:44:07.015718  507339 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:44:07.015772  507339 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:44:07.029747  507339 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32807
	I0116 03:44:07.029990  507339 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35757
	I0116 03:44:07.030392  507339 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:44:07.030448  507339 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:44:07.030948  507339 main.go:141] libmachine: Using API Version  1
	I0116 03:44:07.030970  507339 main.go:141] libmachine: Using API Version  1
	I0116 03:44:07.030986  507339 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:44:07.031046  507339 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:44:07.031393  507339 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:44:07.031443  507339 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:44:07.031603  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetState
	I0116 03:44:07.031660  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetState
	I0116 03:44:07.033898  507339 main.go:141] libmachine: (no-preload-666547) Calling .DriverName
	I0116 03:44:07.033990  507339 main.go:141] libmachine: (no-preload-666547) Calling .DriverName
	I0116 03:44:07.036581  507339 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0116 03:44:07.034407  507339 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38687
	I0116 03:44:07.038382  507339 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0116 03:44:07.038420  507339 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0116 03:44:07.038444  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHHostname
	I0116 03:44:07.038499  507339 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 03:44:07.040190  507339 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0116 03:44:07.040211  507339 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0116 03:44:07.040232  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHHostname
	I0116 03:44:07.039010  507339 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:44:07.040908  507339 main.go:141] libmachine: Using API Version  1
	I0116 03:44:07.040931  507339 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:44:07.041538  507339 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:44:07.042268  507339 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:44:07.042319  507339 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:44:07.043270  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:44:07.043665  507339 main.go:141] libmachine: (no-preload-666547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:5f:03", ip: ""} in network mk-no-preload-666547: {Iface:virbr2 ExpiryTime:2024-01-16 04:43:15 +0000 UTC Type:0 Mac:52:54:00:4e:5f:03 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:no-preload-666547 Clientid:01:52:54:00:4e:5f:03}
	I0116 03:44:07.043697  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined IP address 192.168.39.103 and MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:44:07.043730  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:44:07.043966  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHPort
	I0116 03:44:07.044196  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHKeyPath
	I0116 03:44:07.044381  507339 main.go:141] libmachine: (no-preload-666547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:5f:03", ip: ""} in network mk-no-preload-666547: {Iface:virbr2 ExpiryTime:2024-01-16 04:43:15 +0000 UTC Type:0 Mac:52:54:00:4e:5f:03 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:no-preload-666547 Clientid:01:52:54:00:4e:5f:03}
	I0116 03:44:07.044422  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined IP address 192.168.39.103 and MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:44:07.044456  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHUsername
	I0116 03:44:07.044566  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHPort
	I0116 03:44:07.044691  507339 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/no-preload-666547/id_rsa Username:docker}
	I0116 03:44:07.044716  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHKeyPath
	I0116 03:44:07.044878  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHUsername
	I0116 03:44:07.045028  507339 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/no-preload-666547/id_rsa Username:docker}
	I0116 03:44:07.084507  507339 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46191
	I0116 03:44:07.085014  507339 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:44:07.085601  507339 main.go:141] libmachine: Using API Version  1
	I0116 03:44:07.085636  507339 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:44:07.086005  507339 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:44:07.086202  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetState
	I0116 03:44:07.088199  507339 main.go:141] libmachine: (no-preload-666547) Calling .DriverName
	I0116 03:44:07.088513  507339 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0116 03:44:07.088532  507339 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0116 03:44:07.088555  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHHostname
	I0116 03:44:07.092194  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:44:07.092719  507339 main.go:141] libmachine: (no-preload-666547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:5f:03", ip: ""} in network mk-no-preload-666547: {Iface:virbr2 ExpiryTime:2024-01-16 04:43:15 +0000 UTC Type:0 Mac:52:54:00:4e:5f:03 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:no-preload-666547 Clientid:01:52:54:00:4e:5f:03}
	I0116 03:44:07.092745  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined IP address 192.168.39.103 and MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:44:07.092953  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHPort
	I0116 03:44:07.093219  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHKeyPath
	I0116 03:44:07.093384  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHUsername
	I0116 03:44:07.093590  507339 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/no-preload-666547/id_rsa Username:docker}
	I0116 03:44:07.196191  507339 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0116 03:44:07.196219  507339 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0116 03:44:07.201036  507339 node_ready.go:35] waiting up to 6m0s for node "no-preload-666547" to be "Ready" ...
	I0116 03:44:07.201055  507339 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0116 03:44:07.222924  507339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0116 03:44:07.224548  507339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0116 03:44:07.237091  507339 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0116 03:44:07.237119  507339 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0116 03:44:07.289312  507339 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0116 03:44:07.289342  507339 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0116 03:44:07.334708  507339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0116 03:44:07.583740  507339 main.go:141] libmachine: Making call to close driver server
	I0116 03:44:07.583773  507339 main.go:141] libmachine: (no-preload-666547) Calling .Close
	I0116 03:44:07.584079  507339 main.go:141] libmachine: (no-preload-666547) DBG | Closing plugin on server side
	I0116 03:44:07.584135  507339 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:44:07.584146  507339 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:44:07.584155  507339 main.go:141] libmachine: Making call to close driver server
	I0116 03:44:07.584170  507339 main.go:141] libmachine: (no-preload-666547) Calling .Close
	I0116 03:44:07.584405  507339 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:44:07.584423  507339 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:44:07.592304  507339 main.go:141] libmachine: Making call to close driver server
	I0116 03:44:07.592332  507339 main.go:141] libmachine: (no-preload-666547) Calling .Close
	I0116 03:44:07.592608  507339 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:44:07.592656  507339 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:44:07.592663  507339 main.go:141] libmachine: (no-preload-666547) DBG | Closing plugin on server side
	I0116 03:44:08.290558  507339 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.065965685s)
	I0116 03:44:08.290643  507339 main.go:141] libmachine: Making call to close driver server
	I0116 03:44:08.290665  507339 main.go:141] libmachine: (no-preload-666547) Calling .Close
	I0116 03:44:08.291042  507339 main.go:141] libmachine: (no-preload-666547) DBG | Closing plugin on server side
	I0116 03:44:08.291103  507339 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:44:08.291121  507339 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:44:08.291136  507339 main.go:141] libmachine: Making call to close driver server
	I0116 03:44:08.291147  507339 main.go:141] libmachine: (no-preload-666547) Calling .Close
	I0116 03:44:08.291380  507339 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:44:08.291396  507339 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:44:08.291416  507339 main.go:141] libmachine: (no-preload-666547) DBG | Closing plugin on server side
	I0116 03:44:08.468146  507339 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.133348135s)
	I0116 03:44:08.468223  507339 main.go:141] libmachine: Making call to close driver server
	I0116 03:44:08.468248  507339 main.go:141] libmachine: (no-preload-666547) Calling .Close
	I0116 03:44:08.470360  507339 main.go:141] libmachine: (no-preload-666547) DBG | Closing plugin on server side
	I0116 03:44:08.470367  507339 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:44:08.470397  507339 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:44:08.470412  507339 main.go:141] libmachine: Making call to close driver server
	I0116 03:44:08.470423  507339 main.go:141] libmachine: (no-preload-666547) Calling .Close
	I0116 03:44:08.470734  507339 main.go:141] libmachine: (no-preload-666547) DBG | Closing plugin on server side
	I0116 03:44:08.470749  507339 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:44:08.470764  507339 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:44:08.470776  507339 addons.go:470] Verifying addon metrics-server=true in "no-preload-666547"
	I0116 03:44:08.473092  507339 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0116 03:44:04.697359  507510 api_server.go:166] Checking apiserver status ...
	I0116 03:44:04.697510  507510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:04.714690  507510 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:05.197225  507510 api_server.go:166] Checking apiserver status ...
	I0116 03:44:05.197333  507510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:05.213923  507510 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:05.696541  507510 api_server.go:166] Checking apiserver status ...
	I0116 03:44:05.696632  507510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:05.713744  507510 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:06.197249  507510 api_server.go:166] Checking apiserver status ...
	I0116 03:44:06.197369  507510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:06.209148  507510 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:06.696967  507510 api_server.go:166] Checking apiserver status ...
	I0116 03:44:06.697083  507510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:06.709624  507510 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:06.709656  507510 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0116 03:44:06.709665  507510 kubeadm.go:1135] stopping kube-system containers ...
	I0116 03:44:06.709676  507510 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0116 03:44:06.709736  507510 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0116 03:44:06.753286  507510 cri.go:89] found id: ""
	I0116 03:44:06.753370  507510 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0116 03:44:06.769990  507510 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0116 03:44:06.781090  507510 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0116 03:44:06.781168  507510 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0116 03:44:06.790936  507510 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0116 03:44:06.790971  507510 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:44:06.915790  507510 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:44:08.112494  507510 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.196668404s)
	I0116 03:44:08.112528  507510 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:44:08.328365  507510 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:44:08.435410  507510 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:44:08.576950  507510 api_server.go:52] waiting for apiserver process to appear ...
	I0116 03:44:08.577077  507510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:44:09.077263  507510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:44:08.474544  507339 addons.go:505] enable addons completed in 1.483307386s: enabled=[default-storageclass storage-provisioner metrics-server]
	I0116 03:44:09.206584  507339 node_ready.go:58] node "no-preload-666547" has status "Ready":"False"
	I0116 03:44:10.997580  507257 start.go:369] acquired machines lock for "embed-certs-615980" in 1m2.194717115s
	I0116 03:44:10.997669  507257 start.go:96] Skipping create...Using existing machine configuration
	I0116 03:44:10.997681  507257 fix.go:54] fixHost starting: 
	I0116 03:44:10.998101  507257 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:44:10.998135  507257 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:44:11.017060  507257 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38343
	I0116 03:44:11.017687  507257 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:44:11.018295  507257 main.go:141] libmachine: Using API Version  1
	I0116 03:44:11.018326  507257 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:44:11.018673  507257 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:44:11.018879  507257 main.go:141] libmachine: (embed-certs-615980) Calling .DriverName
	I0116 03:44:11.019056  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetState
	I0116 03:44:11.021360  507257 fix.go:102] recreateIfNeeded on embed-certs-615980: state=Stopped err=<nil>
	I0116 03:44:11.021396  507257 main.go:141] libmachine: (embed-certs-615980) Calling .DriverName
	W0116 03:44:11.021577  507257 fix.go:128] unexpected machine state, will restart: <nil>
	I0116 03:44:11.023462  507257 out.go:177] * Restarting existing kvm2 VM for "embed-certs-615980" ...
	I0116 03:44:11.025158  507257 main.go:141] libmachine: (embed-certs-615980) Calling .Start
	I0116 03:44:11.025397  507257 main.go:141] libmachine: (embed-certs-615980) Ensuring networks are active...
	I0116 03:44:11.026354  507257 main.go:141] libmachine: (embed-certs-615980) Ensuring network default is active
	I0116 03:44:11.026830  507257 main.go:141] libmachine: (embed-certs-615980) Ensuring network mk-embed-certs-615980 is active
	I0116 03:44:11.027263  507257 main.go:141] libmachine: (embed-certs-615980) Getting domain xml...
	I0116 03:44:11.028182  507257 main.go:141] libmachine: (embed-certs-615980) Creating domain...
	I0116 03:44:09.198824  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:09.199284  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Found IP for machine: 192.168.50.236
	I0116 03:44:09.199318  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Reserving static IP address...
	I0116 03:44:09.199348  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has current primary IP address 192.168.50.236 and MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:09.199756  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-434445", mac: "52:54:00:78:ea:d5", ip: "192.168.50.236"} in network mk-default-k8s-diff-port-434445: {Iface:virbr1 ExpiryTime:2024-01-16 04:44:01 +0000 UTC Type:0 Mac:52:54:00:78:ea:d5 Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:default-k8s-diff-port-434445 Clientid:01:52:54:00:78:ea:d5}
	I0116 03:44:09.199781  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | skip adding static IP to network mk-default-k8s-diff-port-434445 - found existing host DHCP lease matching {name: "default-k8s-diff-port-434445", mac: "52:54:00:78:ea:d5", ip: "192.168.50.236"}
	I0116 03:44:09.199794  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Reserved static IP address: 192.168.50.236
	I0116 03:44:09.199808  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Waiting for SSH to be available...
	I0116 03:44:09.199832  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | Getting to WaitForSSH function...
	I0116 03:44:09.202093  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:09.202494  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ea:d5", ip: ""} in network mk-default-k8s-diff-port-434445: {Iface:virbr1 ExpiryTime:2024-01-16 04:44:01 +0000 UTC Type:0 Mac:52:54:00:78:ea:d5 Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:default-k8s-diff-port-434445 Clientid:01:52:54:00:78:ea:d5}
	I0116 03:44:09.202529  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined IP address 192.168.50.236 and MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:09.202664  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | Using SSH client type: external
	I0116 03:44:09.202690  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | Using SSH private key: /home/jenkins/minikube-integration/17965-468241/.minikube/machines/default-k8s-diff-port-434445/id_rsa (-rw-------)
	I0116 03:44:09.202723  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.236 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17965-468241/.minikube/machines/default-k8s-diff-port-434445/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0116 03:44:09.202746  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | About to run SSH command:
	I0116 03:44:09.202763  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | exit 0
	I0116 03:44:09.302425  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | SSH cmd err, output: <nil>: 
	I0116 03:44:09.302867  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetConfigRaw
	I0116 03:44:09.303666  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetIP
	I0116 03:44:09.306482  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:09.306884  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ea:d5", ip: ""} in network mk-default-k8s-diff-port-434445: {Iface:virbr1 ExpiryTime:2024-01-16 04:44:01 +0000 UTC Type:0 Mac:52:54:00:78:ea:d5 Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:default-k8s-diff-port-434445 Clientid:01:52:54:00:78:ea:d5}
	I0116 03:44:09.306920  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined IP address 192.168.50.236 and MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:09.307189  507889 profile.go:148] Saving config to /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/default-k8s-diff-port-434445/config.json ...
	I0116 03:44:09.307418  507889 machine.go:88] provisioning docker machine ...
	I0116 03:44:09.307437  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .DriverName
	I0116 03:44:09.307673  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetMachineName
	I0116 03:44:09.307865  507889 buildroot.go:166] provisioning hostname "default-k8s-diff-port-434445"
	I0116 03:44:09.307886  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetMachineName
	I0116 03:44:09.308073  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHHostname
	I0116 03:44:09.310375  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:09.310726  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ea:d5", ip: ""} in network mk-default-k8s-diff-port-434445: {Iface:virbr1 ExpiryTime:2024-01-16 04:44:01 +0000 UTC Type:0 Mac:52:54:00:78:ea:d5 Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:default-k8s-diff-port-434445 Clientid:01:52:54:00:78:ea:d5}
	I0116 03:44:09.310765  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined IP address 192.168.50.236 and MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:09.310920  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHPort
	I0116 03:44:09.311111  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHKeyPath
	I0116 03:44:09.311231  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHKeyPath
	I0116 03:44:09.311384  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHUsername
	I0116 03:44:09.311528  507889 main.go:141] libmachine: Using SSH client type: native
	I0116 03:44:09.311932  507889 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.50.236 22 <nil> <nil>}
	I0116 03:44:09.311949  507889 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-434445 && echo "default-k8s-diff-port-434445" | sudo tee /etc/hostname
	I0116 03:44:09.469340  507889 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-434445
	
	I0116 03:44:09.469384  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHHostname
	I0116 03:44:09.472788  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:09.473108  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ea:d5", ip: ""} in network mk-default-k8s-diff-port-434445: {Iface:virbr1 ExpiryTime:2024-01-16 04:44:01 +0000 UTC Type:0 Mac:52:54:00:78:ea:d5 Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:default-k8s-diff-port-434445 Clientid:01:52:54:00:78:ea:d5}
	I0116 03:44:09.473166  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined IP address 192.168.50.236 and MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:09.473353  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHPort
	I0116 03:44:09.473571  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHKeyPath
	I0116 03:44:09.473768  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHKeyPath
	I0116 03:44:09.473963  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHUsername
	I0116 03:44:09.474171  507889 main.go:141] libmachine: Using SSH client type: native
	I0116 03:44:09.474626  507889 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.50.236 22 <nil> <nil>}
	I0116 03:44:09.474657  507889 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-434445' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-434445/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-434445' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0116 03:44:09.622177  507889 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0116 03:44:09.622223  507889 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17965-468241/.minikube CaCertPath:/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17965-468241/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17965-468241/.minikube}
	I0116 03:44:09.622253  507889 buildroot.go:174] setting up certificates
	I0116 03:44:09.622267  507889 provision.go:83] configureAuth start
	I0116 03:44:09.622280  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetMachineName
	I0116 03:44:09.622649  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetIP
	I0116 03:44:09.625970  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:09.626394  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ea:d5", ip: ""} in network mk-default-k8s-diff-port-434445: {Iface:virbr1 ExpiryTime:2024-01-16 04:44:01 +0000 UTC Type:0 Mac:52:54:00:78:ea:d5 Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:default-k8s-diff-port-434445 Clientid:01:52:54:00:78:ea:d5}
	I0116 03:44:09.626429  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined IP address 192.168.50.236 and MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:09.626603  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHHostname
	I0116 03:44:09.629623  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:09.630022  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ea:d5", ip: ""} in network mk-default-k8s-diff-port-434445: {Iface:virbr1 ExpiryTime:2024-01-16 04:44:01 +0000 UTC Type:0 Mac:52:54:00:78:ea:d5 Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:default-k8s-diff-port-434445 Clientid:01:52:54:00:78:ea:d5}
	I0116 03:44:09.630052  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined IP address 192.168.50.236 and MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:09.630263  507889 provision.go:138] copyHostCerts
	I0116 03:44:09.630354  507889 exec_runner.go:144] found /home/jenkins/minikube-integration/17965-468241/.minikube/ca.pem, removing ...
	I0116 03:44:09.630370  507889 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17965-468241/.minikube/ca.pem
	I0116 03:44:09.630449  507889 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17965-468241/.minikube/ca.pem (1078 bytes)
	I0116 03:44:09.630603  507889 exec_runner.go:144] found /home/jenkins/minikube-integration/17965-468241/.minikube/cert.pem, removing ...
	I0116 03:44:09.630626  507889 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17965-468241/.minikube/cert.pem
	I0116 03:44:09.630661  507889 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17965-468241/.minikube/cert.pem (1123 bytes)
	I0116 03:44:09.630760  507889 exec_runner.go:144] found /home/jenkins/minikube-integration/17965-468241/.minikube/key.pem, removing ...
	I0116 03:44:09.630775  507889 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17965-468241/.minikube/key.pem
	I0116 03:44:09.630805  507889 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17965-468241/.minikube/key.pem (1679 bytes)
	I0116 03:44:09.630891  507889 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17965-468241/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-434445 san=[192.168.50.236 192.168.50.236 localhost 127.0.0.1 minikube default-k8s-diff-port-434445]
	I0116 03:44:10.127058  507889 provision.go:172] copyRemoteCerts
	I0116 03:44:10.127138  507889 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0116 03:44:10.127175  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHHostname
	I0116 03:44:10.130572  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:10.131095  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ea:d5", ip: ""} in network mk-default-k8s-diff-port-434445: {Iface:virbr1 ExpiryTime:2024-01-16 04:44:01 +0000 UTC Type:0 Mac:52:54:00:78:ea:d5 Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:default-k8s-diff-port-434445 Clientid:01:52:54:00:78:ea:d5}
	I0116 03:44:10.131133  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined IP address 192.168.50.236 and MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:10.131313  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHPort
	I0116 03:44:10.131590  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHKeyPath
	I0116 03:44:10.131825  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHUsername
	I0116 03:44:10.132001  507889 sshutil.go:53] new ssh client: &{IP:192.168.50.236 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/default-k8s-diff-port-434445/id_rsa Username:docker}
	I0116 03:44:10.238263  507889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0116 03:44:10.269567  507889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0116 03:44:10.295065  507889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0116 03:44:10.323347  507889 provision.go:86] duration metric: configureAuth took 701.062063ms
	I0116 03:44:10.323391  507889 buildroot.go:189] setting minikube options for container-runtime
	I0116 03:44:10.323667  507889 config.go:182] Loaded profile config "default-k8s-diff-port-434445": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 03:44:10.323774  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHHostname
	I0116 03:44:10.326825  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:10.327222  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ea:d5", ip: ""} in network mk-default-k8s-diff-port-434445: {Iface:virbr1 ExpiryTime:2024-01-16 04:44:01 +0000 UTC Type:0 Mac:52:54:00:78:ea:d5 Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:default-k8s-diff-port-434445 Clientid:01:52:54:00:78:ea:d5}
	I0116 03:44:10.327266  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined IP address 192.168.50.236 and MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:10.327423  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHPort
	I0116 03:44:10.327682  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHKeyPath
	I0116 03:44:10.327883  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHKeyPath
	I0116 03:44:10.328077  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHUsername
	I0116 03:44:10.328269  507889 main.go:141] libmachine: Using SSH client type: native
	I0116 03:44:10.328743  507889 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.50.236 22 <nil> <nil>}
	I0116 03:44:10.328778  507889 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0116 03:44:10.700188  507889 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0116 03:44:10.700221  507889 machine.go:91] provisioned docker machine in 1.392790129s
	I0116 03:44:10.700232  507889 start.go:300] post-start starting for "default-k8s-diff-port-434445" (driver="kvm2")
	I0116 03:44:10.700244  507889 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0116 03:44:10.700261  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .DriverName
	I0116 03:44:10.700745  507889 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0116 03:44:10.700786  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHHostname
	I0116 03:44:10.704466  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:10.705001  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ea:d5", ip: ""} in network mk-default-k8s-diff-port-434445: {Iface:virbr1 ExpiryTime:2024-01-16 04:44:01 +0000 UTC Type:0 Mac:52:54:00:78:ea:d5 Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:default-k8s-diff-port-434445 Clientid:01:52:54:00:78:ea:d5}
	I0116 03:44:10.705045  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined IP address 192.168.50.236 and MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:10.705278  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHPort
	I0116 03:44:10.705503  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHKeyPath
	I0116 03:44:10.705735  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHUsername
	I0116 03:44:10.705912  507889 sshutil.go:53] new ssh client: &{IP:192.168.50.236 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/default-k8s-diff-port-434445/id_rsa Username:docker}
	I0116 03:44:10.807625  507889 ssh_runner.go:195] Run: cat /etc/os-release
	I0116 03:44:10.813392  507889 info.go:137] Remote host: Buildroot 2021.02.12
	I0116 03:44:10.813428  507889 filesync.go:126] Scanning /home/jenkins/minikube-integration/17965-468241/.minikube/addons for local assets ...
	I0116 03:44:10.813519  507889 filesync.go:126] Scanning /home/jenkins/minikube-integration/17965-468241/.minikube/files for local assets ...
	I0116 03:44:10.813596  507889 filesync.go:149] local asset: /home/jenkins/minikube-integration/17965-468241/.minikube/files/etc/ssl/certs/4754782.pem -> 4754782.pem in /etc/ssl/certs
	I0116 03:44:10.813687  507889 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0116 03:44:10.824028  507889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/files/etc/ssl/certs/4754782.pem --> /etc/ssl/certs/4754782.pem (1708 bytes)
	I0116 03:44:10.853453  507889 start.go:303] post-start completed in 153.201453ms
	I0116 03:44:10.853493  507889 fix.go:56] fixHost completed within 22.144172966s
	I0116 03:44:10.853543  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHHostname
	I0116 03:44:10.856529  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:10.856907  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ea:d5", ip: ""} in network mk-default-k8s-diff-port-434445: {Iface:virbr1 ExpiryTime:2024-01-16 04:44:01 +0000 UTC Type:0 Mac:52:54:00:78:ea:d5 Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:default-k8s-diff-port-434445 Clientid:01:52:54:00:78:ea:d5}
	I0116 03:44:10.856967  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined IP address 192.168.50.236 and MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:10.857185  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHPort
	I0116 03:44:10.857438  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHKeyPath
	I0116 03:44:10.857636  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHKeyPath
	I0116 03:44:10.857790  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHUsername
	I0116 03:44:10.857974  507889 main.go:141] libmachine: Using SSH client type: native
	I0116 03:44:10.858502  507889 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.50.236 22 <nil> <nil>}
	I0116 03:44:10.858528  507889 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0116 03:44:10.997398  507889 main.go:141] libmachine: SSH cmd err, output: <nil>: 1705376650.933903671
	
	I0116 03:44:10.997426  507889 fix.go:206] guest clock: 1705376650.933903671
	I0116 03:44:10.997436  507889 fix.go:219] Guest: 2024-01-16 03:44:10.933903671 +0000 UTC Remote: 2024-01-16 03:44:10.853498317 +0000 UTC m=+234.302480786 (delta=80.405354ms)
	I0116 03:44:10.997464  507889 fix.go:190] guest clock delta is within tolerance: 80.405354ms
	I0116 03:44:10.997471  507889 start.go:83] releasing machines lock for "default-k8s-diff-port-434445", held for 22.288188395s
	I0116 03:44:10.997517  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .DriverName
	I0116 03:44:10.997857  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetIP
	I0116 03:44:11.001410  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:11.001814  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ea:d5", ip: ""} in network mk-default-k8s-diff-port-434445: {Iface:virbr1 ExpiryTime:2024-01-16 04:44:01 +0000 UTC Type:0 Mac:52:54:00:78:ea:d5 Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:default-k8s-diff-port-434445 Clientid:01:52:54:00:78:ea:d5}
	I0116 03:44:11.001864  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined IP address 192.168.50.236 and MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:11.002016  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .DriverName
	I0116 03:44:11.002649  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .DriverName
	I0116 03:44:11.002923  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .DriverName
	I0116 03:44:11.003015  507889 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0116 03:44:11.003068  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHHostname
	I0116 03:44:11.003258  507889 ssh_runner.go:195] Run: cat /version.json
	I0116 03:44:11.003294  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHHostname
	I0116 03:44:11.006383  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:11.006699  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:11.006800  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ea:d5", ip: ""} in network mk-default-k8s-diff-port-434445: {Iface:virbr1 ExpiryTime:2024-01-16 04:44:01 +0000 UTC Type:0 Mac:52:54:00:78:ea:d5 Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:default-k8s-diff-port-434445 Clientid:01:52:54:00:78:ea:d5}
	I0116 03:44:11.006850  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined IP address 192.168.50.236 and MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:11.007123  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHPort
	I0116 03:44:11.007230  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ea:d5", ip: ""} in network mk-default-k8s-diff-port-434445: {Iface:virbr1 ExpiryTime:2024-01-16 04:44:01 +0000 UTC Type:0 Mac:52:54:00:78:ea:d5 Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:default-k8s-diff-port-434445 Clientid:01:52:54:00:78:ea:d5}
	I0116 03:44:11.007330  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined IP address 192.168.50.236 and MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:11.007353  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHKeyPath
	I0116 03:44:11.007378  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHPort
	I0116 03:44:11.007585  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHUsername
	I0116 03:44:11.007597  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHKeyPath
	I0116 03:44:11.007737  507889 sshutil.go:53] new ssh client: &{IP:192.168.50.236 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/default-k8s-diff-port-434445/id_rsa Username:docker}
	I0116 03:44:11.007795  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHUsername
	I0116 03:44:11.007980  507889 sshutil.go:53] new ssh client: &{IP:192.168.50.236 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/default-k8s-diff-port-434445/id_rsa Username:docker}
	I0116 03:44:11.139882  507889 ssh_runner.go:195] Run: systemctl --version
	I0116 03:44:11.147082  507889 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0116 03:44:11.317582  507889 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0116 03:44:11.324567  507889 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0116 03:44:11.324656  507889 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0116 03:44:11.348193  507889 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0116 03:44:11.348225  507889 start.go:475] detecting cgroup driver to use...
	I0116 03:44:11.348319  507889 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0116 03:44:11.367049  507889 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0116 03:44:11.386632  507889 docker.go:217] disabling cri-docker service (if available) ...
	I0116 03:44:11.386713  507889 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0116 03:44:11.409551  507889 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0116 03:44:11.424599  507889 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0116 03:44:11.586480  507889 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0116 03:44:11.733770  507889 docker.go:233] disabling docker service ...
	I0116 03:44:11.733855  507889 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0116 03:44:11.751184  507889 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0116 03:44:11.766970  507889 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0116 03:44:11.903645  507889 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0116 03:44:12.017100  507889 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0116 03:44:12.031725  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0116 03:44:12.052091  507889 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0116 03:44:12.052179  507889 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 03:44:12.063115  507889 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0116 03:44:12.063219  507889 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 03:44:12.073109  507889 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 03:44:12.083438  507889 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 03:44:12.095783  507889 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0116 03:44:12.107816  507889 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0116 03:44:12.117997  507889 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0116 03:44:12.118077  507889 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0116 03:44:12.132997  507889 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0116 03:44:12.145200  507889 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0116 03:44:12.266786  507889 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0116 03:44:12.460779  507889 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0116 03:44:12.460892  507889 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0116 03:44:12.469200  507889 start.go:543] Will wait 60s for crictl version
	I0116 03:44:12.469305  507889 ssh_runner.go:195] Run: which crictl
	I0116 03:44:12.473761  507889 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0116 03:44:12.536262  507889 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0116 03:44:12.536382  507889 ssh_runner.go:195] Run: crio --version
	I0116 03:44:12.593212  507889 ssh_runner.go:195] Run: crio --version
	I0116 03:44:12.650197  507889 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0116 03:44:09.577389  507510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:44:10.077774  507510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:44:10.578076  507510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:44:10.613091  507510 api_server.go:72] duration metric: took 2.036140794s to wait for apiserver process to appear ...
	I0116 03:44:10.613124  507510 api_server.go:88] waiting for apiserver healthz status ...
	I0116 03:44:10.613148  507510 api_server.go:253] Checking apiserver healthz at https://192.168.61.167:8443/healthz ...
	I0116 03:44:11.706731  507339 node_ready.go:58] node "no-preload-666547" has status "Ready":"False"
	I0116 03:44:13.713926  507339 node_ready.go:49] node "no-preload-666547" has status "Ready":"True"
	I0116 03:44:13.713958  507339 node_ready.go:38] duration metric: took 6.512893933s waiting for node "no-preload-666547" to be "Ready" ...
	I0116 03:44:13.713972  507339 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 03:44:13.727930  507339 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-lr95b" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:14.740352  507339 pod_ready.go:92] pod "coredns-76f75df574-lr95b" in "kube-system" namespace has status "Ready":"True"
	I0116 03:44:14.740392  507339 pod_ready.go:81] duration metric: took 1.012371035s waiting for pod "coredns-76f75df574-lr95b" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:14.740408  507339 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-666547" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:11.442223  507257 main.go:141] libmachine: (embed-certs-615980) Waiting to get IP...
	I0116 03:44:11.443346  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:11.443787  507257 main.go:141] libmachine: (embed-certs-615980) DBG | unable to find current IP address of domain embed-certs-615980 in network mk-embed-certs-615980
	I0116 03:44:11.443851  507257 main.go:141] libmachine: (embed-certs-615980) DBG | I0116 03:44:11.443761  508731 retry.go:31] will retry after 306.7144ms: waiting for machine to come up
	I0116 03:44:11.752574  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:11.753186  507257 main.go:141] libmachine: (embed-certs-615980) DBG | unable to find current IP address of domain embed-certs-615980 in network mk-embed-certs-615980
	I0116 03:44:11.753217  507257 main.go:141] libmachine: (embed-certs-615980) DBG | I0116 03:44:11.753126  508731 retry.go:31] will retry after 270.011585ms: waiting for machine to come up
	I0116 03:44:12.024942  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:12.025507  507257 main.go:141] libmachine: (embed-certs-615980) DBG | unable to find current IP address of domain embed-certs-615980 in network mk-embed-certs-615980
	I0116 03:44:12.025548  507257 main.go:141] libmachine: (embed-certs-615980) DBG | I0116 03:44:12.025433  508731 retry.go:31] will retry after 328.680313ms: waiting for machine to come up
	I0116 03:44:12.355989  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:12.356557  507257 main.go:141] libmachine: (embed-certs-615980) DBG | unable to find current IP address of domain embed-certs-615980 in network mk-embed-certs-615980
	I0116 03:44:12.356582  507257 main.go:141] libmachine: (embed-certs-615980) DBG | I0116 03:44:12.356493  508731 retry.go:31] will retry after 598.194786ms: waiting for machine to come up
	I0116 03:44:12.956170  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:12.956754  507257 main.go:141] libmachine: (embed-certs-615980) DBG | unable to find current IP address of domain embed-certs-615980 in network mk-embed-certs-615980
	I0116 03:44:12.956782  507257 main.go:141] libmachine: (embed-certs-615980) DBG | I0116 03:44:12.956673  508731 retry.go:31] will retry after 713.891978ms: waiting for machine to come up
	I0116 03:44:13.672728  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:13.673741  507257 main.go:141] libmachine: (embed-certs-615980) DBG | unable to find current IP address of domain embed-certs-615980 in network mk-embed-certs-615980
	I0116 03:44:13.673772  507257 main.go:141] libmachine: (embed-certs-615980) DBG | I0116 03:44:13.673636  508731 retry.go:31] will retry after 789.579297ms: waiting for machine to come up
	I0116 03:44:14.464913  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:14.465532  507257 main.go:141] libmachine: (embed-certs-615980) DBG | unable to find current IP address of domain embed-certs-615980 in network mk-embed-certs-615980
	I0116 03:44:14.465567  507257 main.go:141] libmachine: (embed-certs-615980) DBG | I0116 03:44:14.465446  508731 retry.go:31] will retry after 744.319122ms: waiting for machine to come up
	I0116 03:44:15.211748  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:15.212356  507257 main.go:141] libmachine: (embed-certs-615980) DBG | unable to find current IP address of domain embed-certs-615980 in network mk-embed-certs-615980
	I0116 03:44:15.212389  507257 main.go:141] libmachine: (embed-certs-615980) DBG | I0116 03:44:15.212282  508731 retry.go:31] will retry after 1.231175582s: waiting for machine to come up
	I0116 03:44:12.652092  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetIP
	I0116 03:44:12.655815  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:12.656308  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ea:d5", ip: ""} in network mk-default-k8s-diff-port-434445: {Iface:virbr1 ExpiryTime:2024-01-16 04:44:01 +0000 UTC Type:0 Mac:52:54:00:78:ea:d5 Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:default-k8s-diff-port-434445 Clientid:01:52:54:00:78:ea:d5}
	I0116 03:44:12.656383  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined IP address 192.168.50.236 and MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:12.656790  507889 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0116 03:44:12.661880  507889 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 03:44:12.677695  507889 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0116 03:44:12.677794  507889 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 03:44:12.731676  507889 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0116 03:44:12.731794  507889 ssh_runner.go:195] Run: which lz4
	I0116 03:44:12.736614  507889 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0116 03:44:12.741554  507889 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0116 03:44:12.741595  507889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0116 03:44:15.047223  507889 crio.go:444] Took 2.310653 seconds to copy over tarball
	I0116 03:44:15.047386  507889 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0116 03:44:15.614559  507510 api_server.go:269] stopped: https://192.168.61.167:8443/healthz: Get "https://192.168.61.167:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0116 03:44:15.614617  507510 api_server.go:253] Checking apiserver healthz at https://192.168.61.167:8443/healthz ...
	I0116 03:44:16.992197  507510 api_server.go:279] https://192.168.61.167:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0116 03:44:16.992236  507510 api_server.go:103] status: https://192.168.61.167:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0116 03:44:16.992255  507510 api_server.go:253] Checking apiserver healthz at https://192.168.61.167:8443/healthz ...
	I0116 03:44:17.098327  507510 api_server.go:279] https://192.168.61.167:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0116 03:44:17.098365  507510 api_server.go:103] status: https://192.168.61.167:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0116 03:44:17.113518  507510 api_server.go:253] Checking apiserver healthz at https://192.168.61.167:8443/healthz ...
	I0116 03:44:17.133276  507510 api_server.go:279] https://192.168.61.167:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	W0116 03:44:17.133308  507510 api_server.go:103] status: https://192.168.61.167:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	I0116 03:44:17.613843  507510 api_server.go:253] Checking apiserver healthz at https://192.168.61.167:8443/healthz ...
	I0116 03:44:17.621074  507510 api_server.go:279] https://192.168.61.167:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0116 03:44:17.621131  507510 api_server.go:103] status: https://192.168.61.167:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0116 03:44:18.113648  507510 api_server.go:253] Checking apiserver healthz at https://192.168.61.167:8443/healthz ...
	I0116 03:44:18.936452  507510 api_server.go:279] https://192.168.61.167:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0116 03:44:18.936492  507510 api_server.go:103] status: https://192.168.61.167:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0116 03:44:18.936521  507510 api_server.go:253] Checking apiserver healthz at https://192.168.61.167:8443/healthz ...
	I0116 03:44:19.466220  507510 api_server.go:279] https://192.168.61.167:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0116 03:44:19.466259  507510 api_server.go:103] status: https://192.168.61.167:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0116 03:44:19.466278  507510 api_server.go:253] Checking apiserver healthz at https://192.168.61.167:8443/healthz ...
	I0116 03:44:16.750170  507339 pod_ready.go:102] pod "etcd-no-preload-666547" in "kube-system" namespace has status "Ready":"False"
	I0116 03:44:19.438168  507339 pod_ready.go:92] pod "etcd-no-preload-666547" in "kube-system" namespace has status "Ready":"True"
	I0116 03:44:19.438207  507339 pod_ready.go:81] duration metric: took 4.697789344s waiting for pod "etcd-no-preload-666547" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:19.438224  507339 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-666547" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:19.445845  507339 pod_ready.go:92] pod "kube-apiserver-no-preload-666547" in "kube-system" namespace has status "Ready":"True"
	I0116 03:44:19.445875  507339 pod_ready.go:81] duration metric: took 7.641191ms waiting for pod "kube-apiserver-no-preload-666547" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:19.445889  507339 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-666547" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:19.452468  507339 pod_ready.go:92] pod "kube-controller-manager-no-preload-666547" in "kube-system" namespace has status "Ready":"True"
	I0116 03:44:19.452491  507339 pod_ready.go:81] duration metric: took 6.593311ms waiting for pod "kube-controller-manager-no-preload-666547" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:19.452500  507339 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-dcmrn" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:19.459542  507339 pod_ready.go:92] pod "kube-proxy-dcmrn" in "kube-system" namespace has status "Ready":"True"
	I0116 03:44:19.459576  507339 pod_ready.go:81] duration metric: took 7.067817ms waiting for pod "kube-proxy-dcmrn" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:19.459591  507339 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-666547" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:19.966827  507339 pod_ready.go:92] pod "kube-scheduler-no-preload-666547" in "kube-system" namespace has status "Ready":"True"
	I0116 03:44:19.966867  507339 pod_ready.go:81] duration metric: took 507.26823ms waiting for pod "kube-scheduler-no-preload-666547" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:19.966878  507339 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace to be "Ready" ...
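
The pod_ready lines above poll each control-plane pod until its Ready condition reports True, with a per-pod timeout. A rough client-go equivalent of that wait, written as a library-style sketch (it is not minikube's pod_ready.go; the caller is assumed to have already built a clientset from a kubeconfig):

package podwait

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// WaitPodReady polls the named pod until its Ready condition is True
// or the timeout expires.
func WaitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, cond := range pod.Status.Conditions {
				if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pod %s/%s not Ready after %s", ns, name, timeout)
}

A caller would invoke it as WaitPodReady(ctx, cs, "kube-system", "metrics-server-57f55c9bc5-78vfj", 6*time.Minute), matching the 6m0s wait shown in the log.
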
	I0116 03:44:19.946145  507510 api_server.go:279] https://192.168.61.167:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0116 03:44:19.946209  507510 api_server.go:103] status: https://192.168.61.167:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0116 03:44:19.946230  507510 api_server.go:253] Checking apiserver healthz at https://192.168.61.167:8443/healthz ...
	I0116 03:44:20.259035  507510 api_server.go:279] https://192.168.61.167:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0116 03:44:20.259091  507510 api_server.go:103] status: https://192.168.61.167:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0116 03:44:20.259142  507510 api_server.go:253] Checking apiserver healthz at https://192.168.61.167:8443/healthz ...
	I0116 03:44:20.330196  507510 api_server.go:279] https://192.168.61.167:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0116 03:44:20.330237  507510 api_server.go:103] status: https://192.168.61.167:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0116 03:44:20.613624  507510 api_server.go:253] Checking apiserver healthz at https://192.168.61.167:8443/healthz ...
	I0116 03:44:20.621956  507510 api_server.go:279] https://192.168.61.167:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0116 03:44:20.622008  507510 api_server.go:103] status: https://192.168.61.167:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0116 03:44:21.113536  507510 api_server.go:253] Checking apiserver healthz at https://192.168.61.167:8443/healthz ...
	I0116 03:44:21.125326  507510 api_server.go:279] https://192.168.61.167:8443/healthz returned 200:
	ok
	I0116 03:44:21.137555  507510 api_server.go:141] control plane version: v1.16.0
	I0116 03:44:21.137602  507510 api_server.go:131] duration metric: took 10.524468396s to wait for apiserver health ...
	I0116 03:44:21.137616  507510 cni.go:84] Creating CNI manager for ""
	I0116 03:44:21.137625  507510 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 03:44:21.139682  507510 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
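
The healthz exchange above is a typical startup sequence for a restarted apiserver: anonymous requests get 403 until the RBAC bootstrap roles exist, then 500 while post-start hooks finish, and finally 200, at which point minikube moves on to CNI configuration. A minimal Go sketch of such a polling loop against the endpoint from this log (InsecureSkipVerify is used only because the apiserver certificate is self-signed; this is not minikube's api_server.go):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// pollHealthz polls url until it returns HTTP 200 or the timeout expires.
func pollHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Illustration only: the apiserver serves a self-signed certificate.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver not healthy after %s", timeout)
}

func main() {
	if err := pollHealthz("https://192.168.61.167:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}
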
	I0116 03:44:16.445685  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:16.446216  507257 main.go:141] libmachine: (embed-certs-615980) DBG | unable to find current IP address of domain embed-certs-615980 in network mk-embed-certs-615980
	I0116 03:44:16.446246  507257 main.go:141] libmachine: (embed-certs-615980) DBG | I0116 03:44:16.446137  508731 retry.go:31] will retry after 1.400972s: waiting for machine to come up
	I0116 03:44:17.848447  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:17.848964  507257 main.go:141] libmachine: (embed-certs-615980) DBG | unable to find current IP address of domain embed-certs-615980 in network mk-embed-certs-615980
	I0116 03:44:17.848991  507257 main.go:141] libmachine: (embed-certs-615980) DBG | I0116 03:44:17.848916  508731 retry.go:31] will retry after 2.293115324s: waiting for machine to come up
	I0116 03:44:20.145242  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:20.145899  507257 main.go:141] libmachine: (embed-certs-615980) DBG | unable to find current IP address of domain embed-certs-615980 in network mk-embed-certs-615980
	I0116 03:44:20.145933  507257 main.go:141] libmachine: (embed-certs-615980) DBG | I0116 03:44:20.145842  508731 retry.go:31] will retry after 2.158183619s: waiting for machine to come up
	I0116 03:44:18.744370  507889 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.696918616s)
	I0116 03:44:18.744426  507889 crio.go:451] Took 3.697118 seconds to extract the tarball
	I0116 03:44:18.744440  507889 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0116 03:44:18.792685  507889 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 03:44:18.868262  507889 crio.go:496] all images are preloaded for cri-o runtime.
	I0116 03:44:18.868291  507889 cache_images.go:84] Images are preloaded, skipping loading
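
The preload check before and after extraction relies on `sudo crictl images --output json`: when the expected control-plane image tags are missing the tarball is copied over, and afterwards the same listing confirms the images are present. A small Go sketch of that check, assuming the usual crictl JSON shape of a top-level "images" array with per-image "repoTags" (the schema is an assumption, not taken from this report):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// imageList models only the fields this sketch needs from `crictl images --output json`.
type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// hasImage shells out to crictl and reports whether the given tag is present.
func hasImage(tag string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		return false, err
	}
	for _, img := range list.Images {
		for _, t := range img.RepoTags {
			if t == tag {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.28.4")
	fmt.Println(ok, err)
}
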
	I0116 03:44:18.868382  507889 ssh_runner.go:195] Run: crio config
	I0116 03:44:18.954026  507889 cni.go:84] Creating CNI manager for ""
	I0116 03:44:18.954060  507889 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 03:44:18.954085  507889 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0116 03:44:18.954138  507889 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.236 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-434445 NodeName:default-k8s-diff-port-434445 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.236"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.236 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0116 03:44:18.954362  507889 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.236
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-434445"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.236
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.236"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0116 03:44:18.954483  507889 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-434445 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.236
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-434445 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
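
The kubeadm.yaml and kubelet unit dumped above are rendered from templates filled with the option structs logged at kubeadm.go:176 and kubeadm.go:976. As a toy illustration of that rendering step, here is a text/template sketch that produces only the InitConfiguration stanza; the struct fields are invented for the sketch and are not minikube's template data:

package main

import (
	"os"
	"text/template"
)

// initCfg carries just the values this sketch substitutes into the template.
type initCfg struct {
	AdvertiseAddress string
	BindPort         int
	NodeName         string
	NodeIP           string
}

const initTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
  taints: []
`

func main() {
	tmpl := template.Must(template.New("init").Parse(initTmpl))
	cfg := initCfg{
		AdvertiseAddress: "192.168.50.236",
		BindPort:         8444,
		NodeName:         "default-k8s-diff-port-434445",
		NodeIP:           "192.168.50.236",
	}
	if err := tmpl.Execute(os.Stdout, cfg); err != nil {
		panic(err)
	}
}
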
	I0116 03:44:18.954557  507889 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0116 03:44:18.966046  507889 binaries.go:44] Found k8s binaries, skipping transfer
	I0116 03:44:18.966143  507889 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0116 03:44:18.977441  507889 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (388 bytes)
	I0116 03:44:18.997304  507889 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0116 03:44:19.016597  507889 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2115 bytes)
	I0116 03:44:19.035635  507889 ssh_runner.go:195] Run: grep 192.168.50.236	control-plane.minikube.internal$ /etc/hosts
	I0116 03:44:19.039882  507889 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.236	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 03:44:19.053342  507889 certs.go:56] Setting up /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/default-k8s-diff-port-434445 for IP: 192.168.50.236
	I0116 03:44:19.053383  507889 certs.go:190] acquiring lock for shared ca certs: {Name:mkab5aeea3e0ed69f3592dc8f418e07c6075b130 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:44:19.053580  507889 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17965-468241/.minikube/ca.key
	I0116 03:44:19.053655  507889 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17965-468241/.minikube/proxy-client-ca.key
	I0116 03:44:19.053773  507889 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/default-k8s-diff-port-434445/client.key
	I0116 03:44:19.053920  507889 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/default-k8s-diff-port-434445/apiserver.key.4e4dee8d
	I0116 03:44:19.053994  507889 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/default-k8s-diff-port-434445/proxy-client.key
	I0116 03:44:19.054154  507889 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/475478.pem (1338 bytes)
	W0116 03:44:19.054198  507889 certs.go:433] ignoring /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/475478_empty.pem, impossibly tiny 0 bytes
	I0116 03:44:19.054215  507889 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca-key.pem (1679 bytes)
	I0116 03:44:19.054249  507889 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem (1078 bytes)
	I0116 03:44:19.054286  507889 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/cert.pem (1123 bytes)
	I0116 03:44:19.054318  507889 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/key.pem (1679 bytes)
	I0116 03:44:19.054373  507889 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17965-468241/.minikube/files/etc/ssl/certs/4754782.pem (1708 bytes)
	I0116 03:44:19.055259  507889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/default-k8s-diff-port-434445/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0116 03:44:19.086636  507889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/default-k8s-diff-port-434445/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0116 03:44:19.117759  507889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/default-k8s-diff-port-434445/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0116 03:44:19.144530  507889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/default-k8s-diff-port-434445/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0116 03:44:19.170423  507889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0116 03:44:19.198224  507889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0116 03:44:19.223514  507889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0116 03:44:19.250858  507889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0116 03:44:19.276922  507889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/files/etc/ssl/certs/4754782.pem --> /usr/share/ca-certificates/4754782.pem (1708 bytes)
	I0116 03:44:19.302621  507889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0116 03:44:19.330021  507889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/certs/475478.pem --> /usr/share/ca-certificates/475478.pem (1338 bytes)
	I0116 03:44:19.358108  507889 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0116 03:44:19.379126  507889 ssh_runner.go:195] Run: openssl version
	I0116 03:44:19.386675  507889 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0116 03:44:19.398759  507889 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0116 03:44:19.404201  507889 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 16 02:35 /usr/share/ca-certificates/minikubeCA.pem
	I0116 03:44:19.404283  507889 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0116 03:44:19.411067  507889 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0116 03:44:19.422608  507889 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/475478.pem && ln -fs /usr/share/ca-certificates/475478.pem /etc/ssl/certs/475478.pem"
	I0116 03:44:19.434422  507889 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/475478.pem
	I0116 03:44:19.440018  507889 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 16 02:44 /usr/share/ca-certificates/475478.pem
	I0116 03:44:19.440103  507889 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/475478.pem
	I0116 03:44:19.446469  507889 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/475478.pem /etc/ssl/certs/51391683.0"
	I0116 03:44:19.460130  507889 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4754782.pem && ln -fs /usr/share/ca-certificates/4754782.pem /etc/ssl/certs/4754782.pem"
	I0116 03:44:19.473886  507889 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4754782.pem
	I0116 03:44:19.478781  507889 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 16 02:44 /usr/share/ca-certificates/4754782.pem
	I0116 03:44:19.478858  507889 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4754782.pem
	I0116 03:44:19.484826  507889 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4754782.pem /etc/ssl/certs/3ec20f2e.0"
	I0116 03:44:19.495710  507889 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0116 03:44:19.500842  507889 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0116 03:44:19.507646  507889 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0116 03:44:19.515247  507889 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0116 03:44:19.523964  507889 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0116 03:44:19.532379  507889 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0116 03:44:19.540067  507889 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
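
The openssl x509 -checkend 86400 runs above simply verify that each control-plane certificate remains valid for at least one more day before it is reused. A small Go equivalent using crypto/x509 (the path is taken from the log; reading it requires root on the node):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
	fmt.Println(soon, err)
}
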
	I0116 03:44:19.548614  507889 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-434445 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-434445 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.50.236 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 03:44:19.548812  507889 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0116 03:44:19.548900  507889 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0116 03:44:19.595803  507889 cri.go:89] found id: ""
	I0116 03:44:19.595910  507889 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0116 03:44:19.610615  507889 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0116 03:44:19.610647  507889 kubeadm.go:636] restartCluster start
	I0116 03:44:19.610726  507889 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0116 03:44:19.624175  507889 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:19.625683  507889 kubeconfig.go:92] found "default-k8s-diff-port-434445" server: "https://192.168.50.236:8444"
	I0116 03:44:19.628685  507889 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0116 03:44:19.640309  507889 api_server.go:166] Checking apiserver status ...
	I0116 03:44:19.640390  507889 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:19.653938  507889 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:20.141193  507889 api_server.go:166] Checking apiserver status ...
	I0116 03:44:20.141285  507889 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:20.154331  507889 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:20.640562  507889 api_server.go:166] Checking apiserver status ...
	I0116 03:44:20.640691  507889 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:20.657774  507889 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:21.141268  507889 api_server.go:166] Checking apiserver status ...
	I0116 03:44:21.141371  507889 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:21.158792  507889 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:21.141315  507510 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0116 03:44:21.168450  507510 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0116 03:44:21.206907  507510 system_pods.go:43] waiting for kube-system pods to appear ...
	I0116 03:44:21.222998  507510 system_pods.go:59] 7 kube-system pods found
	I0116 03:44:21.223072  507510 system_pods.go:61] "coredns-5644d7b6d9-7q4wc" [003ba660-e3c5-4a98-be67-75e43dc32b37] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0116 03:44:21.223084  507510 system_pods.go:61] "etcd-old-k8s-version-696770" [b029f446-15b1-4720-af3a-b651b778fc0d] Running
	I0116 03:44:21.223094  507510 system_pods.go:61] "kube-apiserver-old-k8s-version-696770" [a9597e33-db8c-48e5-b119-d6d97d8d8e3f] Running
	I0116 03:44:21.223114  507510 system_pods.go:61] "kube-controller-manager-old-k8s-version-696770" [901fd518-04a1-4de0-baa2-08c7d57a587d] Running
	I0116 03:44:21.223123  507510 system_pods.go:61] "kube-proxy-9pfdj" [ac00ed93-abe8-4f53-8e63-fa63589fbf5c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0116 03:44:21.223134  507510 system_pods.go:61] "kube-scheduler-old-k8s-version-696770" [a8d74e76-6c22-4d82-b954-4025dff18279] Running
	I0116 03:44:21.223146  507510 system_pods.go:61] "storage-provisioner" [b04dacf9-8137-4f22-ae36-147d08fd9b60] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0116 03:44:21.223158  507510 system_pods.go:74] duration metric: took 16.220665ms to wait for pod list to return data ...
	I0116 03:44:21.223173  507510 node_conditions.go:102] verifying NodePressure condition ...
	I0116 03:44:21.228670  507510 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 03:44:21.228715  507510 node_conditions.go:123] node cpu capacity is 2
	I0116 03:44:21.228734  507510 node_conditions.go:105] duration metric: took 5.552228ms to run NodePressure ...
	I0116 03:44:21.228760  507510 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:44:21.576565  507510 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0116 03:44:21.581017  507510 retry.go:31] will retry after 323.975879ms: kubelet not initialised
	I0116 03:44:21.914790  507510 retry.go:31] will retry after 258.393503ms: kubelet not initialised
	I0116 03:44:22.180592  507510 retry.go:31] will retry after 582.791922ms: kubelet not initialised
	I0116 03:44:22.769880  507510 retry.go:31] will retry after 961.779974ms: kubelet not initialised
	I0116 03:44:23.739015  507510 retry.go:31] will retry after 686.353156ms: kubelet not initialised
	I0116 03:44:24.431951  507510 retry.go:31] will retry after 2.073440094s: kubelet not initialised
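
The retry.go lines above wait for the restarted kubelet with growing, jittered delays between attempts. A generic sketch of that retry-with-backoff pattern (not minikube's retry package):

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// retry calls fn until it succeeds or attempts are exhausted, sleeping with
// exponential backoff plus jitter between tries, similar to the log above.
func retry(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		sleep := base*time.Duration(1<<i) + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %s: %v\n", sleep, err)
		time.Sleep(sleep)
	}
	return err
}

func main() {
	_ = retry(5, 300*time.Millisecond, func() error {
		return fmt.Errorf("kubelet not initialised") // placeholder condition
	})
}
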
	I0116 03:44:21.976301  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:44:23.977710  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:44:22.305212  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:22.305701  507257 main.go:141] libmachine: (embed-certs-615980) DBG | unable to find current IP address of domain embed-certs-615980 in network mk-embed-certs-615980
	I0116 03:44:22.305732  507257 main.go:141] libmachine: (embed-certs-615980) DBG | I0116 03:44:22.305662  508731 retry.go:31] will retry after 3.080436267s: waiting for machine to come up
	I0116 03:44:25.389414  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:25.389850  507257 main.go:141] libmachine: (embed-certs-615980) DBG | unable to find current IP address of domain embed-certs-615980 in network mk-embed-certs-615980
	I0116 03:44:25.389875  507257 main.go:141] libmachine: (embed-certs-615980) DBG | I0116 03:44:25.389828  508731 retry.go:31] will retry after 2.730339967s: waiting for machine to come up
	I0116 03:44:21.640823  507889 api_server.go:166] Checking apiserver status ...
	I0116 03:44:21.641083  507889 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:21.656391  507889 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:22.141134  507889 api_server.go:166] Checking apiserver status ...
	I0116 03:44:22.141242  507889 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:22.157848  507889 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:22.641247  507889 api_server.go:166] Checking apiserver status ...
	I0116 03:44:22.641371  507889 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:22.654425  507889 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:23.140719  507889 api_server.go:166] Checking apiserver status ...
	I0116 03:44:23.140827  507889 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:23.153823  507889 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:23.641193  507889 api_server.go:166] Checking apiserver status ...
	I0116 03:44:23.641298  507889 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:23.654061  507889 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:24.141196  507889 api_server.go:166] Checking apiserver status ...
	I0116 03:44:24.141290  507889 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:24.161415  507889 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:24.640416  507889 api_server.go:166] Checking apiserver status ...
	I0116 03:44:24.640514  507889 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:24.670258  507889 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:25.140571  507889 api_server.go:166] Checking apiserver status ...
	I0116 03:44:25.140673  507889 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:25.157823  507889 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:25.641188  507889 api_server.go:166] Checking apiserver status ...
	I0116 03:44:25.641284  507889 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:25.655917  507889 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:26.141241  507889 api_server.go:166] Checking apiserver status ...
	I0116 03:44:26.141357  507889 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:26.157447  507889 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:26.511961  507510 retry.go:31] will retry after 4.006598367s: kubelet not initialised
	I0116 03:44:26.473653  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:44:28.474914  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:44:28.122340  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:28.122704  507257 main.go:141] libmachine: (embed-certs-615980) DBG | unable to find current IP address of domain embed-certs-615980 in network mk-embed-certs-615980
	I0116 03:44:28.122735  507257 main.go:141] libmachine: (embed-certs-615980) DBG | I0116 03:44:28.122676  508731 retry.go:31] will retry after 4.170800657s: waiting for machine to come up
	I0116 03:44:26.641408  507889 api_server.go:166] Checking apiserver status ...
	I0116 03:44:26.641510  507889 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:26.654505  507889 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:27.141033  507889 api_server.go:166] Checking apiserver status ...
	I0116 03:44:27.141129  507889 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:27.154208  507889 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:27.640701  507889 api_server.go:166] Checking apiserver status ...
	I0116 03:44:27.640785  507889 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:27.653964  507889 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:28.141330  507889 api_server.go:166] Checking apiserver status ...
	I0116 03:44:28.141406  507889 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:28.153419  507889 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:28.640986  507889 api_server.go:166] Checking apiserver status ...
	I0116 03:44:28.641076  507889 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:28.654357  507889 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:29.141250  507889 api_server.go:166] Checking apiserver status ...
	I0116 03:44:29.141335  507889 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:29.154899  507889 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:29.640619  507889 api_server.go:166] Checking apiserver status ...
	I0116 03:44:29.640717  507889 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:29.654653  507889 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:29.654692  507889 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0116 03:44:29.654701  507889 kubeadm.go:1135] stopping kube-system containers ...
	I0116 03:44:29.654713  507889 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0116 03:44:29.654769  507889 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0116 03:44:29.697617  507889 cri.go:89] found id: ""
	I0116 03:44:29.697719  507889 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0116 03:44:29.719069  507889 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0116 03:44:29.735791  507889 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0116 03:44:29.735872  507889 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0116 03:44:29.748788  507889 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0116 03:44:29.748823  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:44:29.874894  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:44:30.787232  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:44:31.009234  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:44:31.136220  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:44:31.215330  507889 api_server.go:52] waiting for apiserver process to appear ...
	I0116 03:44:31.215416  507889 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
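
Every "Checking apiserver status" line in this restart sequence shells out to pgrep to find a running kube-apiserver; until kubeadm has brought the static pods back, pgrep exits 1 and the probe is retried. A bare-bones Go version of that probe, using the same pgrep flags as the log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// apiserverPID returns the newest PID matching the kube-apiserver command line,
// or an error if nothing matches (pgrep exits non-zero in that case).
func apiserverPID() (string, error) {
	out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	if err != nil {
		return "", fmt.Errorf("apiserver not running: %w", err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	pid, err := apiserverPID()
	fmt.Println(pid, err)
}
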
	I0116 03:44:30.526372  507510 retry.go:31] will retry after 4.363756335s: kubelet not initialised
	I0116 03:44:32.295936  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:32.296442  507257 main.go:141] libmachine: (embed-certs-615980) Found IP for machine: 192.168.72.159
	I0116 03:44:32.296483  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has current primary IP address 192.168.72.159 and MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:32.296499  507257 main.go:141] libmachine: (embed-certs-615980) Reserving static IP address...
	I0116 03:44:32.297078  507257 main.go:141] libmachine: (embed-certs-615980) DBG | found host DHCP lease matching {name: "embed-certs-615980", mac: "52:54:00:d4:a6:40", ip: "192.168.72.159"} in network mk-embed-certs-615980: {Iface:virbr4 ExpiryTime:2024-01-16 04:44:24 +0000 UTC Type:0 Mac:52:54:00:d4:a6:40 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-615980 Clientid:01:52:54:00:d4:a6:40}
	I0116 03:44:32.297121  507257 main.go:141] libmachine: (embed-certs-615980) Reserved static IP address: 192.168.72.159
	I0116 03:44:32.297140  507257 main.go:141] libmachine: (embed-certs-615980) DBG | skip adding static IP to network mk-embed-certs-615980 - found existing host DHCP lease matching {name: "embed-certs-615980", mac: "52:54:00:d4:a6:40", ip: "192.168.72.159"}
	I0116 03:44:32.297160  507257 main.go:141] libmachine: (embed-certs-615980) DBG | Getting to WaitForSSH function...
	I0116 03:44:32.297179  507257 main.go:141] libmachine: (embed-certs-615980) Waiting for SSH to be available...
	I0116 03:44:32.299440  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:32.299839  507257 main.go:141] libmachine: (embed-certs-615980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:a6:40", ip: ""} in network mk-embed-certs-615980: {Iface:virbr4 ExpiryTime:2024-01-16 04:44:24 +0000 UTC Type:0 Mac:52:54:00:d4:a6:40 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-615980 Clientid:01:52:54:00:d4:a6:40}
	I0116 03:44:32.299870  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined IP address 192.168.72.159 and MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:32.300064  507257 main.go:141] libmachine: (embed-certs-615980) DBG | Using SSH client type: external
	I0116 03:44:32.300098  507257 main.go:141] libmachine: (embed-certs-615980) DBG | Using SSH private key: /home/jenkins/minikube-integration/17965-468241/.minikube/machines/embed-certs-615980/id_rsa (-rw-------)
	I0116 03:44:32.300133  507257 main.go:141] libmachine: (embed-certs-615980) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.159 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17965-468241/.minikube/machines/embed-certs-615980/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0116 03:44:32.300153  507257 main.go:141] libmachine: (embed-certs-615980) DBG | About to run SSH command:
	I0116 03:44:32.300172  507257 main.go:141] libmachine: (embed-certs-615980) DBG | exit 0
	I0116 03:44:32.396718  507257 main.go:141] libmachine: (embed-certs-615980) DBG | SSH cmd err, output: <nil>: 
	I0116 03:44:32.397111  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetConfigRaw
	I0116 03:44:32.397901  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetIP
	I0116 03:44:32.400997  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:32.401502  507257 main.go:141] libmachine: (embed-certs-615980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:a6:40", ip: ""} in network mk-embed-certs-615980: {Iface:virbr4 ExpiryTime:2024-01-16 04:44:24 +0000 UTC Type:0 Mac:52:54:00:d4:a6:40 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-615980 Clientid:01:52:54:00:d4:a6:40}
	I0116 03:44:32.401540  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined IP address 192.168.72.159 and MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:32.402036  507257 profile.go:148] Saving config to /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/embed-certs-615980/config.json ...
	I0116 03:44:32.402259  507257 machine.go:88] provisioning docker machine ...
	I0116 03:44:32.402281  507257 main.go:141] libmachine: (embed-certs-615980) Calling .DriverName
	I0116 03:44:32.402539  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetMachineName
	I0116 03:44:32.402759  507257 buildroot.go:166] provisioning hostname "embed-certs-615980"
	I0116 03:44:32.402786  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetMachineName
	I0116 03:44:32.402966  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHHostname
	I0116 03:44:32.405935  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:32.406344  507257 main.go:141] libmachine: (embed-certs-615980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:a6:40", ip: ""} in network mk-embed-certs-615980: {Iface:virbr4 ExpiryTime:2024-01-16 04:44:24 +0000 UTC Type:0 Mac:52:54:00:d4:a6:40 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-615980 Clientid:01:52:54:00:d4:a6:40}
	I0116 03:44:32.406384  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined IP address 192.168.72.159 and MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:32.406585  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHPort
	I0116 03:44:32.406821  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHKeyPath
	I0116 03:44:32.407054  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHKeyPath
	I0116 03:44:32.407219  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHUsername
	I0116 03:44:32.407399  507257 main.go:141] libmachine: Using SSH client type: native
	I0116 03:44:32.407754  507257 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.72.159 22 <nil> <nil>}
	I0116 03:44:32.407768  507257 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-615980 && echo "embed-certs-615980" | sudo tee /etc/hostname
	I0116 03:44:32.561584  507257 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-615980
	
	I0116 03:44:32.561618  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHHostname
	I0116 03:44:32.564566  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:32.565004  507257 main.go:141] libmachine: (embed-certs-615980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:a6:40", ip: ""} in network mk-embed-certs-615980: {Iface:virbr4 ExpiryTime:2024-01-16 04:44:24 +0000 UTC Type:0 Mac:52:54:00:d4:a6:40 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-615980 Clientid:01:52:54:00:d4:a6:40}
	I0116 03:44:32.565033  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined IP address 192.168.72.159 and MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:32.565232  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHPort
	I0116 03:44:32.565481  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHKeyPath
	I0116 03:44:32.565672  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHKeyPath
	I0116 03:44:32.565843  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHUsername
	I0116 03:44:32.566045  507257 main.go:141] libmachine: Using SSH client type: native
	I0116 03:44:32.566521  507257 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.72.159 22 <nil> <nil>}
	I0116 03:44:32.566549  507257 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-615980' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-615980/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-615980' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0116 03:44:32.718945  507257 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0116 03:44:32.719005  507257 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17965-468241/.minikube CaCertPath:/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17965-468241/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17965-468241/.minikube}
	I0116 03:44:32.719037  507257 buildroot.go:174] setting up certificates
	I0116 03:44:32.719051  507257 provision.go:83] configureAuth start
	I0116 03:44:32.719081  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetMachineName
	I0116 03:44:32.719397  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetIP
	I0116 03:44:32.722474  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:32.722938  507257 main.go:141] libmachine: (embed-certs-615980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:a6:40", ip: ""} in network mk-embed-certs-615980: {Iface:virbr4 ExpiryTime:2024-01-16 04:44:24 +0000 UTC Type:0 Mac:52:54:00:d4:a6:40 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-615980 Clientid:01:52:54:00:d4:a6:40}
	I0116 03:44:32.722972  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined IP address 192.168.72.159 and MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:32.723136  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHHostname
	I0116 03:44:32.725821  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:32.726246  507257 main.go:141] libmachine: (embed-certs-615980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:a6:40", ip: ""} in network mk-embed-certs-615980: {Iface:virbr4 ExpiryTime:2024-01-16 04:44:24 +0000 UTC Type:0 Mac:52:54:00:d4:a6:40 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-615980 Clientid:01:52:54:00:d4:a6:40}
	I0116 03:44:32.726277  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined IP address 192.168.72.159 and MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:32.726448  507257 provision.go:138] copyHostCerts
	I0116 03:44:32.726535  507257 exec_runner.go:144] found /home/jenkins/minikube-integration/17965-468241/.minikube/ca.pem, removing ...
	I0116 03:44:32.726622  507257 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17965-468241/.minikube/ca.pem
	I0116 03:44:32.726769  507257 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17965-468241/.minikube/ca.pem (1078 bytes)
	I0116 03:44:32.726971  507257 exec_runner.go:144] found /home/jenkins/minikube-integration/17965-468241/.minikube/cert.pem, removing ...
	I0116 03:44:32.726983  507257 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17965-468241/.minikube/cert.pem
	I0116 03:44:32.727015  507257 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17965-468241/.minikube/cert.pem (1123 bytes)
	I0116 03:44:32.727099  507257 exec_runner.go:144] found /home/jenkins/minikube-integration/17965-468241/.minikube/key.pem, removing ...
	I0116 03:44:32.727116  507257 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17965-468241/.minikube/key.pem
	I0116 03:44:32.727144  507257 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17965-468241/.minikube/key.pem (1679 bytes)
	I0116 03:44:32.727212  507257 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17965-468241/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca-key.pem org=jenkins.embed-certs-615980 san=[192.168.72.159 192.168.72.159 localhost 127.0.0.1 minikube embed-certs-615980]
	I0116 03:44:32.921694  507257 provision.go:172] copyRemoteCerts
	I0116 03:44:32.921764  507257 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0116 03:44:32.921798  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHHostname
	I0116 03:44:32.924951  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:32.925329  507257 main.go:141] libmachine: (embed-certs-615980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:a6:40", ip: ""} in network mk-embed-certs-615980: {Iface:virbr4 ExpiryTime:2024-01-16 04:44:24 +0000 UTC Type:0 Mac:52:54:00:d4:a6:40 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-615980 Clientid:01:52:54:00:d4:a6:40}
	I0116 03:44:32.925362  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined IP address 192.168.72.159 and MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:32.925534  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHPort
	I0116 03:44:32.925855  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHKeyPath
	I0116 03:44:32.926135  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHUsername
	I0116 03:44:32.926390  507257 sshutil.go:53] new ssh client: &{IP:192.168.72.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/embed-certs-615980/id_rsa Username:docker}
	I0116 03:44:33.025856  507257 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0116 03:44:33.055403  507257 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0116 03:44:33.087908  507257 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0116 03:44:33.116847  507257 provision.go:86] duration metric: configureAuth took 397.777297ms
	I0116 03:44:33.116886  507257 buildroot.go:189] setting minikube options for container-runtime
	I0116 03:44:33.117136  507257 config.go:182] Loaded profile config "embed-certs-615980": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 03:44:33.117267  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHHostname
	I0116 03:44:33.120452  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:33.120915  507257 main.go:141] libmachine: (embed-certs-615980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:a6:40", ip: ""} in network mk-embed-certs-615980: {Iface:virbr4 ExpiryTime:2024-01-16 04:44:24 +0000 UTC Type:0 Mac:52:54:00:d4:a6:40 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-615980 Clientid:01:52:54:00:d4:a6:40}
	I0116 03:44:33.120949  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined IP address 192.168.72.159 and MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:33.121189  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHPort
	I0116 03:44:33.121442  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHKeyPath
	I0116 03:44:33.121636  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHKeyPath
	I0116 03:44:33.121778  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHUsername
	I0116 03:44:33.121966  507257 main.go:141] libmachine: Using SSH client type: native
	I0116 03:44:33.122333  507257 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.72.159 22 <nil> <nil>}
	I0116 03:44:33.122359  507257 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0116 03:44:33.486009  507257 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0116 03:44:33.486147  507257 machine.go:91] provisioned docker machine in 1.083869863s
	I0116 03:44:33.486202  507257 start.go:300] post-start starting for "embed-certs-615980" (driver="kvm2")
	I0116 03:44:33.486239  507257 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0116 03:44:33.486282  507257 main.go:141] libmachine: (embed-certs-615980) Calling .DriverName
	I0116 03:44:33.486719  507257 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0116 03:44:33.486755  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHHostname
	I0116 03:44:33.490226  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:33.490676  507257 main.go:141] libmachine: (embed-certs-615980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:a6:40", ip: ""} in network mk-embed-certs-615980: {Iface:virbr4 ExpiryTime:2024-01-16 04:44:24 +0000 UTC Type:0 Mac:52:54:00:d4:a6:40 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-615980 Clientid:01:52:54:00:d4:a6:40}
	I0116 03:44:33.490743  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined IP address 192.168.72.159 and MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:33.490863  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHPort
	I0116 03:44:33.491117  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHKeyPath
	I0116 03:44:33.491299  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHUsername
	I0116 03:44:33.491478  507257 sshutil.go:53] new ssh client: &{IP:192.168.72.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/embed-certs-615980/id_rsa Username:docker}
	I0116 03:44:33.590039  507257 ssh_runner.go:195] Run: cat /etc/os-release
	I0116 03:44:33.596095  507257 info.go:137] Remote host: Buildroot 2021.02.12
	I0116 03:44:33.596124  507257 filesync.go:126] Scanning /home/jenkins/minikube-integration/17965-468241/.minikube/addons for local assets ...
	I0116 03:44:33.596206  507257 filesync.go:126] Scanning /home/jenkins/minikube-integration/17965-468241/.minikube/files for local assets ...
	I0116 03:44:33.596295  507257 filesync.go:149] local asset: /home/jenkins/minikube-integration/17965-468241/.minikube/files/etc/ssl/certs/4754782.pem -> 4754782.pem in /etc/ssl/certs
	I0116 03:44:33.596437  507257 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0116 03:44:33.609260  507257 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/files/etc/ssl/certs/4754782.pem --> /etc/ssl/certs/4754782.pem (1708 bytes)
	I0116 03:44:33.642578  507257 start.go:303] post-start completed in 156.336318ms
	I0116 03:44:33.642651  507257 fix.go:56] fixHost completed within 22.644969219s
	I0116 03:44:33.642703  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHHostname
	I0116 03:44:33.645616  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:33.645988  507257 main.go:141] libmachine: (embed-certs-615980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:a6:40", ip: ""} in network mk-embed-certs-615980: {Iface:virbr4 ExpiryTime:2024-01-16 04:44:24 +0000 UTC Type:0 Mac:52:54:00:d4:a6:40 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-615980 Clientid:01:52:54:00:d4:a6:40}
	I0116 03:44:33.646017  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined IP address 192.168.72.159 and MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:33.646277  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHPort
	I0116 03:44:33.646514  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHKeyPath
	I0116 03:44:33.646720  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHKeyPath
	I0116 03:44:33.646910  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHUsername
	I0116 03:44:33.647179  507257 main.go:141] libmachine: Using SSH client type: native
	I0116 03:44:33.647655  507257 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.72.159 22 <nil> <nil>}
	I0116 03:44:33.647682  507257 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0116 03:44:33.781805  507257 main.go:141] libmachine: SSH cmd err, output: <nil>: 1705376673.706960834
	
	I0116 03:44:33.781839  507257 fix.go:206] guest clock: 1705376673.706960834
	I0116 03:44:33.781850  507257 fix.go:219] Guest: 2024-01-16 03:44:33.706960834 +0000 UTC Remote: 2024-01-16 03:44:33.642657737 +0000 UTC m=+367.429386706 (delta=64.303097ms)
	I0116 03:44:33.781879  507257 fix.go:190] guest clock delta is within tolerance: 64.303097ms
	I0116 03:44:33.781890  507257 start.go:83] releasing machines lock for "embed-certs-615980", held for 22.784266536s
	I0116 03:44:33.781917  507257 main.go:141] libmachine: (embed-certs-615980) Calling .DriverName
	I0116 03:44:33.782225  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetIP
	I0116 03:44:33.785113  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:33.785495  507257 main.go:141] libmachine: (embed-certs-615980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:a6:40", ip: ""} in network mk-embed-certs-615980: {Iface:virbr4 ExpiryTime:2024-01-16 04:44:24 +0000 UTC Type:0 Mac:52:54:00:d4:a6:40 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-615980 Clientid:01:52:54:00:d4:a6:40}
	I0116 03:44:33.785530  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined IP address 192.168.72.159 and MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:33.785718  507257 main.go:141] libmachine: (embed-certs-615980) Calling .DriverName
	I0116 03:44:33.786427  507257 main.go:141] libmachine: (embed-certs-615980) Calling .DriverName
	I0116 03:44:33.786655  507257 main.go:141] libmachine: (embed-certs-615980) Calling .DriverName
	I0116 03:44:33.786751  507257 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0116 03:44:33.786799  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHHostname
	I0116 03:44:33.786938  507257 ssh_runner.go:195] Run: cat /version.json
	I0116 03:44:33.786967  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHHostname
	I0116 03:44:33.790084  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:33.790288  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:33.790454  507257 main.go:141] libmachine: (embed-certs-615980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:a6:40", ip: ""} in network mk-embed-certs-615980: {Iface:virbr4 ExpiryTime:2024-01-16 04:44:24 +0000 UTC Type:0 Mac:52:54:00:d4:a6:40 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-615980 Clientid:01:52:54:00:d4:a6:40}
	I0116 03:44:33.790485  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined IP address 192.168.72.159 and MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:33.790655  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHPort
	I0116 03:44:33.790787  507257 main.go:141] libmachine: (embed-certs-615980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:a6:40", ip: ""} in network mk-embed-certs-615980: {Iface:virbr4 ExpiryTime:2024-01-16 04:44:24 +0000 UTC Type:0 Mac:52:54:00:d4:a6:40 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-615980 Clientid:01:52:54:00:d4:a6:40}
	I0116 03:44:33.790831  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined IP address 192.168.72.159 and MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:33.790899  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHKeyPath
	I0116 03:44:33.791007  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHPort
	I0116 03:44:33.791091  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHUsername
	I0116 03:44:33.791193  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHKeyPath
	I0116 03:44:33.791269  507257 sshutil.go:53] new ssh client: &{IP:192.168.72.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/embed-certs-615980/id_rsa Username:docker}
	I0116 03:44:33.791363  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHUsername
	I0116 03:44:33.791515  507257 sshutil.go:53] new ssh client: &{IP:192.168.72.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/embed-certs-615980/id_rsa Username:docker}
	I0116 03:44:33.907036  507257 ssh_runner.go:195] Run: systemctl --version
	I0116 03:44:33.913776  507257 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0116 03:44:34.062888  507257 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0116 03:44:34.070435  507257 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0116 03:44:34.070539  507257 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0116 03:44:34.091957  507257 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0116 03:44:34.091993  507257 start.go:475] detecting cgroup driver to use...
	I0116 03:44:34.092099  507257 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0116 03:44:34.108007  507257 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0116 03:44:34.123223  507257 docker.go:217] disabling cri-docker service (if available) ...
	I0116 03:44:34.123314  507257 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0116 03:44:34.141242  507257 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0116 03:44:34.157053  507257 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0116 03:44:34.274186  507257 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0116 03:44:34.427694  507257 docker.go:233] disabling docker service ...
	I0116 03:44:34.427785  507257 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0116 03:44:34.442789  507257 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0116 03:44:34.459761  507257 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0116 03:44:34.592453  507257 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0116 03:44:34.715991  507257 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0116 03:44:34.732175  507257 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0116 03:44:34.751885  507257 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0116 03:44:34.751989  507257 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 03:44:34.763769  507257 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0116 03:44:34.763853  507257 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 03:44:34.774444  507257 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 03:44:34.784975  507257 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 03:44:34.797634  507257 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0116 03:44:34.810962  507257 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0116 03:44:34.822224  507257 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0116 03:44:34.822314  507257 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0116 03:44:34.840500  507257 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0116 03:44:34.852285  507257 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0116 03:44:34.970828  507257 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0116 03:44:35.163097  507257 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0116 03:44:35.163242  507257 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0116 03:44:35.169041  507257 start.go:543] Will wait 60s for crictl version
	I0116 03:44:35.169150  507257 ssh_runner.go:195] Run: which crictl
	I0116 03:44:35.173367  507257 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0116 03:44:35.224951  507257 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0116 03:44:35.225043  507257 ssh_runner.go:195] Run: crio --version
	I0116 03:44:35.275230  507257 ssh_runner.go:195] Run: crio --version
	I0116 03:44:35.329852  507257 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0116 03:44:30.981714  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:44:33.476735  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:44:35.480715  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:44:35.331327  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetIP
	I0116 03:44:35.334148  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:35.334618  507257 main.go:141] libmachine: (embed-certs-615980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:a6:40", ip: ""} in network mk-embed-certs-615980: {Iface:virbr4 ExpiryTime:2024-01-16 04:44:24 +0000 UTC Type:0 Mac:52:54:00:d4:a6:40 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-615980 Clientid:01:52:54:00:d4:a6:40}
	I0116 03:44:35.334674  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined IP address 192.168.72.159 and MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:35.335166  507257 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0116 03:44:35.341389  507257 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 03:44:35.358757  507257 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0116 03:44:35.358866  507257 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 03:44:35.407869  507257 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0116 03:44:35.407983  507257 ssh_runner.go:195] Run: which lz4
	I0116 03:44:35.412533  507257 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0116 03:44:35.417266  507257 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0116 03:44:35.417303  507257 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0116 03:44:31.715897  507889 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:44:32.215978  507889 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:44:32.716439  507889 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:44:33.215609  507889 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:44:33.715785  507889 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:44:33.738611  507889 api_server.go:72] duration metric: took 2.523281585s to wait for apiserver process to appear ...
	I0116 03:44:33.738642  507889 api_server.go:88] waiting for apiserver healthz status ...
	I0116 03:44:33.738663  507889 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8444/healthz ...
	I0116 03:44:37.601011  507889 api_server.go:279] https://192.168.50.236:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0116 03:44:37.601052  507889 api_server.go:103] status: https://192.168.50.236:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0116 03:44:37.601072  507889 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8444/healthz ...
	I0116 03:44:37.678390  507889 api_server.go:279] https://192.168.50.236:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0116 03:44:37.678428  507889 api_server.go:103] status: https://192.168.50.236:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0116 03:44:37.739725  507889 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8444/healthz ...
	I0116 03:44:37.767384  507889 api_server.go:279] https://192.168.50.236:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0116 03:44:37.767425  507889 api_server.go:103] status: https://192.168.50.236:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0116 03:44:38.238992  507889 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8444/healthz ...
	I0116 03:44:38.253946  507889 api_server.go:279] https://192.168.50.236:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0116 03:44:38.253991  507889 api_server.go:103] status: https://192.168.50.236:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0116 03:44:38.738786  507889 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8444/healthz ...
	I0116 03:44:38.749091  507889 api_server.go:279] https://192.168.50.236:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0116 03:44:38.749135  507889 api_server.go:103] status: https://192.168.50.236:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0116 03:44:39.239814  507889 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8444/healthz ...
	I0116 03:44:39.245859  507889 api_server.go:279] https://192.168.50.236:8444/healthz returned 200:
	ok
	I0116 03:44:39.259198  507889 api_server.go:141] control plane version: v1.28.4
	I0116 03:44:39.259250  507889 api_server.go:131] duration metric: took 5.520598732s to wait for apiserver health ...
	I0116 03:44:39.259265  507889 cni.go:84] Creating CNI manager for ""
	I0116 03:44:39.259277  507889 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 03:44:39.261389  507889 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0116 03:44:34.897727  507510 retry.go:31] will retry after 6.879493351s: kubelet not initialised
	I0116 03:44:37.975671  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:44:39.979781  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:44:37.524763  507257 crio.go:444] Took 2.112278 seconds to copy over tarball
	I0116 03:44:37.524843  507257 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0116 03:44:40.706515  507257 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.181629969s)
	I0116 03:44:40.706559  507257 crio.go:451] Took 3.181765 seconds to extract the tarball
	I0116 03:44:40.706574  507257 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0116 03:44:40.751207  507257 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 03:44:40.905548  507257 crio.go:496] all images are preloaded for cri-o runtime.
	I0116 03:44:40.905578  507257 cache_images.go:84] Images are preloaded, skipping loading
	I0116 03:44:40.905659  507257 ssh_runner.go:195] Run: crio config
	I0116 03:44:40.965159  507257 cni.go:84] Creating CNI manager for ""
	I0116 03:44:40.965194  507257 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 03:44:40.965220  507257 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0116 03:44:40.965263  507257 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.159 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-615980 NodeName:embed-certs-615980 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.159"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.159 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0116 03:44:40.965474  507257 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.159
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-615980"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.159
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.159"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0116 03:44:40.965578  507257 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-615980 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.159
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-615980 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0116 03:44:40.965634  507257 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0116 03:44:40.976015  507257 binaries.go:44] Found k8s binaries, skipping transfer
	I0116 03:44:40.976153  507257 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0116 03:44:40.986610  507257 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0116 03:44:41.005297  507257 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0116 03:44:41.026383  507257 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I0116 03:44:41.046554  507257 ssh_runner.go:195] Run: grep 192.168.72.159	control-plane.minikube.internal$ /etc/hosts
	I0116 03:44:41.050940  507257 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.159	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 03:44:41.064516  507257 certs.go:56] Setting up /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/embed-certs-615980 for IP: 192.168.72.159
	I0116 03:44:41.064568  507257 certs.go:190] acquiring lock for shared ca certs: {Name:mkab5aeea3e0ed69f3592dc8f418e07c6075b130 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:44:41.064749  507257 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17965-468241/.minikube/ca.key
	I0116 03:44:41.064813  507257 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17965-468241/.minikube/proxy-client-ca.key
	I0116 03:44:41.064917  507257 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/embed-certs-615980/client.key
	I0116 03:44:41.064989  507257 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/embed-certs-615980/apiserver.key.fc98a751
	I0116 03:44:41.065044  507257 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/embed-certs-615980/proxy-client.key
	I0116 03:44:41.065202  507257 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/475478.pem (1338 bytes)
	W0116 03:44:41.065241  507257 certs.go:433] ignoring /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/475478_empty.pem, impossibly tiny 0 bytes
	I0116 03:44:41.065257  507257 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca-key.pem (1679 bytes)
	I0116 03:44:41.065294  507257 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem (1078 bytes)
	I0116 03:44:41.065331  507257 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/cert.pem (1123 bytes)
	I0116 03:44:41.065374  507257 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/key.pem (1679 bytes)
	I0116 03:44:41.065432  507257 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17965-468241/.minikube/files/etc/ssl/certs/4754782.pem (1708 bytes)
	I0116 03:44:41.066147  507257 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/embed-certs-615980/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0116 03:44:41.092714  507257 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/embed-certs-615980/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0116 03:44:41.119109  507257 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/embed-certs-615980/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0116 03:44:41.147059  507257 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/embed-certs-615980/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0116 03:44:41.176357  507257 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0116 03:44:41.202082  507257 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0116 03:44:41.228263  507257 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0116 03:44:41.252892  507257 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0116 03:44:39.263119  507889 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0116 03:44:39.290175  507889 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0116 03:44:39.319009  507889 system_pods.go:43] waiting for kube-system pods to appear ...
	I0116 03:44:39.341195  507889 system_pods.go:59] 9 kube-system pods found
	I0116 03:44:39.341251  507889 system_pods.go:61] "coredns-5dd5756b68-f8shl" [18bddcd6-4305-4856-b590-e16c362768e3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0116 03:44:39.341264  507889 system_pods.go:61] "coredns-5dd5756b68-pmx8n" [9d3c9941-8938-4d4a-b6d8-9b542ca6f1ca] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0116 03:44:39.341280  507889 system_pods.go:61] "etcd-default-k8s-diff-port-434445" [a2327159-d034-4f62-a8f5-14062a41c75d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0116 03:44:39.341293  507889 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-434445" [75c5fc5e-86b1-42cf-bb5c-8a1f8b4e13db] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0116 03:44:39.341310  507889 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-434445" [2568dd4f-124d-4c4f-8651-3b89b2b54983] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0116 03:44:39.341323  507889 system_pods.go:61] "kube-proxy-dcbqg" [eba1f9bf-6aa7-40cd-b57c-745c2d0cc414] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0116 03:44:39.341335  507889 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-434445" [479952a1-8c3d-49b4-bd22-fef988e6b83d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0116 03:44:39.341353  507889 system_pods.go:61] "metrics-server-57f55c9bc5-894n2" [46e4892a-d026-4a9d-88bc-128e92848808] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:44:39.341369  507889 system_pods.go:61] "storage-provisioner" [16fd4585-3d75-40c3-a28d-4134375f4e3d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0116 03:44:39.341391  507889 system_pods.go:74] duration metric: took 22.354405ms to wait for pod list to return data ...
	I0116 03:44:39.341403  507889 node_conditions.go:102] verifying NodePressure condition ...
	I0116 03:44:39.349904  507889 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 03:44:39.349954  507889 node_conditions.go:123] node cpu capacity is 2
	I0116 03:44:39.349972  507889 node_conditions.go:105] duration metric: took 8.557095ms to run NodePressure ...
	I0116 03:44:39.350000  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:44:39.798882  507889 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0116 03:44:39.816480  507889 kubeadm.go:787] kubelet initialised
	I0116 03:44:39.816514  507889 kubeadm.go:788] duration metric: took 17.598017ms waiting for restarted kubelet to initialise ...
	I0116 03:44:39.816527  507889 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 03:44:39.834946  507889 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-f8shl" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:39.854785  507889 pod_ready.go:97] node "default-k8s-diff-port-434445" hosting pod "coredns-5dd5756b68-f8shl" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-434445" has status "Ready":"False"
	I0116 03:44:39.854832  507889 pod_ready.go:81] duration metric: took 19.846427ms waiting for pod "coredns-5dd5756b68-f8shl" in "kube-system" namespace to be "Ready" ...
	E0116 03:44:39.854846  507889 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-434445" hosting pod "coredns-5dd5756b68-f8shl" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-434445" has status "Ready":"False"
	I0116 03:44:39.854864  507889 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-pmx8n" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:39.888659  507889 pod_ready.go:97] node "default-k8s-diff-port-434445" hosting pod "coredns-5dd5756b68-pmx8n" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-434445" has status "Ready":"False"
	I0116 03:44:39.888703  507889 pod_ready.go:81] duration metric: took 33.827201ms waiting for pod "coredns-5dd5756b68-pmx8n" in "kube-system" namespace to be "Ready" ...
	E0116 03:44:39.888718  507889 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-434445" hosting pod "coredns-5dd5756b68-pmx8n" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-434445" has status "Ready":"False"
	I0116 03:44:39.888728  507889 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-434445" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:39.897638  507889 pod_ready.go:97] node "default-k8s-diff-port-434445" hosting pod "etcd-default-k8s-diff-port-434445" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-434445" has status "Ready":"False"
	I0116 03:44:39.897674  507889 pod_ready.go:81] duration metric: took 8.927103ms waiting for pod "etcd-default-k8s-diff-port-434445" in "kube-system" namespace to be "Ready" ...
	E0116 03:44:39.897693  507889 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-434445" hosting pod "etcd-default-k8s-diff-port-434445" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-434445" has status "Ready":"False"
	I0116 03:44:39.897701  507889 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-434445" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:39.919418  507889 pod_ready.go:97] node "default-k8s-diff-port-434445" hosting pod "kube-apiserver-default-k8s-diff-port-434445" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-434445" has status "Ready":"False"
	I0116 03:44:39.919465  507889 pod_ready.go:81] duration metric: took 21.753159ms waiting for pod "kube-apiserver-default-k8s-diff-port-434445" in "kube-system" namespace to be "Ready" ...
	E0116 03:44:39.919495  507889 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-434445" hosting pod "kube-apiserver-default-k8s-diff-port-434445" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-434445" has status "Ready":"False"
	I0116 03:44:39.919505  507889 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-434445" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:40.203370  507889 pod_ready.go:97] node "default-k8s-diff-port-434445" hosting pod "kube-controller-manager-default-k8s-diff-port-434445" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-434445" has status "Ready":"False"
	I0116 03:44:40.203411  507889 pod_ready.go:81] duration metric: took 283.893646ms waiting for pod "kube-controller-manager-default-k8s-diff-port-434445" in "kube-system" namespace to be "Ready" ...
	E0116 03:44:40.203428  507889 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-434445" hosting pod "kube-controller-manager-default-k8s-diff-port-434445" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-434445" has status "Ready":"False"
	I0116 03:44:40.203440  507889 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-dcbqg" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:41.417889  507889 pod_ready.go:97] node "default-k8s-diff-port-434445" hosting pod "kube-proxy-dcbqg" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-434445" has status "Ready":"False"
	I0116 03:44:41.418011  507889 pod_ready.go:81] duration metric: took 1.214559235s waiting for pod "kube-proxy-dcbqg" in "kube-system" namespace to be "Ready" ...
	E0116 03:44:41.418033  507889 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-434445" hosting pod "kube-proxy-dcbqg" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-434445" has status "Ready":"False"
	I0116 03:44:41.418043  507889 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-434445" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:41.425177  507889 pod_ready.go:97] node "default-k8s-diff-port-434445" hosting pod "kube-scheduler-default-k8s-diff-port-434445" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-434445" has status "Ready":"False"
	I0116 03:44:41.425208  507889 pod_ready.go:81] duration metric: took 7.15251ms waiting for pod "kube-scheduler-default-k8s-diff-port-434445" in "kube-system" namespace to be "Ready" ...
	E0116 03:44:41.425220  507889 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-434445" hosting pod "kube-scheduler-default-k8s-diff-port-434445" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-434445" has status "Ready":"False"
	I0116 03:44:41.425226  507889 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:41.431059  507889 pod_ready.go:97] node "default-k8s-diff-port-434445" hosting pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-434445" has status "Ready":"False"
	I0116 03:44:41.431103  507889 pod_ready.go:81] duration metric: took 5.869165ms waiting for pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace to be "Ready" ...
	E0116 03:44:41.431115  507889 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-434445" hosting pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-434445" has status "Ready":"False"
	I0116 03:44:41.431122  507889 pod_ready.go:38] duration metric: took 1.614582832s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 03:44:41.431139  507889 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0116 03:44:41.445099  507889 ops.go:34] apiserver oom_adj: -16
	I0116 03:44:41.445129  507889 kubeadm.go:640] restartCluster took 21.83447374s
	I0116 03:44:41.445141  507889 kubeadm.go:406] StartCluster complete in 21.896543184s
	I0116 03:44:41.445168  507889 settings.go:142] acquiring lock: {Name:mk3ded99c06d285b174e76622bb0741c8dd2d8fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:44:41.445265  507889 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17965-468241/kubeconfig
	I0116 03:44:41.447590  507889 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17965-468241/kubeconfig: {Name:mkac3c63bbcba9b4fa1bea3480797757d703fb9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:44:41.544520  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0116 03:44:41.544743  507889 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0116 03:44:41.544842  507889 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-434445"
	I0116 03:44:41.544858  507889 config.go:182] Loaded profile config "default-k8s-diff-port-434445": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 03:44:41.544875  507889 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-434445"
	I0116 03:44:41.544891  507889 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-434445"
	W0116 03:44:41.544899  507889 addons.go:243] addon metrics-server should already be in state true
	I0116 03:44:41.544865  507889 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-434445"
	W0116 03:44:41.544915  507889 addons.go:243] addon storage-provisioner should already be in state true
	I0116 03:44:41.544971  507889 host.go:66] Checking if "default-k8s-diff-port-434445" exists ...
	I0116 03:44:41.544973  507889 host.go:66] Checking if "default-k8s-diff-port-434445" exists ...
	I0116 03:44:41.544862  507889 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-434445"
	I0116 03:44:41.545107  507889 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-434445"
	I0116 03:44:41.545473  507889 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:44:41.545479  507889 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:44:41.545505  507889 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:44:41.545673  507889 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:44:41.562983  507889 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44387
	I0116 03:44:41.562984  507889 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35715
	I0116 03:44:41.563677  507889 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:44:41.563684  507889 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:44:41.564352  507889 main.go:141] libmachine: Using API Version  1
	I0116 03:44:41.564382  507889 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:44:41.564540  507889 main.go:141] libmachine: Using API Version  1
	I0116 03:44:41.564569  507889 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:44:41.564753  507889 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:44:41.564937  507889 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:44:41.565113  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetState
	I0116 03:44:41.565350  507889 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:44:41.565418  507889 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:44:41.569050  507889 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-434445"
	W0116 03:44:41.569091  507889 addons.go:243] addon default-storageclass should already be in state true
	I0116 03:44:41.569125  507889 host.go:66] Checking if "default-k8s-diff-port-434445" exists ...
	I0116 03:44:41.569554  507889 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:44:41.569613  507889 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:44:41.584107  507889 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33349
	I0116 03:44:41.584756  507889 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:44:41.585422  507889 main.go:141] libmachine: Using API Version  1
	I0116 03:44:41.585457  507889 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:44:41.585634  507889 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42645
	I0116 03:44:41.585856  507889 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:44:41.586123  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetState
	I0116 03:44:41.586162  507889 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:44:41.586636  507889 main.go:141] libmachine: Using API Version  1
	I0116 03:44:41.586663  507889 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:44:41.587080  507889 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:44:41.587688  507889 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:44:41.587743  507889 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:44:41.588214  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .DriverName
	I0116 03:44:41.606456  507889 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40289
	I0116 03:44:41.644090  507889 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:44:41.819945  507889 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 03:44:41.929214  507889 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:44:41.929680  507889 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:44:42.246642  507889 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0116 03:44:42.246665  507889 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0116 03:44:42.246696  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHHostname
	I0116 03:44:42.247294  507889 main.go:141] libmachine: Using API Version  1
	I0116 03:44:42.247332  507889 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:44:42.247740  507889 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:44:42.247987  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetState
	I0116 03:44:42.250254  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .DriverName
	I0116 03:44:42.250570  507889 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0116 03:44:42.250588  507889 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0116 03:44:42.250609  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHHostname
	I0116 03:44:42.251130  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:42.251863  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ea:d5", ip: ""} in network mk-default-k8s-diff-port-434445: {Iface:virbr1 ExpiryTime:2024-01-16 04:44:01 +0000 UTC Type:0 Mac:52:54:00:78:ea:d5 Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:default-k8s-diff-port-434445 Clientid:01:52:54:00:78:ea:d5}
	I0116 03:44:42.251896  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined IP address 192.168.50.236 and MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:42.252245  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHPort
	I0116 03:44:42.252473  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHKeyPath
	I0116 03:44:42.252680  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHUsername
	I0116 03:44:42.252842  507889 sshutil.go:53] new ssh client: &{IP:192.168.50.236 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/default-k8s-diff-port-434445/id_rsa Username:docker}
	I0116 03:44:42.254224  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:42.254837  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ea:d5", ip: ""} in network mk-default-k8s-diff-port-434445: {Iface:virbr1 ExpiryTime:2024-01-16 04:44:01 +0000 UTC Type:0 Mac:52:54:00:78:ea:d5 Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:default-k8s-diff-port-434445 Clientid:01:52:54:00:78:ea:d5}
	I0116 03:44:42.254872  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined IP address 192.168.50.236 and MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:42.255050  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHPort
	I0116 03:44:42.255240  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHKeyPath
	I0116 03:44:42.255422  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHUsername
	I0116 03:44:42.255585  507889 sshutil.go:53] new ssh client: &{IP:192.168.50.236 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/default-k8s-diff-port-434445/id_rsa Username:docker}
	I0116 03:44:42.264367  507889 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36555
	I0116 03:44:42.264832  507889 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:44:42.265322  507889 main.go:141] libmachine: Using API Version  1
	I0116 03:44:42.265352  507889 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:44:42.265700  507889 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:44:42.266266  507889 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:44:42.266306  507889 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:44:42.281852  507889 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32929
	I0116 03:44:42.282351  507889 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:44:42.282914  507889 main.go:141] libmachine: Using API Version  1
	I0116 03:44:42.282944  507889 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:44:42.283363  507889 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:44:42.283599  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetState
	I0116 03:44:42.285584  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .DriverName
	I0116 03:44:42.395709  507889 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0116 03:44:42.398672  507889 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0116 03:44:42.493544  507889 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0116 03:44:42.531626  507889 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0116 03:44:42.531683  507889 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0116 03:44:42.531717  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHHostname
	I0116 03:44:42.535980  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:42.536575  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ea:d5", ip: ""} in network mk-default-k8s-diff-port-434445: {Iface:virbr1 ExpiryTime:2024-01-16 04:44:01 +0000 UTC Type:0 Mac:52:54:00:78:ea:d5 Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:default-k8s-diff-port-434445 Clientid:01:52:54:00:78:ea:d5}
	I0116 03:44:42.536604  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined IP address 192.168.50.236 and MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:42.537018  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHPort
	I0116 03:44:42.537286  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHKeyPath
	I0116 03:44:42.537510  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHUsername
	I0116 03:44:42.537850  507889 sshutil.go:53] new ssh client: &{IP:192.168.50.236 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/default-k8s-diff-port-434445/id_rsa Username:docker}
	I0116 03:44:42.545910  507889 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.001352094s)
	I0116 03:44:42.545983  507889 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0116 03:44:42.713693  507889 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0116 03:44:42.713718  507889 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0116 03:44:42.752674  507889 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0116 03:44:42.752717  507889 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0116 03:44:42.790178  507889 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0116 03:44:42.790214  507889 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0116 03:44:42.825256  507889 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0116 03:44:43.010741  507889 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-434445" context rescaled to 1 replicas
	I0116 03:44:43.010801  507889 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.236 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0116 03:44:43.014031  507889 out.go:177] * Verifying Kubernetes components...
	I0116 03:44:43.016143  507889 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 03:44:44.415462  507889 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.921726194s)
	I0116 03:44:44.415532  507889 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.921908068s)
	I0116 03:44:44.415547  507889 main.go:141] libmachine: Making call to close driver server
	I0116 03:44:44.415631  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .Close
	I0116 03:44:44.415579  507889 main.go:141] libmachine: Making call to close driver server
	I0116 03:44:44.415854  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .Close
	I0116 03:44:44.416266  507889 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:44:44.416376  507889 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:44:44.416393  507889 main.go:141] libmachine: Making call to close driver server
	I0116 03:44:44.416424  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .Close
	I0116 03:44:44.416310  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | Closing plugin on server side
	I0116 03:44:44.416310  507889 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:44:44.416595  507889 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:44:44.416658  507889 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:44:44.416671  507889 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:44:44.416977  507889 main.go:141] libmachine: Making call to close driver server
	I0116 03:44:44.417014  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .Close
	I0116 03:44:44.416332  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | Closing plugin on server side
	I0116 03:44:44.417305  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | Closing plugin on server side
	I0116 03:44:44.417358  507889 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:44:44.417375  507889 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:44:44.450870  507889 main.go:141] libmachine: Making call to close driver server
	I0116 03:44:44.450908  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .Close
	I0116 03:44:44.451327  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | Closing plugin on server side
	I0116 03:44:44.451367  507889 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:44:44.451378  507889 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:44:44.496654  507889 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.671338305s)
	I0116 03:44:44.496732  507889 main.go:141] libmachine: Making call to close driver server
	I0116 03:44:44.496744  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .Close
	I0116 03:44:44.496678  507889 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.480503621s)
	I0116 03:44:44.496845  507889 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-434445" to be "Ready" ...
	I0116 03:44:44.497092  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | Closing plugin on server side
	I0116 03:44:44.497088  507889 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:44:44.497166  507889 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:44:44.497188  507889 main.go:141] libmachine: Making call to close driver server
	I0116 03:44:44.497198  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .Close
	I0116 03:44:44.497445  507889 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:44:44.497489  507889 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:44:44.497499  507889 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-434445"
	I0116 03:44:44.497502  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | Closing plugin on server side
	I0116 03:44:44.500234  507889 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0116 03:44:42.355473  507510 retry.go:31] will retry after 6.423018357s: kubelet not initialised
	I0116 03:44:42.543045  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:44:44.974520  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:44:41.280410  507257 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/files/etc/ssl/certs/4754782.pem --> /usr/share/ca-certificates/4754782.pem (1708 bytes)
	I0116 03:44:41.488388  507257 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0116 03:44:41.515741  507257 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/certs/475478.pem --> /usr/share/ca-certificates/475478.pem (1338 bytes)
	I0116 03:44:41.541744  507257 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0116 03:44:41.564056  507257 ssh_runner.go:195] Run: openssl version
	I0116 03:44:41.571197  507257 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0116 03:44:41.586430  507257 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0116 03:44:41.592334  507257 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 16 02:35 /usr/share/ca-certificates/minikubeCA.pem
	I0116 03:44:41.592405  507257 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0116 03:44:41.599013  507257 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0116 03:44:41.612793  507257 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/475478.pem && ln -fs /usr/share/ca-certificates/475478.pem /etc/ssl/certs/475478.pem"
	I0116 03:44:41.624554  507257 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/475478.pem
	I0116 03:44:41.629558  507257 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 16 02:44 /usr/share/ca-certificates/475478.pem
	I0116 03:44:41.629643  507257 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/475478.pem
	I0116 03:44:41.635518  507257 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/475478.pem /etc/ssl/certs/51391683.0"
	I0116 03:44:41.649567  507257 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4754782.pem && ln -fs /usr/share/ca-certificates/4754782.pem /etc/ssl/certs/4754782.pem"
	I0116 03:44:41.662276  507257 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4754782.pem
	I0116 03:44:41.667618  507257 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 16 02:44 /usr/share/ca-certificates/4754782.pem
	I0116 03:44:41.667699  507257 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4754782.pem
	I0116 03:44:41.678158  507257 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4754782.pem /etc/ssl/certs/3ec20f2e.0"
	I0116 03:44:41.692147  507257 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0116 03:44:41.698226  507257 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0116 03:44:41.706563  507257 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0116 03:44:41.713387  507257 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0116 03:44:41.721243  507257 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0116 03:44:41.728346  507257 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0116 03:44:41.735446  507257 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0116 03:44:41.743670  507257 kubeadm.go:404] StartCluster: {Name:embed-certs-615980 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-615980 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.159 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 03:44:41.743786  507257 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0116 03:44:41.743860  507257 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0116 03:44:41.799605  507257 cri.go:89] found id: ""
	I0116 03:44:41.799700  507257 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0116 03:44:41.812356  507257 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0116 03:44:41.812388  507257 kubeadm.go:636] restartCluster start
	I0116 03:44:41.812457  507257 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0116 03:44:41.823906  507257 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:41.825131  507257 kubeconfig.go:92] found "embed-certs-615980" server: "https://192.168.72.159:8443"
	I0116 03:44:41.827484  507257 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0116 03:44:41.838289  507257 api_server.go:166] Checking apiserver status ...
	I0116 03:44:41.838386  507257 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:41.852927  507257 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:42.338430  507257 api_server.go:166] Checking apiserver status ...
	I0116 03:44:42.338548  507257 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:42.353029  507257 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:42.838419  507257 api_server.go:166] Checking apiserver status ...
	I0116 03:44:42.838526  507257 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:42.854254  507257 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:43.338802  507257 api_server.go:166] Checking apiserver status ...
	I0116 03:44:43.338934  507257 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:43.356427  507257 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:43.839009  507257 api_server.go:166] Checking apiserver status ...
	I0116 03:44:43.839103  507257 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:43.853265  507257 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:44.338711  507257 api_server.go:166] Checking apiserver status ...
	I0116 03:44:44.338803  507257 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:44.353364  507257 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:44.838956  507257 api_server.go:166] Checking apiserver status ...
	I0116 03:44:44.839070  507257 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:44.851711  507257 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:45.339282  507257 api_server.go:166] Checking apiserver status ...
	I0116 03:44:45.339397  507257 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:45.354275  507257 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:45.838803  507257 api_server.go:166] Checking apiserver status ...
	I0116 03:44:45.838899  507257 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:45.853557  507257 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:44.501958  507889 addons.go:505] enable addons completed in 2.957229306s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0116 03:44:46.502807  507889 node_ready.go:58] node "default-k8s-diff-port-434445" has status "Ready":"False"
	I0116 03:44:48.786485  507510 retry.go:31] will retry after 18.441149821s: kubelet not initialised
	I0116 03:44:46.975660  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:44:48.981964  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:44:46.339198  507257 api_server.go:166] Checking apiserver status ...
	I0116 03:44:46.339328  507257 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:46.356092  507257 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:46.839356  507257 api_server.go:166] Checking apiserver status ...
	I0116 03:44:46.839461  507257 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:46.857070  507257 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:47.338405  507257 api_server.go:166] Checking apiserver status ...
	I0116 03:44:47.338546  507257 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:47.354976  507257 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:47.839369  507257 api_server.go:166] Checking apiserver status ...
	I0116 03:44:47.839468  507257 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:47.854465  507257 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:48.339102  507257 api_server.go:166] Checking apiserver status ...
	I0116 03:44:48.339217  507257 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:48.352361  507257 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:48.838853  507257 api_server.go:166] Checking apiserver status ...
	I0116 03:44:48.838968  507257 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:48.853271  507257 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:49.338643  507257 api_server.go:166] Checking apiserver status ...
	I0116 03:44:49.338751  507257 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:49.353674  507257 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:49.839214  507257 api_server.go:166] Checking apiserver status ...
	I0116 03:44:49.839309  507257 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:49.852699  507257 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:50.339060  507257 api_server.go:166] Checking apiserver status ...
	I0116 03:44:50.339186  507257 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:50.353143  507257 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:50.838646  507257 api_server.go:166] Checking apiserver status ...
	I0116 03:44:50.838782  507257 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:50.852767  507257 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:48.005726  507889 node_ready.go:49] node "default-k8s-diff-port-434445" has status "Ready":"True"
	I0116 03:44:48.005760  507889 node_ready.go:38] duration metric: took 3.508890685s waiting for node "default-k8s-diff-port-434445" to be "Ready" ...
	I0116 03:44:48.005775  507889 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 03:44:48.015385  507889 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-f8shl" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:48.027358  507889 pod_ready.go:92] pod "coredns-5dd5756b68-f8shl" in "kube-system" namespace has status "Ready":"True"
	I0116 03:44:48.027383  507889 pod_ready.go:81] duration metric: took 11.966322ms waiting for pod "coredns-5dd5756b68-f8shl" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:48.027397  507889 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-pmx8n" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:48.034156  507889 pod_ready.go:92] pod "coredns-5dd5756b68-pmx8n" in "kube-system" namespace has status "Ready":"True"
	I0116 03:44:48.034179  507889 pod_ready.go:81] duration metric: took 6.775784ms waiting for pod "coredns-5dd5756b68-pmx8n" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:48.034188  507889 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-434445" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:48.039933  507889 pod_ready.go:92] pod "etcd-default-k8s-diff-port-434445" in "kube-system" namespace has status "Ready":"True"
	I0116 03:44:48.039954  507889 pod_ready.go:81] duration metric: took 5.758946ms waiting for pod "etcd-default-k8s-diff-port-434445" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:48.039964  507889 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-434445" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:48.045351  507889 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-434445" in "kube-system" namespace has status "Ready":"True"
	I0116 03:44:48.045376  507889 pod_ready.go:81] duration metric: took 5.405684ms waiting for pod "kube-apiserver-default-k8s-diff-port-434445" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:48.045386  507889 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-434445" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:48.413479  507889 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-434445" in "kube-system" namespace has status "Ready":"True"
	I0116 03:44:48.413508  507889 pod_ready.go:81] duration metric: took 368.114361ms waiting for pod "kube-controller-manager-default-k8s-diff-port-434445" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:48.413522  507889 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-dcbqg" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:48.808095  507889 pod_ready.go:92] pod "kube-proxy-dcbqg" in "kube-system" namespace has status "Ready":"True"
	I0116 03:44:48.808132  507889 pod_ready.go:81] duration metric: took 394.600854ms waiting for pod "kube-proxy-dcbqg" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:48.808147  507889 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-434445" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:50.817248  507889 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-434445" in "kube-system" namespace has status "Ready":"False"
	I0116 03:44:51.474904  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:44:53.475529  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:44:55.475807  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:44:51.339105  507257 api_server.go:166] Checking apiserver status ...
	I0116 03:44:51.339225  507257 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:51.352821  507257 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:51.838856  507257 api_server.go:166] Checking apiserver status ...
	I0116 03:44:51.838985  507257 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:51.852211  507257 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:51.852258  507257 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0116 03:44:51.852271  507257 kubeadm.go:1135] stopping kube-system containers ...
	I0116 03:44:51.852289  507257 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0116 03:44:51.852360  507257 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0116 03:44:51.897049  507257 cri.go:89] found id: ""
	I0116 03:44:51.897139  507257 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0116 03:44:51.915124  507257 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0116 03:44:51.926221  507257 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0116 03:44:51.926311  507257 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0116 03:44:51.938314  507257 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0116 03:44:51.938358  507257 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:44:52.077173  507257 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:44:52.733999  507257 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:44:52.971172  507257 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:44:53.063705  507257 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:44:53.200256  507257 api_server.go:52] waiting for apiserver process to appear ...
	I0116 03:44:53.200364  507257 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:44:53.701337  507257 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:44:54.201266  507257 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:44:54.700485  507257 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:44:55.200720  507257 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:44:55.701348  507257 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:44:55.725792  507257 api_server.go:72] duration metric: took 2.52553608s to wait for apiserver process to appear ...
	I0116 03:44:55.725826  507257 api_server.go:88] waiting for apiserver healthz status ...
	I0116 03:44:55.725851  507257 api_server.go:253] Checking apiserver healthz at https://192.168.72.159:8443/healthz ...
	I0116 03:44:52.317689  507889 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-434445" in "kube-system" namespace has status "Ready":"True"
	I0116 03:44:52.317718  507889 pod_ready.go:81] duration metric: took 3.509561404s waiting for pod "kube-scheduler-default-k8s-diff-port-434445" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:52.317731  507889 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:54.326412  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:44:56.327634  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:44:57.974017  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:44:59.977499  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:44:59.850423  507257 api_server.go:279] https://192.168.72.159:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0116 03:44:59.850456  507257 api_server.go:103] status: https://192.168.72.159:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0116 03:44:59.850471  507257 api_server.go:253] Checking apiserver healthz at https://192.168.72.159:8443/healthz ...
	I0116 03:44:59.998251  507257 api_server.go:279] https://192.168.72.159:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0116 03:44:59.998310  507257 api_server.go:103] status: https://192.168.72.159:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0116 03:45:00.226594  507257 api_server.go:253] Checking apiserver healthz at https://192.168.72.159:8443/healthz ...
	I0116 03:45:00.233826  507257 api_server.go:279] https://192.168.72.159:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0116 03:45:00.233876  507257 api_server.go:103] status: https://192.168.72.159:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0116 03:45:00.726919  507257 api_server.go:253] Checking apiserver healthz at https://192.168.72.159:8443/healthz ...
	I0116 03:45:00.732711  507257 api_server.go:279] https://192.168.72.159:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0116 03:45:00.732748  507257 api_server.go:103] status: https://192.168.72.159:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0116 03:45:01.226693  507257 api_server.go:253] Checking apiserver healthz at https://192.168.72.159:8443/healthz ...
	I0116 03:45:01.232420  507257 api_server.go:279] https://192.168.72.159:8443/healthz returned 200:
	ok
	I0116 03:45:01.242029  507257 api_server.go:141] control plane version: v1.28.4
	I0116 03:45:01.242078  507257 api_server.go:131] duration metric: took 5.516243097s to wait for apiserver health ...
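The healthz probe above is anonymous, so the first responses are 403 Forbidden, then 500 while the apiserver's post-start hooks (rbac/bootstrap-roles, bootstrap-controller, and so on) finish, and finally 200. A minimal sketch of that polling loop, assuming the same https://192.168.72.159:8443/healthz endpoint from the log and skipping TLS verification because no cluster CA is loaded; minikube's real api_server.go helper differs in detail:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Anonymous probe against the apiserver's self-signed cert.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.72.159:8443/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
			if resp.StatusCode == http.StatusOK {
				return // apiserver is healthy
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("apiserver never became healthy before the deadline")
}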
	I0116 03:45:01.242092  507257 cni.go:84] Creating CNI manager for ""
	I0116 03:45:01.242101  507257 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 03:45:01.244395  507257 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0116 03:45:01.246155  507257 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0116 03:44:58.827760  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:01.327190  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:02.475858  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:04.974991  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:01.270205  507257 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
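The 457-byte file scp'd above is a bridge CNI conflist for /etc/cni/net.d. A minimal sketch of writing such a file is below; the plugin options and the 10.244.0.0/16 subnet are illustrative assumptions, not the exact contents minikube generated:

package main

import (
	"fmt"
	"os"
)

// conflist is an assumed, typical bridge + portmap CNI configuration.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": {"portMappings": true}
    }
  ]
}
`

func main() {
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		fmt.Println(err)
		return
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("wrote bridge CNI config")
}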
	I0116 03:45:01.350402  507257 system_pods.go:43] waiting for kube-system pods to appear ...
	I0116 03:45:01.384475  507257 system_pods.go:59] 8 kube-system pods found
	I0116 03:45:01.384536  507257 system_pods.go:61] "coredns-5dd5756b68-ddjkl" [fe342d2a-7d12-4b37-be29-c0d77b920964] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0116 03:45:01.384549  507257 system_pods.go:61] "etcd-embed-certs-615980" [7b7af2e1-b3bb-4c47-862b-838167453939] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0116 03:45:01.384562  507257 system_pods.go:61] "kube-apiserver-embed-certs-615980" [bb883c31-8391-467f-9b4a-affb05a56d49] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0116 03:45:01.384571  507257 system_pods.go:61] "kube-controller-manager-embed-certs-615980" [74f7c5e3-818c-4e15-b693-d4f81308bf9d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0116 03:45:01.384584  507257 system_pods.go:61] "kube-proxy-6jpr7" [e62c9202-8b4f-4fe7-8aa4-b931afd4b028] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0116 03:45:01.384602  507257 system_pods.go:61] "kube-scheduler-embed-certs-615980" [f03d5c9c-af6a-437b-92bb-7c5a46259bbd] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0116 03:45:01.384618  507257 system_pods.go:61] "metrics-server-57f55c9bc5-48gnw" [1fcb32b6-f985-428d-8f02-1198d704d8c7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:45:01.384632  507257 system_pods.go:61] "storage-provisioner" [6264adaa-89e8-4f0d-9394-d7325338a2f5] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0116 03:45:01.384642  507257 system_pods.go:74] duration metric: took 34.114711ms to wait for pod list to return data ...
	I0116 03:45:01.384656  507257 node_conditions.go:102] verifying NodePressure condition ...
	I0116 03:45:01.392555  507257 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 03:45:01.392597  507257 node_conditions.go:123] node cpu capacity is 2
	I0116 03:45:01.392614  507257 node_conditions.go:105] duration metric: took 7.946538ms to run NodePressure ...
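The NodePressure step above reads node capacity (ephemeral storage, CPU) and verifies that no pressure condition is reported. A minimal client-go sketch of the same check; using client-go and the default kubeconfig path is an assumption here, since minikube has its own node_conditions helper and kubeconfig plumbing:

package main

import (
	"context"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		// Capacity fields corresponding to the two log lines above.
		fmt.Printf("%s cpu=%s ephemeral-storage=%s\n",
			n.Name, n.Status.Capacity.Cpu(), n.Status.Capacity.StorageEphemeral())
		for _, c := range n.Status.Conditions {
			// Any non-Ready condition reported True (MemoryPressure,
			// DiskPressure, PIDPressure) indicates node pressure.
			if c.Type != v1.NodeReady && c.Status == v1.ConditionTrue {
				fmt.Printf("  pressure condition %s is True: %s\n", c.Type, c.Message)
			}
		}
	}
}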
	I0116 03:45:01.392644  507257 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:45:01.788178  507257 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0116 03:45:01.795913  507257 kubeadm.go:787] kubelet initialised
	I0116 03:45:01.795945  507257 kubeadm.go:788] duration metric: took 7.737644ms waiting for restarted kubelet to initialise ...
	I0116 03:45:01.795955  507257 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 03:45:01.806433  507257 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-ddjkl" in "kube-system" namespace to be "Ready" ...
	I0116 03:45:03.815645  507257 pod_ready.go:102] pod "coredns-5dd5756b68-ddjkl" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:05.821193  507257 pod_ready.go:92] pod "coredns-5dd5756b68-ddjkl" in "kube-system" namespace has status "Ready":"True"
	I0116 03:45:05.821231  507257 pod_ready.go:81] duration metric: took 4.014760393s waiting for pod "coredns-5dd5756b68-ddjkl" in "kube-system" namespace to be "Ready" ...
	I0116 03:45:05.821245  507257 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-615980" in "kube-system" namespace to be "Ready" ...
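The pod_ready waits above poll each system-critical pod's Ready condition until it turns True or the 4m0s budget runs out; the metrics-server pods in these runs never do, which is what the repeated "Ready":"False" lines below show. A minimal client-go sketch of one such wait, using the coredns pod name from the log; the kubeconfig path and polling interval are assumptions, not minikube's pod_ready.go:

package main

import (
	"context"
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(p *v1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == v1.PodReady {
			return c.Status == v1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		// Pod name and namespace taken from the log above.
		p, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
			"coredns-5dd5756b68-ddjkl", metav1.GetOptions{})
		if err == nil && podReady(p) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to be Ready")
}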
	I0116 03:45:03.825695  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:05.826742  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:07.234109  507510 kubeadm.go:787] kubelet initialised
	I0116 03:45:07.234137  507510 kubeadm.go:788] duration metric: took 45.657540747s waiting for restarted kubelet to initialise ...
	I0116 03:45:07.234145  507510 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 03:45:07.239858  507510 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-7q4wc" in "kube-system" namespace to be "Ready" ...
	I0116 03:45:07.247210  507510 pod_ready.go:92] pod "coredns-5644d7b6d9-7q4wc" in "kube-system" namespace has status "Ready":"True"
	I0116 03:45:07.247237  507510 pod_ready.go:81] duration metric: took 7.336988ms waiting for pod "coredns-5644d7b6d9-7q4wc" in "kube-system" namespace to be "Ready" ...
	I0116 03:45:07.247249  507510 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-q2gxq" in "kube-system" namespace to be "Ready" ...
	I0116 03:45:07.252865  507510 pod_ready.go:92] pod "coredns-5644d7b6d9-q2gxq" in "kube-system" namespace has status "Ready":"True"
	I0116 03:45:07.252900  507510 pod_ready.go:81] duration metric: took 5.642204ms waiting for pod "coredns-5644d7b6d9-q2gxq" in "kube-system" namespace to be "Ready" ...
	I0116 03:45:07.252925  507510 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-696770" in "kube-system" namespace to be "Ready" ...
	I0116 03:45:07.259169  507510 pod_ready.go:92] pod "etcd-old-k8s-version-696770" in "kube-system" namespace has status "Ready":"True"
	I0116 03:45:07.259193  507510 pod_ready.go:81] duration metric: took 6.260142ms waiting for pod "etcd-old-k8s-version-696770" in "kube-system" namespace to be "Ready" ...
	I0116 03:45:07.259202  507510 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-696770" in "kube-system" namespace to be "Ready" ...
	I0116 03:45:07.264591  507510 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-696770" in "kube-system" namespace has status "Ready":"True"
	I0116 03:45:07.264622  507510 pod_ready.go:81] duration metric: took 5.411866ms waiting for pod "kube-apiserver-old-k8s-version-696770" in "kube-system" namespace to be "Ready" ...
	I0116 03:45:07.264635  507510 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-696770" in "kube-system" namespace to be "Ready" ...
	I0116 03:45:07.632057  507510 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-696770" in "kube-system" namespace has status "Ready":"True"
	I0116 03:45:07.632093  507510 pod_ready.go:81] duration metric: took 367.447202ms waiting for pod "kube-controller-manager-old-k8s-version-696770" in "kube-system" namespace to be "Ready" ...
	I0116 03:45:07.632110  507510 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-9pfdj" in "kube-system" namespace to be "Ready" ...
	I0116 03:45:08.033002  507510 pod_ready.go:92] pod "kube-proxy-9pfdj" in "kube-system" namespace has status "Ready":"True"
	I0116 03:45:08.033028  507510 pod_ready.go:81] duration metric: took 400.910907ms waiting for pod "kube-proxy-9pfdj" in "kube-system" namespace to be "Ready" ...
	I0116 03:45:08.033038  507510 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-696770" in "kube-system" namespace to be "Ready" ...
	I0116 03:45:08.433134  507510 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-696770" in "kube-system" namespace has status "Ready":"True"
	I0116 03:45:08.433165  507510 pod_ready.go:81] duration metric: took 400.1203ms waiting for pod "kube-scheduler-old-k8s-version-696770" in "kube-system" namespace to be "Ready" ...
	I0116 03:45:08.433180  507510 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace to be "Ready" ...
	I0116 03:45:07.485372  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:09.979593  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:07.830703  507257 pod_ready.go:102] pod "etcd-embed-certs-615980" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:10.328466  507257 pod_ready.go:102] pod "etcd-embed-certs-615980" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:08.325925  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:10.825155  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:10.442598  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:12.941713  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:12.478975  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:14.480154  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:12.329199  507257 pod_ready.go:102] pod "etcd-embed-certs-615980" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:13.830177  507257 pod_ready.go:92] pod "etcd-embed-certs-615980" in "kube-system" namespace has status "Ready":"True"
	I0116 03:45:13.830207  507257 pod_ready.go:81] duration metric: took 8.008954008s waiting for pod "etcd-embed-certs-615980" in "kube-system" namespace to be "Ready" ...
	I0116 03:45:13.830217  507257 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-615980" in "kube-system" namespace to be "Ready" ...
	I0116 03:45:13.837420  507257 pod_ready.go:92] pod "kube-apiserver-embed-certs-615980" in "kube-system" namespace has status "Ready":"True"
	I0116 03:45:13.837448  507257 pod_ready.go:81] duration metric: took 7.22323ms waiting for pod "kube-apiserver-embed-certs-615980" in "kube-system" namespace to be "Ready" ...
	I0116 03:45:13.837461  507257 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-615980" in "kube-system" namespace to be "Ready" ...
	I0116 03:45:13.845996  507257 pod_ready.go:92] pod "kube-controller-manager-embed-certs-615980" in "kube-system" namespace has status "Ready":"True"
	I0116 03:45:13.846029  507257 pod_ready.go:81] duration metric: took 8.558317ms waiting for pod "kube-controller-manager-embed-certs-615980" in "kube-system" namespace to be "Ready" ...
	I0116 03:45:13.846040  507257 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-6jpr7" in "kube-system" namespace to be "Ready" ...
	I0116 03:45:13.852645  507257 pod_ready.go:92] pod "kube-proxy-6jpr7" in "kube-system" namespace has status "Ready":"True"
	I0116 03:45:13.852674  507257 pod_ready.go:81] duration metric: took 6.627181ms waiting for pod "kube-proxy-6jpr7" in "kube-system" namespace to be "Ready" ...
	I0116 03:45:13.852683  507257 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-615980" in "kube-system" namespace to be "Ready" ...
	I0116 03:45:13.858818  507257 pod_ready.go:92] pod "kube-scheduler-embed-certs-615980" in "kube-system" namespace has status "Ready":"True"
	I0116 03:45:13.858844  507257 pod_ready.go:81] duration metric: took 6.154319ms waiting for pod "kube-scheduler-embed-certs-615980" in "kube-system" namespace to be "Ready" ...
	I0116 03:45:13.858853  507257 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace to be "Ready" ...
	I0116 03:45:15.867133  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:12.826463  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:14.826507  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:14.942079  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:17.442566  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:16.976095  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:19.477899  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:17.868381  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:20.367064  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:17.326184  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:19.328194  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:19.942113  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:21.942853  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:24.441140  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:21.975337  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:24.474400  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:22.368008  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:24.866716  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:21.825428  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:23.825828  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:25.829356  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:26.441756  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:28.443869  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:26.475939  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:28.476308  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:26.866760  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:29.367575  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:28.326756  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:30.825813  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:30.942631  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:33.440480  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:30.975870  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:33.475828  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:31.866401  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:33.867719  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:33.325388  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:35.325485  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:35.939804  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:37.940883  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:35.974504  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:37.975857  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:39.977413  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:36.367513  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:38.865702  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:40.866834  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:37.325804  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:39.326635  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:40.440287  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:42.440838  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:44.441037  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:42.475940  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:44.981122  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:42.867673  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:45.368285  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:41.825982  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:43.826700  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:45.828002  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:46.443286  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:48.941625  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:47.474621  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:49.475149  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:47.867135  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:49.867865  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:48.326035  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:50.327538  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:50.943718  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:53.443986  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:51.977212  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:54.477161  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:52.368444  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:54.375089  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:52.826163  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:55.327160  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:55.940561  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:57.942988  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:56.975470  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:58.975829  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:56.867648  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:59.367479  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:57.826140  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:59.826286  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:00.440963  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:02.941202  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:00.979308  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:03.474099  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:05.478535  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:01.868806  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:04.368227  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:01.826702  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:04.325060  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:06.326882  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:05.441837  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:07.444944  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:07.975344  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:09.975486  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:06.868137  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:09.367752  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:08.329967  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:10.826182  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:09.940745  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:11.942989  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:14.441331  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:11.977171  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:13.977835  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:11.866817  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:13.867951  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:13.327232  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:15.826862  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:16.442525  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:18.442754  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:16.475367  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:18.475903  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:16.367830  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:18.368100  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:20.866302  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:18.326376  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:20.827236  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:20.940998  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:22.941332  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:20.980371  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:23.476451  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:22.868945  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:25.366857  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:23.326576  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:25.826000  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:25.442029  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:27.941061  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:25.974860  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:27.975178  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:29.978092  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:27.370097  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:29.869827  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:28.326735  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:30.826672  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:30.442579  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:32.941784  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:32.475984  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:34.973934  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:31.870772  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:34.367380  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:32.827910  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:34.828185  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:35.440418  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:37.441206  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:39.441254  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:36.974076  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:38.975169  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:36.867231  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:39.366005  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:37.327553  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:39.826218  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:41.941046  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:43.941530  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:40.976023  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:43.478194  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:41.367293  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:43.867097  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:45.867843  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:42.325426  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:44.325723  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:46.326155  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:46.441175  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:48.940677  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:45.974937  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:47.975141  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:50.474687  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:47.868006  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:49.868890  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:48.326634  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:50.326914  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:50.941220  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:53.440868  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:52.475138  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:54.475546  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:52.365917  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:54.366514  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:52.826279  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:55.324177  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:55.441130  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:57.943093  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:56.976380  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:59.478090  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:56.368894  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:58.868051  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:57.326296  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:59.326416  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:01.327894  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:00.440504  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:02.441176  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:04.442171  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:01.975498  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:03.978490  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:01.369736  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:03.871663  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:03.825943  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:05.828215  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:06.443721  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:08.940212  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:06.475354  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:08.975707  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:06.366468  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:08.366998  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:10.368019  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:08.326243  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:10.824873  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:10.942042  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:13.440495  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:11.475551  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:13.475904  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:12.867030  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:14.872409  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:12.826040  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:15.325658  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:15.941844  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:18.440574  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:15.975125  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:17.977326  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:20.474897  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:17.367390  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:19.369090  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:17.325860  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:19.829310  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:20.940407  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:22.941824  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:22.475218  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:24.477773  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:21.866953  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:23.867055  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:22.326660  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:24.327689  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:25.441214  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:27.442253  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:26.975120  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:29.477805  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:26.367295  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:28.867376  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:26.826666  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:29.327606  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:29.940650  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:31.941021  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:34.443144  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:31.978544  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:34.475301  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:31.367770  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:33.867084  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:35.870968  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:31.826565  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:34.326677  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:36.941363  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:38.942121  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:36.974797  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:38.975027  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:38.368025  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:40.866714  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:36.828347  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:39.327130  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:41.441555  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:43.442806  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:40.977172  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:43.476163  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:43.367966  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:45.867460  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:41.826087  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:43.826389  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:46.326497  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:45.941267  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:48.443875  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:45.974452  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:47.977610  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:50.475536  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:48.367053  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:50.368023  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:48.824924  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:50.825835  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:50.941125  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:52.941644  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:52.975726  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:55.476453  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:52.866871  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:55.367951  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:52.826166  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:54.826434  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:55.442084  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:57.442829  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:57.974382  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:59.974448  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:57.867742  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:00.366490  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:57.325608  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:59.825525  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:59.939515  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:01.941648  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:03.942290  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:01.975159  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:03.977002  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:02.366764  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:04.366818  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:01.831740  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:04.326341  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:06.440494  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:08.940336  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:06.475364  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:08.482783  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:06.367160  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:08.867294  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:06.825331  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:08.826594  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:11.324828  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:10.942696  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:13.441805  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:10.974798  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:12.975009  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:14.976154  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:11.366189  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:13.369852  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:15.867536  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:13.327353  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:15.825738  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:15.941304  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:17.942206  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:17.474204  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:19.475630  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:19.974269  507339 pod_ready.go:81] duration metric: took 4m0.007375913s waiting for pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace to be "Ready" ...
	E0116 03:48:19.974299  507339 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0116 03:48:19.974310  507339 pod_ready.go:38] duration metric: took 4m6.26032663s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 03:48:19.974365  507339 api_server.go:52] waiting for apiserver process to appear ...
	I0116 03:48:19.974431  507339 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0116 03:48:19.974529  507339 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0116 03:48:20.042853  507339 cri.go:89] found id: "de79f87bc28449077936386f603cc5df933f63455bc96cf6ff7e7c6e4bd32bc4"
	I0116 03:48:20.042886  507339 cri.go:89] found id: ""
	I0116 03:48:20.042896  507339 logs.go:284] 1 containers: [de79f87bc28449077936386f603cc5df933f63455bc96cf6ff7e7c6e4bd32bc4]
	I0116 03:48:20.042961  507339 ssh_runner.go:195] Run: which crictl
	I0116 03:48:20.049795  507339 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0116 03:48:20.049884  507339 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0116 03:48:20.092507  507339 cri.go:89] found id: "01aaf51cd40b9137f2395404bb15b7372927f81dc8a4e884600dc8cba9bbeb8e"
	I0116 03:48:20.092541  507339 cri.go:89] found id: ""
	I0116 03:48:20.092551  507339 logs.go:284] 1 containers: [01aaf51cd40b9137f2395404bb15b7372927f81dc8a4e884600dc8cba9bbeb8e]
	I0116 03:48:20.092619  507339 ssh_runner.go:195] Run: which crictl
	I0116 03:48:20.097091  507339 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0116 03:48:20.097176  507339 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0116 03:48:20.139182  507339 cri.go:89] found id: "c13ef036a1014a6bbc5c179a0889fb6ff615988713589da58240b7c637adf687"
	I0116 03:48:20.139218  507339 cri.go:89] found id: ""
	I0116 03:48:20.139229  507339 logs.go:284] 1 containers: [c13ef036a1014a6bbc5c179a0889fb6ff615988713589da58240b7c637adf687]
	I0116 03:48:20.139297  507339 ssh_runner.go:195] Run: which crictl
	I0116 03:48:20.145129  507339 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0116 03:48:20.145210  507339 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0116 03:48:20.191055  507339 cri.go:89] found id: "33381edd7dded1b5ed158738429b4a7f1f03a6171f2af0832bb5a9864c950725"
	I0116 03:48:20.191090  507339 cri.go:89] found id: ""
	I0116 03:48:20.191098  507339 logs.go:284] 1 containers: [33381edd7dded1b5ed158738429b4a7f1f03a6171f2af0832bb5a9864c950725]
	I0116 03:48:20.191161  507339 ssh_runner.go:195] Run: which crictl
	I0116 03:48:20.195688  507339 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0116 03:48:20.195765  507339 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0116 03:48:20.242718  507339 cri.go:89] found id: "eba2964f029ac61d9cfb59dc548ae05c832f9b23713f51924291fbfc5de985ed"
	I0116 03:48:20.242746  507339 cri.go:89] found id: ""
	I0116 03:48:20.242754  507339 logs.go:284] 1 containers: [eba2964f029ac61d9cfb59dc548ae05c832f9b23713f51924291fbfc5de985ed]
	I0116 03:48:20.242819  507339 ssh_runner.go:195] Run: which crictl
	I0116 03:48:20.247312  507339 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0116 03:48:20.247399  507339 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0116 03:48:20.287981  507339 cri.go:89] found id: "802d4c55aa04370ff079f09dfe4670f5dac1142ca6db880afa17be75f1d20a76"
	I0116 03:48:20.288009  507339 cri.go:89] found id: ""
	I0116 03:48:20.288018  507339 logs.go:284] 1 containers: [802d4c55aa04370ff079f09dfe4670f5dac1142ca6db880afa17be75f1d20a76]
	I0116 03:48:20.288097  507339 ssh_runner.go:195] Run: which crictl
	I0116 03:48:20.292370  507339 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0116 03:48:20.292449  507339 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0116 03:48:20.335778  507339 cri.go:89] found id: ""
	I0116 03:48:20.335816  507339 logs.go:284] 0 containers: []
	W0116 03:48:20.335828  507339 logs.go:286] No container was found matching "kindnet"
	I0116 03:48:20.335838  507339 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0116 03:48:20.335906  507339 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0116 03:48:20.381698  507339 cri.go:89] found id: "b7164c1b7732cf6d54642cb9ffc8d9f16696adc0e428c3fa16cf914420d73e1d"
	I0116 03:48:20.381722  507339 cri.go:89] found id: "59754e94eb3cf3fecfedb9077fbad2ecc2618c351fa9bfa3faed0d780fdd9ecb"
	I0116 03:48:20.381727  507339 cri.go:89] found id: ""
	I0116 03:48:20.381734  507339 logs.go:284] 2 containers: [b7164c1b7732cf6d54642cb9ffc8d9f16696adc0e428c3fa16cf914420d73e1d 59754e94eb3cf3fecfedb9077fbad2ecc2618c351fa9bfa3faed0d780fdd9ecb]
	I0116 03:48:20.381790  507339 ssh_runner.go:195] Run: which crictl
	I0116 03:48:20.386880  507339 ssh_runner.go:195] Run: which crictl
	I0116 03:48:20.391292  507339 logs.go:123] Gathering logs for describe nodes ...
	I0116 03:48:20.391324  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0116 03:48:20.528154  507339 logs.go:123] Gathering logs for storage-provisioner [b7164c1b7732cf6d54642cb9ffc8d9f16696adc0e428c3fa16cf914420d73e1d] ...
	I0116 03:48:20.528197  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b7164c1b7732cf6d54642cb9ffc8d9f16696adc0e428c3fa16cf914420d73e1d"
	I0116 03:48:20.586645  507339 logs.go:123] Gathering logs for CRI-O ...
	I0116 03:48:20.586680  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0116 03:48:18.367415  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:20.867678  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:18.325849  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:20.326141  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:20.442138  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:22.442180  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:21.096109  507339 logs.go:123] Gathering logs for kubelet ...
	I0116 03:48:21.096155  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0116 03:48:21.154531  507339 logs.go:123] Gathering logs for coredns [c13ef036a1014a6bbc5c179a0889fb6ff615988713589da58240b7c637adf687] ...
	I0116 03:48:21.154577  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c13ef036a1014a6bbc5c179a0889fb6ff615988713589da58240b7c637adf687"
	I0116 03:48:21.203708  507339 logs.go:123] Gathering logs for dmesg ...
	I0116 03:48:21.203760  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0116 03:48:21.219320  507339 logs.go:123] Gathering logs for etcd [01aaf51cd40b9137f2395404bb15b7372927f81dc8a4e884600dc8cba9bbeb8e] ...
	I0116 03:48:21.219362  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 01aaf51cd40b9137f2395404bb15b7372927f81dc8a4e884600dc8cba9bbeb8e"
	I0116 03:48:21.271759  507339 logs.go:123] Gathering logs for kube-proxy [eba2964f029ac61d9cfb59dc548ae05c832f9b23713f51924291fbfc5de985ed] ...
	I0116 03:48:21.271812  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eba2964f029ac61d9cfb59dc548ae05c832f9b23713f51924291fbfc5de985ed"
	I0116 03:48:21.316786  507339 logs.go:123] Gathering logs for kube-controller-manager [802d4c55aa04370ff079f09dfe4670f5dac1142ca6db880afa17be75f1d20a76] ...
	I0116 03:48:21.316825  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 802d4c55aa04370ff079f09dfe4670f5dac1142ca6db880afa17be75f1d20a76"
	I0116 03:48:21.383743  507339 logs.go:123] Gathering logs for storage-provisioner [59754e94eb3cf3fecfedb9077fbad2ecc2618c351fa9bfa3faed0d780fdd9ecb] ...
	I0116 03:48:21.383783  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 59754e94eb3cf3fecfedb9077fbad2ecc2618c351fa9bfa3faed0d780fdd9ecb"
	I0116 03:48:21.422893  507339 logs.go:123] Gathering logs for kube-apiserver [de79f87bc28449077936386f603cc5df933f63455bc96cf6ff7e7c6e4bd32bc4] ...
	I0116 03:48:21.422926  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de79f87bc28449077936386f603cc5df933f63455bc96cf6ff7e7c6e4bd32bc4"
	I0116 03:48:21.473295  507339 logs.go:123] Gathering logs for kube-scheduler [33381edd7dded1b5ed158738429b4a7f1f03a6171f2af0832bb5a9864c950725] ...
	I0116 03:48:21.473332  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33381edd7dded1b5ed158738429b4a7f1f03a6171f2af0832bb5a9864c950725"
	I0116 03:48:21.527066  507339 logs.go:123] Gathering logs for container status ...
	I0116 03:48:21.527110  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0116 03:48:24.085743  507339 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:48:24.105359  507339 api_server.go:72] duration metric: took 4m17.107229414s to wait for apiserver process to appear ...
	I0116 03:48:24.105395  507339 api_server.go:88] waiting for apiserver healthz status ...
	I0116 03:48:24.105450  507339 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0116 03:48:24.105567  507339 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0116 03:48:24.154626  507339 cri.go:89] found id: "de79f87bc28449077936386f603cc5df933f63455bc96cf6ff7e7c6e4bd32bc4"
	I0116 03:48:24.154659  507339 cri.go:89] found id: ""
	I0116 03:48:24.154668  507339 logs.go:284] 1 containers: [de79f87bc28449077936386f603cc5df933f63455bc96cf6ff7e7c6e4bd32bc4]
	I0116 03:48:24.154720  507339 ssh_runner.go:195] Run: which crictl
	I0116 03:48:24.159657  507339 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0116 03:48:24.159735  507339 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0116 03:48:24.202635  507339 cri.go:89] found id: "01aaf51cd40b9137f2395404bb15b7372927f81dc8a4e884600dc8cba9bbeb8e"
	I0116 03:48:24.202663  507339 cri.go:89] found id: ""
	I0116 03:48:24.202671  507339 logs.go:284] 1 containers: [01aaf51cd40b9137f2395404bb15b7372927f81dc8a4e884600dc8cba9bbeb8e]
	I0116 03:48:24.202725  507339 ssh_runner.go:195] Run: which crictl
	I0116 03:48:24.207503  507339 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0116 03:48:24.207578  507339 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0116 03:48:24.253893  507339 cri.go:89] found id: "c13ef036a1014a6bbc5c179a0889fb6ff615988713589da58240b7c637adf687"
	I0116 03:48:24.253934  507339 cri.go:89] found id: ""
	I0116 03:48:24.253945  507339 logs.go:284] 1 containers: [c13ef036a1014a6bbc5c179a0889fb6ff615988713589da58240b7c637adf687]
	I0116 03:48:24.254016  507339 ssh_runner.go:195] Run: which crictl
	I0116 03:48:24.258649  507339 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0116 03:48:24.258733  507339 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0116 03:48:24.306636  507339 cri.go:89] found id: "33381edd7dded1b5ed158738429b4a7f1f03a6171f2af0832bb5a9864c950725"
	I0116 03:48:24.306662  507339 cri.go:89] found id: ""
	I0116 03:48:24.306670  507339 logs.go:284] 1 containers: [33381edd7dded1b5ed158738429b4a7f1f03a6171f2af0832bb5a9864c950725]
	I0116 03:48:24.306721  507339 ssh_runner.go:195] Run: which crictl
	I0116 03:48:24.311270  507339 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0116 03:48:24.311357  507339 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0116 03:48:24.354635  507339 cri.go:89] found id: "eba2964f029ac61d9cfb59dc548ae05c832f9b23713f51924291fbfc5de985ed"
	I0116 03:48:24.354671  507339 cri.go:89] found id: ""
	I0116 03:48:24.354683  507339 logs.go:284] 1 containers: [eba2964f029ac61d9cfb59dc548ae05c832f9b23713f51924291fbfc5de985ed]
	I0116 03:48:24.354756  507339 ssh_runner.go:195] Run: which crictl
	I0116 03:48:24.359806  507339 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0116 03:48:24.359889  507339 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0116 03:48:24.418188  507339 cri.go:89] found id: "802d4c55aa04370ff079f09dfe4670f5dac1142ca6db880afa17be75f1d20a76"
	I0116 03:48:24.418239  507339 cri.go:89] found id: ""
	I0116 03:48:24.418251  507339 logs.go:284] 1 containers: [802d4c55aa04370ff079f09dfe4670f5dac1142ca6db880afa17be75f1d20a76]
	I0116 03:48:24.418330  507339 ssh_runner.go:195] Run: which crictl
	I0116 03:48:24.422943  507339 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0116 03:48:24.423030  507339 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0116 03:48:24.467349  507339 cri.go:89] found id: ""
	I0116 03:48:24.467383  507339 logs.go:284] 0 containers: []
	W0116 03:48:24.467394  507339 logs.go:286] No container was found matching "kindnet"
	I0116 03:48:24.467403  507339 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0116 03:48:24.467466  507339 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0116 03:48:24.517490  507339 cri.go:89] found id: "b7164c1b7732cf6d54642cb9ffc8d9f16696adc0e428c3fa16cf914420d73e1d"
	I0116 03:48:24.517525  507339 cri.go:89] found id: "59754e94eb3cf3fecfedb9077fbad2ecc2618c351fa9bfa3faed0d780fdd9ecb"
	I0116 03:48:24.517539  507339 cri.go:89] found id: ""
	I0116 03:48:24.517548  507339 logs.go:284] 2 containers: [b7164c1b7732cf6d54642cb9ffc8d9f16696adc0e428c3fa16cf914420d73e1d 59754e94eb3cf3fecfedb9077fbad2ecc2618c351fa9bfa3faed0d780fdd9ecb]
	I0116 03:48:24.517619  507339 ssh_runner.go:195] Run: which crictl
	I0116 03:48:24.521952  507339 ssh_runner.go:195] Run: which crictl
	I0116 03:48:24.526246  507339 logs.go:123] Gathering logs for kube-apiserver [de79f87bc28449077936386f603cc5df933f63455bc96cf6ff7e7c6e4bd32bc4] ...
	I0116 03:48:24.526277  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de79f87bc28449077936386f603cc5df933f63455bc96cf6ff7e7c6e4bd32bc4"
	I0116 03:48:24.583067  507339 logs.go:123] Gathering logs for kube-proxy [eba2964f029ac61d9cfb59dc548ae05c832f9b23713f51924291fbfc5de985ed] ...
	I0116 03:48:24.583108  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eba2964f029ac61d9cfb59dc548ae05c832f9b23713f51924291fbfc5de985ed"
	I0116 03:48:24.631278  507339 logs.go:123] Gathering logs for CRI-O ...
	I0116 03:48:24.631312  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0116 03:48:25.099279  507339 logs.go:123] Gathering logs for describe nodes ...
	I0116 03:48:25.099330  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0116 03:48:25.241388  507339 logs.go:123] Gathering logs for etcd [01aaf51cd40b9137f2395404bb15b7372927f81dc8a4e884600dc8cba9bbeb8e] ...
	I0116 03:48:25.241433  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 01aaf51cd40b9137f2395404bb15b7372927f81dc8a4e884600dc8cba9bbeb8e"
	I0116 03:48:25.298748  507339 logs.go:123] Gathering logs for storage-provisioner [59754e94eb3cf3fecfedb9077fbad2ecc2618c351fa9bfa3faed0d780fdd9ecb] ...
	I0116 03:48:25.298787  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 59754e94eb3cf3fecfedb9077fbad2ecc2618c351fa9bfa3faed0d780fdd9ecb"
	I0116 03:48:25.338169  507339 logs.go:123] Gathering logs for kubelet ...
	I0116 03:48:25.338204  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0116 03:48:25.396275  507339 logs.go:123] Gathering logs for coredns [c13ef036a1014a6bbc5c179a0889fb6ff615988713589da58240b7c637adf687] ...
	I0116 03:48:25.396320  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c13ef036a1014a6bbc5c179a0889fb6ff615988713589da58240b7c637adf687"
	I0116 03:48:25.448028  507339 logs.go:123] Gathering logs for storage-provisioner [b7164c1b7732cf6d54642cb9ffc8d9f16696adc0e428c3fa16cf914420d73e1d] ...
	I0116 03:48:25.448087  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b7164c1b7732cf6d54642cb9ffc8d9f16696adc0e428c3fa16cf914420d73e1d"
	I0116 03:48:25.492640  507339 logs.go:123] Gathering logs for container status ...
	I0116 03:48:25.492673  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0116 03:48:25.541478  507339 logs.go:123] Gathering logs for dmesg ...
	I0116 03:48:25.541572  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0116 03:48:25.557537  507339 logs.go:123] Gathering logs for kube-scheduler [33381edd7dded1b5ed158738429b4a7f1f03a6171f2af0832bb5a9864c950725] ...
	I0116 03:48:25.557569  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33381edd7dded1b5ed158738429b4a7f1f03a6171f2af0832bb5a9864c950725"
	I0116 03:48:25.599921  507339 logs.go:123] Gathering logs for kube-controller-manager [802d4c55aa04370ff079f09dfe4670f5dac1142ca6db880afa17be75f1d20a76] ...
	I0116 03:48:25.599956  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 802d4c55aa04370ff079f09dfe4670f5dac1142ca6db880afa17be75f1d20a76"
	I0116 03:48:23.368308  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:25.368495  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:22.825098  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:24.827094  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:24.942708  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:27.441008  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:29.452010  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:28.158281  507339 api_server.go:253] Checking apiserver healthz at https://192.168.39.103:8443/healthz ...
	I0116 03:48:28.165500  507339 api_server.go:279] https://192.168.39.103:8443/healthz returned 200:
	ok
	I0116 03:48:28.166907  507339 api_server.go:141] control plane version: v1.29.0-rc.2
	I0116 03:48:28.166933  507339 api_server.go:131] duration metric: took 4.061531357s to wait for apiserver health ...
	I0116 03:48:28.166943  507339 system_pods.go:43] waiting for kube-system pods to appear ...
	I0116 03:48:28.166996  507339 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0116 03:48:28.167056  507339 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0116 03:48:28.209247  507339 cri.go:89] found id: "de79f87bc28449077936386f603cc5df933f63455bc96cf6ff7e7c6e4bd32bc4"
	I0116 03:48:28.209282  507339 cri.go:89] found id: ""
	I0116 03:48:28.209295  507339 logs.go:284] 1 containers: [de79f87bc28449077936386f603cc5df933f63455bc96cf6ff7e7c6e4bd32bc4]
	I0116 03:48:28.209361  507339 ssh_runner.go:195] Run: which crictl
	I0116 03:48:28.214044  507339 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0116 03:48:28.214126  507339 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0116 03:48:28.263791  507339 cri.go:89] found id: "01aaf51cd40b9137f2395404bb15b7372927f81dc8a4e884600dc8cba9bbeb8e"
	I0116 03:48:28.263817  507339 cri.go:89] found id: ""
	I0116 03:48:28.263825  507339 logs.go:284] 1 containers: [01aaf51cd40b9137f2395404bb15b7372927f81dc8a4e884600dc8cba9bbeb8e]
	I0116 03:48:28.263889  507339 ssh_runner.go:195] Run: which crictl
	I0116 03:48:28.268803  507339 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0116 03:48:28.268893  507339 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0116 03:48:28.311035  507339 cri.go:89] found id: "c13ef036a1014a6bbc5c179a0889fb6ff615988713589da58240b7c637adf687"
	I0116 03:48:28.311062  507339 cri.go:89] found id: ""
	I0116 03:48:28.311070  507339 logs.go:284] 1 containers: [c13ef036a1014a6bbc5c179a0889fb6ff615988713589da58240b7c637adf687]
	I0116 03:48:28.311132  507339 ssh_runner.go:195] Run: which crictl
	I0116 03:48:28.315791  507339 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0116 03:48:28.315871  507339 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0116 03:48:28.366917  507339 cri.go:89] found id: "33381edd7dded1b5ed158738429b4a7f1f03a6171f2af0832bb5a9864c950725"
	I0116 03:48:28.366947  507339 cri.go:89] found id: ""
	I0116 03:48:28.366957  507339 logs.go:284] 1 containers: [33381edd7dded1b5ed158738429b4a7f1f03a6171f2af0832bb5a9864c950725]
	I0116 03:48:28.367028  507339 ssh_runner.go:195] Run: which crictl
	I0116 03:48:28.372648  507339 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0116 03:48:28.372723  507339 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0116 03:48:28.415530  507339 cri.go:89] found id: "eba2964f029ac61d9cfb59dc548ae05c832f9b23713f51924291fbfc5de985ed"
	I0116 03:48:28.415566  507339 cri.go:89] found id: ""
	I0116 03:48:28.415577  507339 logs.go:284] 1 containers: [eba2964f029ac61d9cfb59dc548ae05c832f9b23713f51924291fbfc5de985ed]
	I0116 03:48:28.415669  507339 ssh_runner.go:195] Run: which crictl
	I0116 03:48:28.420784  507339 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0116 03:48:28.420865  507339 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0116 03:48:28.474238  507339 cri.go:89] found id: "802d4c55aa04370ff079f09dfe4670f5dac1142ca6db880afa17be75f1d20a76"
	I0116 03:48:28.474262  507339 cri.go:89] found id: ""
	I0116 03:48:28.474270  507339 logs.go:284] 1 containers: [802d4c55aa04370ff079f09dfe4670f5dac1142ca6db880afa17be75f1d20a76]
	I0116 03:48:28.474335  507339 ssh_runner.go:195] Run: which crictl
	I0116 03:48:28.479547  507339 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0116 03:48:28.479637  507339 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0116 03:48:28.526403  507339 cri.go:89] found id: ""
	I0116 03:48:28.526436  507339 logs.go:284] 0 containers: []
	W0116 03:48:28.526455  507339 logs.go:286] No container was found matching "kindnet"
	I0116 03:48:28.526466  507339 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0116 03:48:28.526535  507339 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0116 03:48:28.572958  507339 cri.go:89] found id: "b7164c1b7732cf6d54642cb9ffc8d9f16696adc0e428c3fa16cf914420d73e1d"
	I0116 03:48:28.572988  507339 cri.go:89] found id: "59754e94eb3cf3fecfedb9077fbad2ecc2618c351fa9bfa3faed0d780fdd9ecb"
	I0116 03:48:28.572994  507339 cri.go:89] found id: ""
	I0116 03:48:28.573002  507339 logs.go:284] 2 containers: [b7164c1b7732cf6d54642cb9ffc8d9f16696adc0e428c3fa16cf914420d73e1d 59754e94eb3cf3fecfedb9077fbad2ecc2618c351fa9bfa3faed0d780fdd9ecb]
	I0116 03:48:28.573064  507339 ssh_runner.go:195] Run: which crictl
	I0116 03:48:28.579388  507339 ssh_runner.go:195] Run: which crictl
	I0116 03:48:28.585318  507339 logs.go:123] Gathering logs for kube-apiserver [de79f87bc28449077936386f603cc5df933f63455bc96cf6ff7e7c6e4bd32bc4] ...
	I0116 03:48:28.585355  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de79f87bc28449077936386f603cc5df933f63455bc96cf6ff7e7c6e4bd32bc4"
	I0116 03:48:28.640376  507339 logs.go:123] Gathering logs for kube-controller-manager [802d4c55aa04370ff079f09dfe4670f5dac1142ca6db880afa17be75f1d20a76] ...
	I0116 03:48:28.640419  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 802d4c55aa04370ff079f09dfe4670f5dac1142ca6db880afa17be75f1d20a76"
	I0116 03:48:28.701292  507339 logs.go:123] Gathering logs for storage-provisioner [59754e94eb3cf3fecfedb9077fbad2ecc2618c351fa9bfa3faed0d780fdd9ecb] ...
	I0116 03:48:28.701332  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 59754e94eb3cf3fecfedb9077fbad2ecc2618c351fa9bfa3faed0d780fdd9ecb"
	I0116 03:48:28.744571  507339 logs.go:123] Gathering logs for container status ...
	I0116 03:48:28.744605  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0116 03:48:28.794905  507339 logs.go:123] Gathering logs for kubelet ...
	I0116 03:48:28.794942  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0116 03:48:28.847687  507339 logs.go:123] Gathering logs for dmesg ...
	I0116 03:48:28.847736  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0116 03:48:28.861641  507339 logs.go:123] Gathering logs for describe nodes ...
	I0116 03:48:28.861690  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0116 03:48:29.036673  507339 logs.go:123] Gathering logs for kube-proxy [eba2964f029ac61d9cfb59dc548ae05c832f9b23713f51924291fbfc5de985ed] ...
	I0116 03:48:29.036709  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eba2964f029ac61d9cfb59dc548ae05c832f9b23713f51924291fbfc5de985ed"
	I0116 03:48:29.084792  507339 logs.go:123] Gathering logs for CRI-O ...
	I0116 03:48:29.084823  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0116 03:48:29.449656  507339 logs.go:123] Gathering logs for etcd [01aaf51cd40b9137f2395404bb15b7372927f81dc8a4e884600dc8cba9bbeb8e] ...
	I0116 03:48:29.449707  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 01aaf51cd40b9137f2395404bb15b7372927f81dc8a4e884600dc8cba9bbeb8e"
	I0116 03:48:29.502412  507339 logs.go:123] Gathering logs for storage-provisioner [b7164c1b7732cf6d54642cb9ffc8d9f16696adc0e428c3fa16cf914420d73e1d] ...
	I0116 03:48:29.502460  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b7164c1b7732cf6d54642cb9ffc8d9f16696adc0e428c3fa16cf914420d73e1d"
	I0116 03:48:29.546471  507339 logs.go:123] Gathering logs for coredns [c13ef036a1014a6bbc5c179a0889fb6ff615988713589da58240b7c637adf687] ...
	I0116 03:48:29.546520  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c13ef036a1014a6bbc5c179a0889fb6ff615988713589da58240b7c637adf687"
	I0116 03:48:29.594282  507339 logs.go:123] Gathering logs for kube-scheduler [33381edd7dded1b5ed158738429b4a7f1f03a6171f2af0832bb5a9864c950725] ...
	I0116 03:48:29.594329  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33381edd7dded1b5ed158738429b4a7f1f03a6171f2af0832bb5a9864c950725"
	I0116 03:48:27.867485  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:29.868504  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:27.324987  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:29.325330  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:31.329373  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:32.146165  507339 system_pods.go:59] 8 kube-system pods found
	I0116 03:48:32.146209  507339 system_pods.go:61] "coredns-76f75df574-lr95b" [15dc0b11-f7ec-4729-bbfa-79b9649fbad6] Running
	I0116 03:48:32.146218  507339 system_pods.go:61] "etcd-no-preload-666547" [f98dcc57-5869-4a35-b443-2e6c9beddd88] Running
	I0116 03:48:32.146225  507339 system_pods.go:61] "kube-apiserver-no-preload-666547" [3aceae9d-224a-4e8c-a02d-ad4199d8d558] Running
	I0116 03:48:32.146232  507339 system_pods.go:61] "kube-controller-manager-no-preload-666547" [af39c89c-3b41-4d6f-b075-16de04a3ecc0] Running
	I0116 03:48:32.146238  507339 system_pods.go:61] "kube-proxy-dcmrn" [1e91c96f-cbc5-424d-a09e-06e34bf7a2e2] Running
	I0116 03:48:32.146244  507339 system_pods.go:61] "kube-scheduler-no-preload-666547" [0f2aa713-aebc-4858-a348-32169021235e] Running
	I0116 03:48:32.146253  507339 system_pods.go:61] "metrics-server-57f55c9bc5-78vfj" [dbd2d3b2-ec0f-4253-8549-7c4299522c37] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:48:32.146261  507339 system_pods.go:61] "storage-provisioner" [f4e1ba45-217d-41d5-b583-2f60044879bc] Running
	I0116 03:48:32.146272  507339 system_pods.go:74] duration metric: took 3.979321091s to wait for pod list to return data ...
	I0116 03:48:32.146286  507339 default_sa.go:34] waiting for default service account to be created ...
	I0116 03:48:32.149674  507339 default_sa.go:45] found service account: "default"
	I0116 03:48:32.149702  507339 default_sa.go:55] duration metric: took 3.408362ms for default service account to be created ...
	I0116 03:48:32.149710  507339 system_pods.go:116] waiting for k8s-apps to be running ...
	I0116 03:48:32.160459  507339 system_pods.go:86] 8 kube-system pods found
	I0116 03:48:32.160495  507339 system_pods.go:89] "coredns-76f75df574-lr95b" [15dc0b11-f7ec-4729-bbfa-79b9649fbad6] Running
	I0116 03:48:32.160503  507339 system_pods.go:89] "etcd-no-preload-666547" [f98dcc57-5869-4a35-b443-2e6c9beddd88] Running
	I0116 03:48:32.160510  507339 system_pods.go:89] "kube-apiserver-no-preload-666547" [3aceae9d-224a-4e8c-a02d-ad4199d8d558] Running
	I0116 03:48:32.160518  507339 system_pods.go:89] "kube-controller-manager-no-preload-666547" [af39c89c-3b41-4d6f-b075-16de04a3ecc0] Running
	I0116 03:48:32.160524  507339 system_pods.go:89] "kube-proxy-dcmrn" [1e91c96f-cbc5-424d-a09e-06e34bf7a2e2] Running
	I0116 03:48:32.160529  507339 system_pods.go:89] "kube-scheduler-no-preload-666547" [0f2aa713-aebc-4858-a348-32169021235e] Running
	I0116 03:48:32.160540  507339 system_pods.go:89] "metrics-server-57f55c9bc5-78vfj" [dbd2d3b2-ec0f-4253-8549-7c4299522c37] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:48:32.160548  507339 system_pods.go:89] "storage-provisioner" [f4e1ba45-217d-41d5-b583-2f60044879bc] Running
	I0116 03:48:32.160560  507339 system_pods.go:126] duration metric: took 10.843124ms to wait for k8s-apps to be running ...
	I0116 03:48:32.160569  507339 system_svc.go:44] waiting for kubelet service to be running ....
	I0116 03:48:32.160629  507339 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 03:48:32.179349  507339 system_svc.go:56] duration metric: took 18.767357ms WaitForService to wait for kubelet.
	I0116 03:48:32.179391  507339 kubeadm.go:581] duration metric: took 4m25.181271548s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0116 03:48:32.179426  507339 node_conditions.go:102] verifying NodePressure condition ...
	I0116 03:48:32.185135  507339 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 03:48:32.185165  507339 node_conditions.go:123] node cpu capacity is 2
	I0116 03:48:32.185198  507339 node_conditions.go:105] duration metric: took 5.766084ms to run NodePressure ...
	I0116 03:48:32.185219  507339 start.go:228] waiting for startup goroutines ...
	I0116 03:48:32.185228  507339 start.go:233] waiting for cluster config update ...
	I0116 03:48:32.185269  507339 start.go:242] writing updated cluster config ...
	I0116 03:48:32.185860  507339 ssh_runner.go:195] Run: rm -f paused
	I0116 03:48:32.243812  507339 start.go:600] kubectl: 1.29.0, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0116 03:48:32.246056  507339 out.go:177] * Done! kubectl is now configured to use "no-preload-666547" cluster and "default" namespace by default
	I0116 03:48:31.940664  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:33.941163  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:31.868778  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:34.367292  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:33.825761  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:35.829060  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:36.440459  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:38.440778  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:36.367672  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:38.867024  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:40.867193  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:38.325077  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:40.326947  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:40.440990  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:42.942197  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:43.365931  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:45.367057  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:42.826200  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:44.827292  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:45.441601  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:47.443035  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:47.367959  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:49.867083  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:47.326224  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:49.326339  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:49.940592  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:51.942424  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:54.440478  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:51.868254  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:54.368867  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:51.825317  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:52.325756  507889 pod_ready.go:81] duration metric: took 4m0.008011182s waiting for pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace to be "Ready" ...
	E0116 03:48:52.325782  507889 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0116 03:48:52.325790  507889 pod_ready.go:38] duration metric: took 4m4.320002841s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 03:48:52.325804  507889 api_server.go:52] waiting for apiserver process to appear ...
	I0116 03:48:52.325855  507889 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0116 03:48:52.325905  507889 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0116 03:48:52.394600  507889 cri.go:89] found id: "f9861ff0fbab72660648bfcb49b82810bada99f73a55cb0aa359ba114825b8f3"
	I0116 03:48:52.394624  507889 cri.go:89] found id: ""
	I0116 03:48:52.394632  507889 logs.go:284] 1 containers: [f9861ff0fbab72660648bfcb49b82810bada99f73a55cb0aa359ba114825b8f3]
	I0116 03:48:52.394716  507889 ssh_runner.go:195] Run: which crictl
	I0116 03:48:52.400137  507889 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0116 03:48:52.400232  507889 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0116 03:48:52.444453  507889 cri.go:89] found id: "e2758ac4468b10765b35629ffc54228655bc18f79a654cc03e91929ebdc3cf90"
	I0116 03:48:52.444485  507889 cri.go:89] found id: ""
	I0116 03:48:52.444495  507889 logs.go:284] 1 containers: [e2758ac4468b10765b35629ffc54228655bc18f79a654cc03e91929ebdc3cf90]
	I0116 03:48:52.444557  507889 ssh_runner.go:195] Run: which crictl
	I0116 03:48:52.449850  507889 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0116 03:48:52.450002  507889 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0116 03:48:52.499160  507889 cri.go:89] found id: "a07ae23e6e9e3c589264d0be7095896549a4b71fa2d0b06805a3b1b3bea16f6a"
	I0116 03:48:52.499204  507889 cri.go:89] found id: ""
	I0116 03:48:52.499216  507889 logs.go:284] 1 containers: [a07ae23e6e9e3c589264d0be7095896549a4b71fa2d0b06805a3b1b3bea16f6a]
	I0116 03:48:52.499286  507889 ssh_runner.go:195] Run: which crictl
	I0116 03:48:52.504257  507889 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0116 03:48:52.504357  507889 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0116 03:48:52.563747  507889 cri.go:89] found id: "e60387e0e2800d67c99eb3566f31ad675c0db6c22379fc28ee9a447af5f5023c"
	I0116 03:48:52.563782  507889 cri.go:89] found id: ""
	I0116 03:48:52.563790  507889 logs.go:284] 1 containers: [e60387e0e2800d67c99eb3566f31ad675c0db6c22379fc28ee9a447af5f5023c]
	I0116 03:48:52.563860  507889 ssh_runner.go:195] Run: which crictl
	I0116 03:48:52.568676  507889 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0116 03:48:52.568771  507889 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0116 03:48:52.617090  507889 cri.go:89] found id: "44f71a7069827d97c7326cbd36638ec36ff39cbbe0a1e4e61e7c010a38d2e123"
	I0116 03:48:52.617136  507889 cri.go:89] found id: ""
	I0116 03:48:52.617149  507889 logs.go:284] 1 containers: [44f71a7069827d97c7326cbd36638ec36ff39cbbe0a1e4e61e7c010a38d2e123]
	I0116 03:48:52.617222  507889 ssh_runner.go:195] Run: which crictl
	I0116 03:48:52.622121  507889 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0116 03:48:52.622224  507889 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0116 03:48:52.685004  507889 cri.go:89] found id: "1438a3832328a03b94c3d408a2e332c0fb0adab915a9d4f26994e84c0509fac9"
	I0116 03:48:52.685033  507889 cri.go:89] found id: ""
	I0116 03:48:52.685043  507889 logs.go:284] 1 containers: [1438a3832328a03b94c3d408a2e332c0fb0adab915a9d4f26994e84c0509fac9]
	I0116 03:48:52.685113  507889 ssh_runner.go:195] Run: which crictl
	I0116 03:48:52.689837  507889 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0116 03:48:52.689913  507889 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0116 03:48:52.730008  507889 cri.go:89] found id: ""
	I0116 03:48:52.730034  507889 logs.go:284] 0 containers: []
	W0116 03:48:52.730044  507889 logs.go:286] No container was found matching "kindnet"
	I0116 03:48:52.730051  507889 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0116 03:48:52.730120  507889 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0116 03:48:52.780523  507889 cri.go:89] found id: "33ba3a03d878abc86a194defd715bb61a0066e49063e3823002bd3ec421da138"
	I0116 03:48:52.780554  507889 cri.go:89] found id: "a4b27881ef90cea325e8f306fac88a76052eec15a693c291288664b8f4ebcc2a"
	I0116 03:48:52.780562  507889 cri.go:89] found id: ""
	I0116 03:48:52.780571  507889 logs.go:284] 2 containers: [33ba3a03d878abc86a194defd715bb61a0066e49063e3823002bd3ec421da138 a4b27881ef90cea325e8f306fac88a76052eec15a693c291288664b8f4ebcc2a]
	I0116 03:48:52.780641  507889 ssh_runner.go:195] Run: which crictl
	I0116 03:48:52.787305  507889 ssh_runner.go:195] Run: which crictl
	I0116 03:48:52.791352  507889 logs.go:123] Gathering logs for kubelet ...
	I0116 03:48:52.791383  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0116 03:48:52.859099  507889 logs.go:123] Gathering logs for etcd [e2758ac4468b10765b35629ffc54228655bc18f79a654cc03e91929ebdc3cf90] ...
	I0116 03:48:52.859152  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e2758ac4468b10765b35629ffc54228655bc18f79a654cc03e91929ebdc3cf90"
	I0116 03:48:52.912806  507889 logs.go:123] Gathering logs for coredns [a07ae23e6e9e3c589264d0be7095896549a4b71fa2d0b06805a3b1b3bea16f6a] ...
	I0116 03:48:52.912852  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a07ae23e6e9e3c589264d0be7095896549a4b71fa2d0b06805a3b1b3bea16f6a"
	I0116 03:48:52.960880  507889 logs.go:123] Gathering logs for kube-controller-manager [1438a3832328a03b94c3d408a2e332c0fb0adab915a9d4f26994e84c0509fac9] ...
	I0116 03:48:52.960919  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1438a3832328a03b94c3d408a2e332c0fb0adab915a9d4f26994e84c0509fac9"
	I0116 03:48:53.023064  507889 logs.go:123] Gathering logs for CRI-O ...
	I0116 03:48:53.023110  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0116 03:48:53.524890  507889 logs.go:123] Gathering logs for kube-apiserver [f9861ff0fbab72660648bfcb49b82810bada99f73a55cb0aa359ba114825b8f3] ...
	I0116 03:48:53.524934  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f9861ff0fbab72660648bfcb49b82810bada99f73a55cb0aa359ba114825b8f3"
	I0116 03:48:53.587550  507889 logs.go:123] Gathering logs for kube-scheduler [e60387e0e2800d67c99eb3566f31ad675c0db6c22379fc28ee9a447af5f5023c] ...
	I0116 03:48:53.587594  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e60387e0e2800d67c99eb3566f31ad675c0db6c22379fc28ee9a447af5f5023c"
	I0116 03:48:53.627986  507889 logs.go:123] Gathering logs for kube-proxy [44f71a7069827d97c7326cbd36638ec36ff39cbbe0a1e4e61e7c010a38d2e123] ...
	I0116 03:48:53.628029  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44f71a7069827d97c7326cbd36638ec36ff39cbbe0a1e4e61e7c010a38d2e123"
	I0116 03:48:53.671704  507889 logs.go:123] Gathering logs for dmesg ...
	I0116 03:48:53.671739  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0116 03:48:53.686333  507889 logs.go:123] Gathering logs for describe nodes ...
	I0116 03:48:53.686370  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0116 03:48:53.855391  507889 logs.go:123] Gathering logs for storage-provisioner [33ba3a03d878abc86a194defd715bb61a0066e49063e3823002bd3ec421da138] ...
	I0116 03:48:53.855435  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33ba3a03d878abc86a194defd715bb61a0066e49063e3823002bd3ec421da138"
	I0116 03:48:53.906028  507889 logs.go:123] Gathering logs for storage-provisioner [a4b27881ef90cea325e8f306fac88a76052eec15a693c291288664b8f4ebcc2a] ...
	I0116 03:48:53.906064  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a4b27881ef90cea325e8f306fac88a76052eec15a693c291288664b8f4ebcc2a"
	I0116 03:48:53.945386  507889 logs.go:123] Gathering logs for container status ...
	I0116 03:48:53.945419  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0116 03:48:56.498685  507889 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:48:56.516768  507889 api_server.go:72] duration metric: took 4m13.505914609s to wait for apiserver process to appear ...
	I0116 03:48:56.516797  507889 api_server.go:88] waiting for apiserver healthz status ...
	I0116 03:48:56.516836  507889 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0116 03:48:56.516907  507889 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0116 03:48:56.563236  507889 cri.go:89] found id: "f9861ff0fbab72660648bfcb49b82810bada99f73a55cb0aa359ba114825b8f3"
	I0116 03:48:56.563272  507889 cri.go:89] found id: ""
	I0116 03:48:56.563283  507889 logs.go:284] 1 containers: [f9861ff0fbab72660648bfcb49b82810bada99f73a55cb0aa359ba114825b8f3]
	I0116 03:48:56.563356  507889 ssh_runner.go:195] Run: which crictl
	I0116 03:48:56.568012  507889 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0116 03:48:56.568188  507889 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0116 03:48:56.443226  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:58.940353  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:56.868597  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:59.366906  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:56.613095  507889 cri.go:89] found id: "e2758ac4468b10765b35629ffc54228655bc18f79a654cc03e91929ebdc3cf90"
	I0116 03:48:56.613120  507889 cri.go:89] found id: ""
	I0116 03:48:56.613129  507889 logs.go:284] 1 containers: [e2758ac4468b10765b35629ffc54228655bc18f79a654cc03e91929ebdc3cf90]
	I0116 03:48:56.613190  507889 ssh_runner.go:195] Run: which crictl
	I0116 03:48:56.618736  507889 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0116 03:48:56.618827  507889 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0116 03:48:56.672773  507889 cri.go:89] found id: "a07ae23e6e9e3c589264d0be7095896549a4b71fa2d0b06805a3b1b3bea16f6a"
	I0116 03:48:56.672796  507889 cri.go:89] found id: ""
	I0116 03:48:56.672805  507889 logs.go:284] 1 containers: [a07ae23e6e9e3c589264d0be7095896549a4b71fa2d0b06805a3b1b3bea16f6a]
	I0116 03:48:56.672855  507889 ssh_runner.go:195] Run: which crictl
	I0116 03:48:56.679218  507889 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0116 03:48:56.679293  507889 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0116 03:48:56.724517  507889 cri.go:89] found id: "e60387e0e2800d67c99eb3566f31ad675c0db6c22379fc28ee9a447af5f5023c"
	I0116 03:48:56.724547  507889 cri.go:89] found id: ""
	I0116 03:48:56.724555  507889 logs.go:284] 1 containers: [e60387e0e2800d67c99eb3566f31ad675c0db6c22379fc28ee9a447af5f5023c]
	I0116 03:48:56.724622  507889 ssh_runner.go:195] Run: which crictl
	I0116 03:48:56.730061  507889 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0116 03:48:56.730146  507889 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0116 03:48:56.775380  507889 cri.go:89] found id: "44f71a7069827d97c7326cbd36638ec36ff39cbbe0a1e4e61e7c010a38d2e123"
	I0116 03:48:56.775413  507889 cri.go:89] found id: ""
	I0116 03:48:56.775423  507889 logs.go:284] 1 containers: [44f71a7069827d97c7326cbd36638ec36ff39cbbe0a1e4e61e7c010a38d2e123]
	I0116 03:48:56.775494  507889 ssh_runner.go:195] Run: which crictl
	I0116 03:48:56.781085  507889 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0116 03:48:56.781183  507889 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0116 03:48:56.830030  507889 cri.go:89] found id: "1438a3832328a03b94c3d408a2e332c0fb0adab915a9d4f26994e84c0509fac9"
	I0116 03:48:56.830067  507889 cri.go:89] found id: ""
	I0116 03:48:56.830076  507889 logs.go:284] 1 containers: [1438a3832328a03b94c3d408a2e332c0fb0adab915a9d4f26994e84c0509fac9]
	I0116 03:48:56.830163  507889 ssh_runner.go:195] Run: which crictl
	I0116 03:48:56.834956  507889 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0116 03:48:56.835035  507889 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0116 03:48:56.882972  507889 cri.go:89] found id: ""
	I0116 03:48:56.883001  507889 logs.go:284] 0 containers: []
	W0116 03:48:56.883013  507889 logs.go:286] No container was found matching "kindnet"
	I0116 03:48:56.883022  507889 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0116 03:48:56.883095  507889 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0116 03:48:56.925520  507889 cri.go:89] found id: "33ba3a03d878abc86a194defd715bb61a0066e49063e3823002bd3ec421da138"
	I0116 03:48:56.925553  507889 cri.go:89] found id: "a4b27881ef90cea325e8f306fac88a76052eec15a693c291288664b8f4ebcc2a"
	I0116 03:48:56.925560  507889 cri.go:89] found id: ""
	I0116 03:48:56.925574  507889 logs.go:284] 2 containers: [33ba3a03d878abc86a194defd715bb61a0066e49063e3823002bd3ec421da138 a4b27881ef90cea325e8f306fac88a76052eec15a693c291288664b8f4ebcc2a]
	I0116 03:48:56.925656  507889 ssh_runner.go:195] Run: which crictl
	I0116 03:48:56.931331  507889 ssh_runner.go:195] Run: which crictl
	I0116 03:48:56.936492  507889 logs.go:123] Gathering logs for storage-provisioner [a4b27881ef90cea325e8f306fac88a76052eec15a693c291288664b8f4ebcc2a] ...
	I0116 03:48:56.936527  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a4b27881ef90cea325e8f306fac88a76052eec15a693c291288664b8f4ebcc2a"
	I0116 03:48:56.981819  507889 logs.go:123] Gathering logs for kubelet ...
	I0116 03:48:56.981851  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0116 03:48:57.045678  507889 logs.go:123] Gathering logs for dmesg ...
	I0116 03:48:57.045723  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0116 03:48:57.060832  507889 logs.go:123] Gathering logs for etcd [e2758ac4468b10765b35629ffc54228655bc18f79a654cc03e91929ebdc3cf90] ...
	I0116 03:48:57.060872  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e2758ac4468b10765b35629ffc54228655bc18f79a654cc03e91929ebdc3cf90"
	I0116 03:48:57.123644  507889 logs.go:123] Gathering logs for kube-scheduler [e60387e0e2800d67c99eb3566f31ad675c0db6c22379fc28ee9a447af5f5023c] ...
	I0116 03:48:57.123695  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e60387e0e2800d67c99eb3566f31ad675c0db6c22379fc28ee9a447af5f5023c"
	I0116 03:48:57.170173  507889 logs.go:123] Gathering logs for kube-proxy [44f71a7069827d97c7326cbd36638ec36ff39cbbe0a1e4e61e7c010a38d2e123] ...
	I0116 03:48:57.170216  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44f71a7069827d97c7326cbd36638ec36ff39cbbe0a1e4e61e7c010a38d2e123"
	I0116 03:48:57.215434  507889 logs.go:123] Gathering logs for describe nodes ...
	I0116 03:48:57.215470  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0116 03:48:57.370036  507889 logs.go:123] Gathering logs for kube-controller-manager [1438a3832328a03b94c3d408a2e332c0fb0adab915a9d4f26994e84c0509fac9] ...
	I0116 03:48:57.370081  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1438a3832328a03b94c3d408a2e332c0fb0adab915a9d4f26994e84c0509fac9"
	I0116 03:48:57.432988  507889 logs.go:123] Gathering logs for container status ...
	I0116 03:48:57.433048  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0116 03:48:57.485239  507889 logs.go:123] Gathering logs for kube-apiserver [f9861ff0fbab72660648bfcb49b82810bada99f73a55cb0aa359ba114825b8f3] ...
	I0116 03:48:57.485284  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f9861ff0fbab72660648bfcb49b82810bada99f73a55cb0aa359ba114825b8f3"
	I0116 03:48:57.547192  507889 logs.go:123] Gathering logs for coredns [a07ae23e6e9e3c589264d0be7095896549a4b71fa2d0b06805a3b1b3bea16f6a] ...
	I0116 03:48:57.547237  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a07ae23e6e9e3c589264d0be7095896549a4b71fa2d0b06805a3b1b3bea16f6a"
	I0116 03:48:57.598025  507889 logs.go:123] Gathering logs for storage-provisioner [33ba3a03d878abc86a194defd715bb61a0066e49063e3823002bd3ec421da138] ...
	I0116 03:48:57.598085  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33ba3a03d878abc86a194defd715bb61a0066e49063e3823002bd3ec421da138"
	I0116 03:48:57.644234  507889 logs.go:123] Gathering logs for CRI-O ...
	I0116 03:48:57.644271  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0116 03:49:00.562219  507889 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8444/healthz ...
	I0116 03:49:00.568196  507889 api_server.go:279] https://192.168.50.236:8444/healthz returned 200:
	ok
	I0116 03:49:00.571612  507889 api_server.go:141] control plane version: v1.28.4
	I0116 03:49:00.571655  507889 api_server.go:131] duration metric: took 4.0548511s to wait for apiserver health ...
	I0116 03:49:00.571668  507889 system_pods.go:43] waiting for kube-system pods to appear ...
	I0116 03:49:00.571701  507889 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0116 03:49:00.571774  507889 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0116 03:49:00.623308  507889 cri.go:89] found id: "f9861ff0fbab72660648bfcb49b82810bada99f73a55cb0aa359ba114825b8f3"
	I0116 03:49:00.623344  507889 cri.go:89] found id: ""
	I0116 03:49:00.623355  507889 logs.go:284] 1 containers: [f9861ff0fbab72660648bfcb49b82810bada99f73a55cb0aa359ba114825b8f3]
	I0116 03:49:00.623418  507889 ssh_runner.go:195] Run: which crictl
	I0116 03:49:00.630287  507889 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0116 03:49:00.630381  507889 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0116 03:49:00.673225  507889 cri.go:89] found id: "e2758ac4468b10765b35629ffc54228655bc18f79a654cc03e91929ebdc3cf90"
	I0116 03:49:00.673265  507889 cri.go:89] found id: ""
	I0116 03:49:00.673276  507889 logs.go:284] 1 containers: [e2758ac4468b10765b35629ffc54228655bc18f79a654cc03e91929ebdc3cf90]
	I0116 03:49:00.673334  507889 ssh_runner.go:195] Run: which crictl
	I0116 03:49:00.678677  507889 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0116 03:49:00.678768  507889 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0116 03:49:00.723055  507889 cri.go:89] found id: "a07ae23e6e9e3c589264d0be7095896549a4b71fa2d0b06805a3b1b3bea16f6a"
	I0116 03:49:00.723081  507889 cri.go:89] found id: ""
	I0116 03:49:00.723089  507889 logs.go:284] 1 containers: [a07ae23e6e9e3c589264d0be7095896549a4b71fa2d0b06805a3b1b3bea16f6a]
	I0116 03:49:00.723148  507889 ssh_runner.go:195] Run: which crictl
	I0116 03:49:00.727931  507889 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0116 03:49:00.728053  507889 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0116 03:49:00.777602  507889 cri.go:89] found id: "e60387e0e2800d67c99eb3566f31ad675c0db6c22379fc28ee9a447af5f5023c"
	I0116 03:49:00.777639  507889 cri.go:89] found id: ""
	I0116 03:49:00.777651  507889 logs.go:284] 1 containers: [e60387e0e2800d67c99eb3566f31ad675c0db6c22379fc28ee9a447af5f5023c]
	I0116 03:49:00.777723  507889 ssh_runner.go:195] Run: which crictl
	I0116 03:49:00.787121  507889 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0116 03:49:00.787206  507889 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0116 03:49:00.835268  507889 cri.go:89] found id: "44f71a7069827d97c7326cbd36638ec36ff39cbbe0a1e4e61e7c010a38d2e123"
	I0116 03:49:00.835300  507889 cri.go:89] found id: ""
	I0116 03:49:00.835310  507889 logs.go:284] 1 containers: [44f71a7069827d97c7326cbd36638ec36ff39cbbe0a1e4e61e7c010a38d2e123]
	I0116 03:49:00.835378  507889 ssh_runner.go:195] Run: which crictl
	I0116 03:49:00.842204  507889 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0116 03:49:00.842299  507889 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0116 03:49:00.889511  507889 cri.go:89] found id: "1438a3832328a03b94c3d408a2e332c0fb0adab915a9d4f26994e84c0509fac9"
	I0116 03:49:00.889541  507889 cri.go:89] found id: ""
	I0116 03:49:00.889551  507889 logs.go:284] 1 containers: [1438a3832328a03b94c3d408a2e332c0fb0adab915a9d4f26994e84c0509fac9]
	I0116 03:49:00.889620  507889 ssh_runner.go:195] Run: which crictl
	I0116 03:49:00.894964  507889 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0116 03:49:00.895059  507889 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0116 03:49:00.937187  507889 cri.go:89] found id: ""
	I0116 03:49:00.937221  507889 logs.go:284] 0 containers: []
	W0116 03:49:00.937237  507889 logs.go:286] No container was found matching "kindnet"
	I0116 03:49:00.937246  507889 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0116 03:49:00.937313  507889 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0116 03:49:00.977711  507889 cri.go:89] found id: "33ba3a03d878abc86a194defd715bb61a0066e49063e3823002bd3ec421da138"
	I0116 03:49:00.977740  507889 cri.go:89] found id: "a4b27881ef90cea325e8f306fac88a76052eec15a693c291288664b8f4ebcc2a"
	I0116 03:49:00.977748  507889 cri.go:89] found id: ""
	I0116 03:49:00.977756  507889 logs.go:284] 2 containers: [33ba3a03d878abc86a194defd715bb61a0066e49063e3823002bd3ec421da138 a4b27881ef90cea325e8f306fac88a76052eec15a693c291288664b8f4ebcc2a]
	I0116 03:49:00.977834  507889 ssh_runner.go:195] Run: which crictl
	I0116 03:49:00.982886  507889 ssh_runner.go:195] Run: which crictl
	I0116 03:49:00.988008  507889 logs.go:123] Gathering logs for describe nodes ...
	I0116 03:49:00.988061  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0116 03:49:01.115755  507889 logs.go:123] Gathering logs for dmesg ...
	I0116 03:49:01.115791  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0116 03:49:01.131706  507889 logs.go:123] Gathering logs for etcd [e2758ac4468b10765b35629ffc54228655bc18f79a654cc03e91929ebdc3cf90] ...
	I0116 03:49:01.131748  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e2758ac4468b10765b35629ffc54228655bc18f79a654cc03e91929ebdc3cf90"
	I0116 03:49:01.186279  507889 logs.go:123] Gathering logs for kube-scheduler [e60387e0e2800d67c99eb3566f31ad675c0db6c22379fc28ee9a447af5f5023c] ...
	I0116 03:49:01.186324  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e60387e0e2800d67c99eb3566f31ad675c0db6c22379fc28ee9a447af5f5023c"
	I0116 03:49:01.231057  507889 logs.go:123] Gathering logs for kube-controller-manager [1438a3832328a03b94c3d408a2e332c0fb0adab915a9d4f26994e84c0509fac9] ...
	I0116 03:49:01.231100  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1438a3832328a03b94c3d408a2e332c0fb0adab915a9d4f26994e84c0509fac9"
	I0116 03:49:01.307541  507889 logs.go:123] Gathering logs for storage-provisioner [33ba3a03d878abc86a194defd715bb61a0066e49063e3823002bd3ec421da138] ...
	I0116 03:49:01.307586  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33ba3a03d878abc86a194defd715bb61a0066e49063e3823002bd3ec421da138"
	I0116 03:49:01.356517  507889 logs.go:123] Gathering logs for storage-provisioner [a4b27881ef90cea325e8f306fac88a76052eec15a693c291288664b8f4ebcc2a] ...
	I0116 03:49:01.356563  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a4b27881ef90cea325e8f306fac88a76052eec15a693c291288664b8f4ebcc2a"
	I0116 03:49:01.409790  507889 logs.go:123] Gathering logs for kube-apiserver [f9861ff0fbab72660648bfcb49b82810bada99f73a55cb0aa359ba114825b8f3] ...
	I0116 03:49:01.409846  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f9861ff0fbab72660648bfcb49b82810bada99f73a55cb0aa359ba114825b8f3"
	I0116 03:49:01.462029  507889 logs.go:123] Gathering logs for CRI-O ...
	I0116 03:49:01.462077  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0116 03:49:00.942100  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:49:02.942316  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:49:01.838933  507889 logs.go:123] Gathering logs for coredns [a07ae23e6e9e3c589264d0be7095896549a4b71fa2d0b06805a3b1b3bea16f6a] ...
	I0116 03:49:01.838999  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a07ae23e6e9e3c589264d0be7095896549a4b71fa2d0b06805a3b1b3bea16f6a"
	I0116 03:49:01.884022  507889 logs.go:123] Gathering logs for kube-proxy [44f71a7069827d97c7326cbd36638ec36ff39cbbe0a1e4e61e7c010a38d2e123] ...
	I0116 03:49:01.884075  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44f71a7069827d97c7326cbd36638ec36ff39cbbe0a1e4e61e7c010a38d2e123"
	I0116 03:49:01.930032  507889 logs.go:123] Gathering logs for container status ...
	I0116 03:49:01.930090  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0116 03:49:01.998827  507889 logs.go:123] Gathering logs for kubelet ...
	I0116 03:49:01.998863  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0116 03:49:04.573529  507889 system_pods.go:59] 8 kube-system pods found
	I0116 03:49:04.573571  507889 system_pods.go:61] "coredns-5dd5756b68-pmx8n" [9d3c9941-8938-4d4a-b6d8-9b542ca6f1ca] Running
	I0116 03:49:04.573579  507889 system_pods.go:61] "etcd-default-k8s-diff-port-434445" [a2327159-d034-4f62-a8f5-14062a41c75d] Running
	I0116 03:49:04.573587  507889 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-434445" [75c5fc5e-86b1-42cf-bb5c-8a1f8b4e13db] Running
	I0116 03:49:04.573594  507889 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-434445" [2568dd4f-124d-4c4f-8651-3b89b2b54983] Running
	I0116 03:49:04.573600  507889 system_pods.go:61] "kube-proxy-dcbqg" [eba1f9bf-6aa7-40cd-b57c-745c2d0cc414] Running
	I0116 03:49:04.573607  507889 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-434445" [479952a1-8c3d-49b4-bd22-fef988e6b83d] Running
	I0116 03:49:04.573617  507889 system_pods.go:61] "metrics-server-57f55c9bc5-894n2" [46e4892a-d026-4a9d-88bc-128e92848808] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:49:04.573626  507889 system_pods.go:61] "storage-provisioner" [16fd4585-3d75-40c3-a28d-4134375f4e3d] Running
	I0116 03:49:04.573638  507889 system_pods.go:74] duration metric: took 4.001961367s to wait for pod list to return data ...
	I0116 03:49:04.573657  507889 default_sa.go:34] waiting for default service account to be created ...
	I0116 03:49:04.577012  507889 default_sa.go:45] found service account: "default"
	I0116 03:49:04.577041  507889 default_sa.go:55] duration metric: took 3.376395ms for default service account to be created ...
	I0116 03:49:04.577051  507889 system_pods.go:116] waiting for k8s-apps to be running ...
	I0116 03:49:04.583833  507889 system_pods.go:86] 8 kube-system pods found
	I0116 03:49:04.583880  507889 system_pods.go:89] "coredns-5dd5756b68-pmx8n" [9d3c9941-8938-4d4a-b6d8-9b542ca6f1ca] Running
	I0116 03:49:04.583890  507889 system_pods.go:89] "etcd-default-k8s-diff-port-434445" [a2327159-d034-4f62-a8f5-14062a41c75d] Running
	I0116 03:49:04.583898  507889 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-434445" [75c5fc5e-86b1-42cf-bb5c-8a1f8b4e13db] Running
	I0116 03:49:04.583905  507889 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-434445" [2568dd4f-124d-4c4f-8651-3b89b2b54983] Running
	I0116 03:49:04.583911  507889 system_pods.go:89] "kube-proxy-dcbqg" [eba1f9bf-6aa7-40cd-b57c-745c2d0cc414] Running
	I0116 03:49:04.583918  507889 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-434445" [479952a1-8c3d-49b4-bd22-fef988e6b83d] Running
	I0116 03:49:04.583928  507889 system_pods.go:89] "metrics-server-57f55c9bc5-894n2" [46e4892a-d026-4a9d-88bc-128e92848808] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:49:04.583936  507889 system_pods.go:89] "storage-provisioner" [16fd4585-3d75-40c3-a28d-4134375f4e3d] Running
	I0116 03:49:04.583950  507889 system_pods.go:126] duration metric: took 6.89136ms to wait for k8s-apps to be running ...
	I0116 03:49:04.583964  507889 system_svc.go:44] waiting for kubelet service to be running ....
	I0116 03:49:04.584016  507889 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 03:49:04.600209  507889 system_svc.go:56] duration metric: took 16.229333ms WaitForService to wait for kubelet.
	I0116 03:49:04.600252  507889 kubeadm.go:581] duration metric: took 4m21.589410808s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0116 03:49:04.600285  507889 node_conditions.go:102] verifying NodePressure condition ...
	I0116 03:49:04.603774  507889 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 03:49:04.603803  507889 node_conditions.go:123] node cpu capacity is 2
	I0116 03:49:04.603815  507889 node_conditions.go:105] duration metric: took 3.52526ms to run NodePressure ...
	I0116 03:49:04.603829  507889 start.go:228] waiting for startup goroutines ...
	I0116 03:49:04.603836  507889 start.go:233] waiting for cluster config update ...
	I0116 03:49:04.603849  507889 start.go:242] writing updated cluster config ...
	I0116 03:49:04.604185  507889 ssh_runner.go:195] Run: rm -f paused
	I0116 03:49:04.658922  507889 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I0116 03:49:04.661265  507889 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-434445" cluster and "default" namespace by default
	I0116 03:49:01.367935  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:49:03.867391  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:49:05.867519  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:49:05.440602  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:49:07.441041  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:49:08.434235  507510 pod_ready.go:81] duration metric: took 4m0.001038173s waiting for pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace to be "Ready" ...
	E0116 03:49:08.434278  507510 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace to be "Ready" (will not retry!)
	I0116 03:49:08.434304  507510 pod_ready.go:38] duration metric: took 4m1.20014772s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 03:49:08.434338  507510 kubeadm.go:640] restartCluster took 5m11.767236835s
	W0116 03:49:08.434423  507510 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0116 03:49:08.434463  507510 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0116 03:49:07.868307  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:49:10.367347  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:49:15.339252  507510 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (6.904753674s)
	I0116 03:49:15.339341  507510 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 03:49:15.355684  507510 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0116 03:49:15.371377  507510 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0116 03:49:15.393609  507510 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0116 03:49:15.393674  507510 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0116 03:49:15.478382  507510 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0116 03:49:15.478464  507510 kubeadm.go:322] [preflight] Running pre-flight checks
	I0116 03:49:15.663487  507510 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0116 03:49:15.663663  507510 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0116 03:49:15.663803  507510 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0116 03:49:15.940677  507510 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0116 03:49:15.940857  507510 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0116 03:49:15.949553  507510 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0116 03:49:16.075111  507510 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0116 03:49:12.867512  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:49:13.859320  507257 pod_ready.go:81] duration metric: took 4m0.000451049s waiting for pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace to be "Ready" ...
	E0116 03:49:13.859353  507257 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace to be "Ready" (will not retry!)
	I0116 03:49:13.859375  507257 pod_ready.go:38] duration metric: took 4m12.063407854s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 03:49:13.859418  507257 kubeadm.go:640] restartCluster took 4m32.047022773s
	W0116 03:49:13.859484  507257 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0116 03:49:13.859513  507257 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0116 03:49:16.077099  507510 out.go:204]   - Generating certificates and keys ...
	I0116 03:49:16.077224  507510 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0116 03:49:16.077305  507510 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0116 03:49:16.077410  507510 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0116 03:49:16.077504  507510 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0116 03:49:16.077617  507510 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0116 03:49:16.077745  507510 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0116 03:49:16.078085  507510 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0116 03:49:16.078639  507510 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0116 03:49:16.079112  507510 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0116 03:49:16.079719  507510 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0116 03:49:16.079935  507510 kubeadm.go:322] [certs] Using the existing "sa" key
	I0116 03:49:16.080015  507510 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0116 03:49:16.246902  507510 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0116 03:49:16.332722  507510 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0116 03:49:16.534277  507510 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0116 03:49:16.908642  507510 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0116 03:49:16.909711  507510 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0116 03:49:16.911960  507510 out.go:204]   - Booting up control plane ...
	I0116 03:49:16.912103  507510 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0116 03:49:16.923200  507510 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0116 03:49:16.924797  507510 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0116 03:49:16.926738  507510 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0116 03:49:16.937544  507510 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0116 03:49:27.943253  507510 kubeadm.go:322] [apiclient] All control plane components are healthy after 11.005405 seconds
	I0116 03:49:27.943474  507510 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0116 03:49:27.970644  507510 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I0116 03:49:28.500660  507510 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0116 03:49:28.500847  507510 kubeadm.go:322] [mark-control-plane] Marking the node old-k8s-version-696770 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0116 03:49:29.015036  507510 kubeadm.go:322] [bootstrap-token] Using token: nr2yh0.22ni19zxk2s7hw9l
	I0116 03:49:28.504409  507257 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (14.644866985s)
	I0116 03:49:28.504498  507257 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 03:49:28.519788  507257 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0116 03:49:28.531667  507257 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0116 03:49:28.543058  507257 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0116 03:49:28.543113  507257 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0116 03:49:28.603369  507257 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0116 03:49:28.603521  507257 kubeadm.go:322] [preflight] Running pre-flight checks
	I0116 03:49:28.784258  507257 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0116 03:49:28.784384  507257 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0116 03:49:28.784491  507257 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0116 03:49:29.068390  507257 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0116 03:49:29.017077  507510 out.go:204]   - Configuring RBAC rules ...
	I0116 03:49:29.017276  507510 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0116 03:49:29.044200  507510 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0116 03:49:29.049807  507510 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0116 03:49:29.054441  507510 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0116 03:49:29.057939  507510 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0116 03:49:29.142810  507510 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0116 03:49:29.439580  507510 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0116 03:49:29.441665  507510 kubeadm.go:322] 
	I0116 03:49:29.441736  507510 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0116 03:49:29.441741  507510 kubeadm.go:322] 
	I0116 03:49:29.441863  507510 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0116 03:49:29.441898  507510 kubeadm.go:322] 
	I0116 03:49:29.441932  507510 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0116 03:49:29.441999  507510 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0116 03:49:29.442057  507510 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0116 03:49:29.442099  507510 kubeadm.go:322] 
	I0116 03:49:29.442200  507510 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0116 03:49:29.442306  507510 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0116 03:49:29.442414  507510 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0116 03:49:29.442429  507510 kubeadm.go:322] 
	I0116 03:49:29.442566  507510 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities 
	I0116 03:49:29.442689  507510 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0116 03:49:29.442701  507510 kubeadm.go:322] 
	I0116 03:49:29.442813  507510 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token nr2yh0.22ni19zxk2s7hw9l \
	I0116 03:49:29.442967  507510 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:ab75761e1119a1709a216b682cd7b1dcce0641115df5f236b54b16e4f66aa044 \
	I0116 03:49:29.443008  507510 kubeadm.go:322]     --control-plane 	  
	I0116 03:49:29.443024  507510 kubeadm.go:322] 
	I0116 03:49:29.443147  507510 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0116 03:49:29.443159  507510 kubeadm.go:322] 
	I0116 03:49:29.443285  507510 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token nr2yh0.22ni19zxk2s7hw9l \
	I0116 03:49:29.443414  507510 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:ab75761e1119a1709a216b682cd7b1dcce0641115df5f236b54b16e4f66aa044 
	I0116 03:49:29.444142  507510 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0116 03:49:29.444278  507510 cni.go:84] Creating CNI manager for ""
	I0116 03:49:29.444302  507510 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 03:49:29.446569  507510 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0116 03:49:29.447957  507510 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0116 03:49:29.457418  507510 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0116 03:49:29.478015  507510 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0116 03:49:29.478130  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:29.478135  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=6e8fa5f64d0e7272be43ff25ed3826261f0a2578 minikube.k8s.io/name=old-k8s-version-696770 minikube.k8s.io/updated_at=2024_01_16T03_49_29_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:29.070681  507257 out.go:204]   - Generating certificates and keys ...
	I0116 03:49:29.070805  507257 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0116 03:49:29.070882  507257 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0116 03:49:29.071007  507257 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0116 03:49:29.071108  507257 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0116 03:49:29.071243  507257 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0116 03:49:29.071320  507257 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0116 03:49:29.071422  507257 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0116 03:49:29.071497  507257 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0116 03:49:29.071928  507257 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0116 03:49:29.074454  507257 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0116 03:49:29.076202  507257 kubeadm.go:322] [certs] Using the existing "sa" key
	I0116 03:49:29.076435  507257 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0116 03:49:29.360527  507257 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0116 03:49:29.779361  507257 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0116 03:49:29.976749  507257 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0116 03:49:30.075605  507257 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0116 03:49:30.076375  507257 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0116 03:49:30.079235  507257 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0116 03:49:30.081497  507257 out.go:204]   - Booting up control plane ...
	I0116 03:49:30.081645  507257 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0116 03:49:30.082340  507257 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0116 03:49:30.083349  507257 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0116 03:49:30.103660  507257 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0116 03:49:30.104863  507257 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0116 03:49:30.104924  507257 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0116 03:49:30.229980  507257 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0116 03:49:29.724417  507510 ops.go:34] apiserver oom_adj: -16
	I0116 03:49:29.724549  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:30.224988  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:30.725451  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:31.225287  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:31.724689  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:32.224984  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:32.724769  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:33.225547  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:33.724874  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:34.225301  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:34.725134  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:35.224977  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:35.724998  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:36.225495  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:36.725043  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:37.224700  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:37.725397  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:38.225311  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:38.725308  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:39.224885  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:38.732431  507257 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.502537 seconds
	I0116 03:49:38.732591  507257 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0116 03:49:38.766319  507257 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0116 03:49:39.312926  507257 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0116 03:49:39.313225  507257 kubeadm.go:322] [mark-control-plane] Marking the node embed-certs-615980 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0116 03:49:39.836927  507257 kubeadm.go:322] [bootstrap-token] Using token: 8bzdm1.4lwyoxck7xjn6vqr
	I0116 03:49:39.838931  507257 out.go:204]   - Configuring RBAC rules ...
	I0116 03:49:39.839093  507257 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0116 03:49:39.850909  507257 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0116 03:49:39.873417  507257 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0116 03:49:39.879093  507257 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0116 03:49:39.883914  507257 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0116 03:49:39.889130  507257 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0116 03:49:39.910444  507257 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0116 03:49:40.235572  507257 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0116 03:49:40.334951  507257 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0116 03:49:40.335000  507257 kubeadm.go:322] 
	I0116 03:49:40.335092  507257 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0116 03:49:40.335103  507257 kubeadm.go:322] 
	I0116 03:49:40.335212  507257 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0116 03:49:40.335222  507257 kubeadm.go:322] 
	I0116 03:49:40.335266  507257 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0116 03:49:40.335353  507257 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0116 03:49:40.335421  507257 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0116 03:49:40.335430  507257 kubeadm.go:322] 
	I0116 03:49:40.335504  507257 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0116 03:49:40.335513  507257 kubeadm.go:322] 
	I0116 03:49:40.335598  507257 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0116 03:49:40.335618  507257 kubeadm.go:322] 
	I0116 03:49:40.335690  507257 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0116 03:49:40.335793  507257 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0116 03:49:40.335891  507257 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0116 03:49:40.335904  507257 kubeadm.go:322] 
	I0116 03:49:40.336008  507257 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0116 03:49:40.336128  507257 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0116 03:49:40.336143  507257 kubeadm.go:322] 
	I0116 03:49:40.336262  507257 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 8bzdm1.4lwyoxck7xjn6vqr \
	I0116 03:49:40.336427  507257 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:ab75761e1119a1709a216b682cd7b1dcce0641115df5f236b54b16e4f66aa044 \
	I0116 03:49:40.336456  507257 kubeadm.go:322] 	--control-plane 
	I0116 03:49:40.336463  507257 kubeadm.go:322] 
	I0116 03:49:40.336594  507257 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0116 03:49:40.336611  507257 kubeadm.go:322] 
	I0116 03:49:40.336744  507257 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 8bzdm1.4lwyoxck7xjn6vqr \
	I0116 03:49:40.336876  507257 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:ab75761e1119a1709a216b682cd7b1dcce0641115df5f236b54b16e4f66aa044 
	I0116 03:49:40.337377  507257 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0116 03:49:40.337421  507257 cni.go:84] Creating CNI manager for ""
	I0116 03:49:40.337432  507257 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 03:49:40.340415  507257 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0116 03:49:40.341952  507257 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0116 03:49:40.376620  507257 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0116 03:49:40.459091  507257 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0116 03:49:40.459177  507257 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:40.459233  507257 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=6e8fa5f64d0e7272be43ff25ed3826261f0a2578 minikube.k8s.io/name=embed-certs-615980 minikube.k8s.io/updated_at=2024_01_16T03_49_40_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:40.524693  507257 ops.go:34] apiserver oom_adj: -16
	I0116 03:49:40.917890  507257 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:39.725272  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:40.225380  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:40.725272  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:41.225258  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:41.725525  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:42.225270  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:42.725463  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:43.224674  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:43.724904  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:44.224946  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:44.725197  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:44.843354  507510 kubeadm.go:1088] duration metric: took 15.365308355s to wait for elevateKubeSystemPrivileges.
	I0116 03:49:44.843465  507510 kubeadm.go:406] StartCluster complete in 5m48.250275121s
	I0116 03:49:44.843545  507510 settings.go:142] acquiring lock: {Name:mk3ded99c06d285b174e76622bb0741c8dd2d8fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:49:44.843708  507510 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17965-468241/kubeconfig
	I0116 03:49:44.846444  507510 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17965-468241/kubeconfig: {Name:mkac3c63bbcba9b4fa1bea3480797757d703fb9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:49:44.846814  507510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0116 03:49:44.846959  507510 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0116 03:49:44.847043  507510 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-696770"
	I0116 03:49:44.847067  507510 addons.go:234] Setting addon storage-provisioner=true in "old-k8s-version-696770"
	I0116 03:49:44.847065  507510 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-696770"
	W0116 03:49:44.847076  507510 addons.go:243] addon storage-provisioner should already be in state true
	I0116 03:49:44.847079  507510 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-696770"
	I0116 03:49:44.847099  507510 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-696770"
	I0116 03:49:44.847108  507510 addons.go:234] Setting addon metrics-server=true in "old-k8s-version-696770"
	W0116 03:49:44.847130  507510 addons.go:243] addon metrics-server should already be in state true
	I0116 03:49:44.847152  507510 host.go:66] Checking if "old-k8s-version-696770" exists ...
	I0116 03:49:44.847087  507510 config.go:182] Loaded profile config "old-k8s-version-696770": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0116 03:49:44.847178  507510 host.go:66] Checking if "old-k8s-version-696770" exists ...
	I0116 03:49:44.847548  507510 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:49:44.847568  507510 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:49:44.847579  507510 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:49:44.847594  507510 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:49:44.847605  507510 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:49:44.847632  507510 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:49:44.865585  507510 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36045
	I0116 03:49:44.865597  507510 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45289
	I0116 03:49:44.865592  507510 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39579
	I0116 03:49:44.866119  507510 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:49:44.866200  507510 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:49:44.866352  507510 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:49:44.867018  507510 main.go:141] libmachine: Using API Version  1
	I0116 03:49:44.867040  507510 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:49:44.867043  507510 main.go:141] libmachine: Using API Version  1
	I0116 03:49:44.867051  507510 main.go:141] libmachine: Using API Version  1
	I0116 03:49:44.867071  507510 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:49:44.867091  507510 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:49:44.867481  507510 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:49:44.867557  507510 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:49:44.867711  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetState
	I0116 03:49:44.867929  507510 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:49:44.868184  507510 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:49:44.868215  507510 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:49:44.868486  507510 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:49:44.868519  507510 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:49:44.872747  507510 addons.go:234] Setting addon default-storageclass=true in "old-k8s-version-696770"
	W0116 03:49:44.872781  507510 addons.go:243] addon default-storageclass should already be in state true
	I0116 03:49:44.872816  507510 host.go:66] Checking if "old-k8s-version-696770" exists ...
	I0116 03:49:44.873264  507510 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:49:44.873308  507510 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:49:44.888049  507510 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45943
	I0116 03:49:44.890481  507510 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37259
	I0116 03:49:44.890990  507510 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:49:44.891285  507510 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:49:44.891567  507510 main.go:141] libmachine: Using API Version  1
	I0116 03:49:44.891582  507510 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:49:44.891846  507510 main.go:141] libmachine: Using API Version  1
	I0116 03:49:44.891865  507510 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:49:44.892307  507510 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:49:44.892510  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetState
	I0116 03:49:44.892575  507510 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:49:44.892760  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetState
	I0116 03:49:44.894812  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .DriverName
	I0116 03:49:44.895060  507510 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37701
	I0116 03:49:44.896571  507510 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0116 03:49:44.895272  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .DriverName
	I0116 03:49:44.895678  507510 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:49:44.898051  507510 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0116 03:49:44.898074  507510 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0116 03:49:44.899552  507510 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 03:49:44.897299  507510 main.go:141] libmachine: Using API Version  1
	I0116 03:49:44.898096  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHHostname
	I0116 03:49:44.901091  507510 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:49:44.901216  507510 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0116 03:49:44.901234  507510 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0116 03:49:44.901256  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHHostname
	I0116 03:49:44.902226  507510 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:49:44.902866  507510 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:49:44.902908  507510 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:49:44.905915  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:49:44.906022  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:49:44.906456  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:20:1a", ip: ""} in network mk-old-k8s-version-696770: {Iface:virbr3 ExpiryTime:2024-01-16 04:43:38 +0000 UTC Type:0 Mac:52:54:00:37:20:1a Iaid: IPaddr:192.168.61.167 Prefix:24 Hostname:old-k8s-version-696770 Clientid:01:52:54:00:37:20:1a}
	I0116 03:49:44.906482  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined IP address 192.168.61.167 and MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:49:44.906775  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHPort
	I0116 03:49:44.906851  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:20:1a", ip: ""} in network mk-old-k8s-version-696770: {Iface:virbr3 ExpiryTime:2024-01-16 04:43:38 +0000 UTC Type:0 Mac:52:54:00:37:20:1a Iaid: IPaddr:192.168.61.167 Prefix:24 Hostname:old-k8s-version-696770 Clientid:01:52:54:00:37:20:1a}
	I0116 03:49:44.906892  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHPort
	I0116 03:49:44.906941  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined IP address 192.168.61.167 and MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:49:44.907116  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHKeyPath
	I0116 03:49:44.907254  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHKeyPath
	I0116 03:49:44.907324  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHUsername
	I0116 03:49:44.907416  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHUsername
	I0116 03:49:44.907471  507510 sshutil.go:53] new ssh client: &{IP:192.168.61.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/old-k8s-version-696770/id_rsa Username:docker}
	I0116 03:49:44.908078  507510 sshutil.go:53] new ssh client: &{IP:192.168.61.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/old-k8s-version-696770/id_rsa Username:docker}
	I0116 03:49:44.925689  507510 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38987
	I0116 03:49:44.926190  507510 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:49:44.926847  507510 main.go:141] libmachine: Using API Version  1
	I0116 03:49:44.926870  507510 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:49:44.927322  507510 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:49:44.927545  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetState
	I0116 03:49:44.929553  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .DriverName
	I0116 03:49:44.930008  507510 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0116 03:49:44.930027  507510 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0116 03:49:44.930049  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHHostname
	I0116 03:49:44.933353  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:49:44.933768  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:20:1a", ip: ""} in network mk-old-k8s-version-696770: {Iface:virbr3 ExpiryTime:2024-01-16 04:43:38 +0000 UTC Type:0 Mac:52:54:00:37:20:1a Iaid: IPaddr:192.168.61.167 Prefix:24 Hostname:old-k8s-version-696770 Clientid:01:52:54:00:37:20:1a}
	I0116 03:49:44.933799  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined IP address 192.168.61.167 and MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:49:44.933975  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHPort
	I0116 03:49:44.934184  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHKeyPath
	I0116 03:49:44.934277  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHUsername
	I0116 03:49:44.934374  507510 sshutil.go:53] new ssh client: &{IP:192.168.61.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/old-k8s-version-696770/id_rsa Username:docker}
	I0116 03:49:45.044743  507510 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0116 03:49:45.073179  507510 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0116 03:49:45.073426  507510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0116 03:49:45.095360  507510 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0116 03:49:45.095383  507510 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0116 03:49:45.162632  507510 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0116 03:49:45.162661  507510 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0116 03:49:45.252628  507510 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0116 03:49:45.252665  507510 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0116 03:49:45.325535  507510 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0116 03:49:45.533499  507510 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-696770" context rescaled to 1 replicas
	I0116 03:49:45.533553  507510 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.167 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0116 03:49:45.536655  507510 out.go:177] * Verifying Kubernetes components...
	I0116 03:49:41.418664  507257 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:41.918459  507257 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:42.418296  507257 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:42.918119  507257 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:43.418565  507257 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:43.918746  507257 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:44.418812  507257 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:44.918603  507257 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:45.418865  507257 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:45.918104  507257 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:45.538565  507510 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 03:49:46.390448  507510 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.3456663s)
	I0116 03:49:46.390513  507510 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.31729292s)
	I0116 03:49:46.390536  507510 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.317072847s)
	I0116 03:49:46.390556  507510 main.go:141] libmachine: Making call to close driver server
	I0116 03:49:46.390520  507510 main.go:141] libmachine: Making call to close driver server
	I0116 03:49:46.390573  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .Close
	I0116 03:49:46.390595  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .Close
	I0116 03:49:46.390559  507510 start.go:929] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I0116 03:49:46.391000  507510 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:49:46.391023  507510 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:49:46.391035  507510 main.go:141] libmachine: Making call to close driver server
	I0116 03:49:46.391040  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | Closing plugin on server side
	I0116 03:49:46.391006  507510 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:49:46.391059  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | Closing plugin on server side
	I0116 03:49:46.391062  507510 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:49:46.391044  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .Close
	I0116 03:49:46.391075  507510 main.go:141] libmachine: Making call to close driver server
	I0116 03:49:46.391083  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .Close
	I0116 03:49:46.391314  507510 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:49:46.391332  507510 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:49:46.391594  507510 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:49:46.391625  507510 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:49:46.465666  507510 main.go:141] libmachine: Making call to close driver server
	I0116 03:49:46.465688  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .Close
	I0116 03:49:46.466107  507510 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:49:46.466127  507510 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:49:46.597926  507510 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.05930194s)
	I0116 03:49:46.597988  507510 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-696770" to be "Ready" ...
	I0116 03:49:46.597925  507510 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.272324444s)
	I0116 03:49:46.598099  507510 main.go:141] libmachine: Making call to close driver server
	I0116 03:49:46.598123  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .Close
	I0116 03:49:46.598503  507510 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:49:46.598527  507510 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:49:46.598531  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | Closing plugin on server side
	I0116 03:49:46.598539  507510 main.go:141] libmachine: Making call to close driver server
	I0116 03:49:46.598549  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .Close
	I0116 03:49:46.598884  507510 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:49:46.598903  507510 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:49:46.598917  507510 addons.go:470] Verifying addon metrics-server=true in "old-k8s-version-696770"
	I0116 03:49:46.600845  507510 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0116 03:49:46.602484  507510 addons.go:505] enable addons completed in 1.755527621s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0116 03:49:46.612929  507510 node_ready.go:49] node "old-k8s-version-696770" has status "Ready":"True"
	I0116 03:49:46.612962  507510 node_ready.go:38] duration metric: took 14.959317ms waiting for node "old-k8s-version-696770" to be "Ready" ...
	I0116 03:49:46.612975  507510 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 03:49:46.616466  507510 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-h85tj" in "kube-system" namespace to be "Ready" ...
	I0116 03:49:48.628130  507510 pod_ready.go:102] pod "coredns-5644d7b6d9-h85tj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:49:46.418268  507257 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:46.917976  507257 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:47.418645  507257 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:47.917927  507257 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:48.417920  507257 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:48.917939  507257 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:49.418387  507257 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:49.918203  507257 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:50.417930  507257 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:50.918518  507257 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:51.418036  507257 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:51.917981  507257 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:52.418293  507257 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:52.635961  507257 kubeadm.go:1088] duration metric: took 12.176857981s to wait for elevateKubeSystemPrivileges.
	I0116 03:49:52.636014  507257 kubeadm.go:406] StartCluster complete in 5m10.892359223s
	I0116 03:49:52.636054  507257 settings.go:142] acquiring lock: {Name:mk3ded99c06d285b174e76622bb0741c8dd2d8fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:49:52.636186  507257 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17965-468241/kubeconfig
	I0116 03:49:52.638885  507257 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17965-468241/kubeconfig: {Name:mkac3c63bbcba9b4fa1bea3480797757d703fb9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:49:52.639229  507257 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0116 03:49:52.639345  507257 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0116 03:49:52.639439  507257 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-615980"
	I0116 03:49:52.639461  507257 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-615980"
	I0116 03:49:52.639458  507257 addons.go:69] Setting default-storageclass=true in profile "embed-certs-615980"
	W0116 03:49:52.639469  507257 addons.go:243] addon storage-provisioner should already be in state true
	I0116 03:49:52.639482  507257 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-615980"
	I0116 03:49:52.639504  507257 config.go:182] Loaded profile config "embed-certs-615980": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 03:49:52.639541  507257 host.go:66] Checking if "embed-certs-615980" exists ...
	I0116 03:49:52.639562  507257 addons.go:69] Setting metrics-server=true in profile "embed-certs-615980"
	I0116 03:49:52.639579  507257 addons.go:234] Setting addon metrics-server=true in "embed-certs-615980"
	W0116 03:49:52.639591  507257 addons.go:243] addon metrics-server should already be in state true
	I0116 03:49:52.639639  507257 host.go:66] Checking if "embed-certs-615980" exists ...
	I0116 03:49:52.639965  507257 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:49:52.639984  507257 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:49:52.640007  507257 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:49:52.640023  507257 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:49:52.640084  507257 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:49:52.640118  507257 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:49:52.660468  507257 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36595
	I0116 03:49:52.660653  507257 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44367
	I0116 03:49:52.661058  507257 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:49:52.661184  507257 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:49:52.661685  507257 main.go:141] libmachine: Using API Version  1
	I0116 03:49:52.661709  507257 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:49:52.661768  507257 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40717
	I0116 03:49:52.661855  507257 main.go:141] libmachine: Using API Version  1
	I0116 03:49:52.661871  507257 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:49:52.662141  507257 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:49:52.662207  507257 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:49:52.662425  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetState
	I0116 03:49:52.662480  507257 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:49:52.662858  507257 main.go:141] libmachine: Using API Version  1
	I0116 03:49:52.662875  507257 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:49:52.663301  507257 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:49:52.663337  507257 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:49:52.663413  507257 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:49:52.663956  507257 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:49:52.663985  507257 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:49:52.666163  507257 addons.go:234] Setting addon default-storageclass=true in "embed-certs-615980"
	W0116 03:49:52.666190  507257 addons.go:243] addon default-storageclass should already be in state true
	I0116 03:49:52.666224  507257 host.go:66] Checking if "embed-certs-615980" exists ...
	I0116 03:49:52.666630  507257 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:49:52.666672  507257 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:49:52.682228  507257 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37799
	I0116 03:49:52.682743  507257 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:49:52.683402  507257 main.go:141] libmachine: Using API Version  1
	I0116 03:49:52.683425  507257 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:49:52.683719  507257 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36773
	I0116 03:49:52.683893  507257 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:49:52.684125  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetState
	I0116 03:49:52.684589  507257 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:49:52.685108  507257 main.go:141] libmachine: Using API Version  1
	I0116 03:49:52.685128  507257 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:49:52.685607  507257 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:49:52.685627  507257 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42767
	I0116 03:49:52.686073  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetState
	I0116 03:49:52.686329  507257 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:49:52.686781  507257 main.go:141] libmachine: Using API Version  1
	I0116 03:49:52.686804  507257 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:49:52.687167  507257 main.go:141] libmachine: (embed-certs-615980) Calling .DriverName
	I0116 03:49:52.687213  507257 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:49:52.689840  507257 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 03:49:52.687751  507257 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:49:52.689319  507257 main.go:141] libmachine: (embed-certs-615980) Calling .DriverName
	I0116 03:49:52.691584  507257 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0116 03:49:52.691595  507257 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0116 03:49:52.691610  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHHostname
	I0116 03:49:52.691655  507257 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:49:52.693170  507257 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0116 03:49:52.694465  507257 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0116 03:49:52.694478  507257 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0116 03:49:52.694495  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHHostname
	I0116 03:49:52.705398  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:49:52.705440  507257 main.go:141] libmachine: (embed-certs-615980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:a6:40", ip: ""} in network mk-embed-certs-615980: {Iface:virbr4 ExpiryTime:2024-01-16 04:44:24 +0000 UTC Type:0 Mac:52:54:00:d4:a6:40 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-615980 Clientid:01:52:54:00:d4:a6:40}
	I0116 03:49:52.705440  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:49:52.705469  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHPort
	I0116 03:49:52.705475  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined IP address 192.168.72.159 and MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:49:52.705501  507257 main.go:141] libmachine: (embed-certs-615980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:a6:40", ip: ""} in network mk-embed-certs-615980: {Iface:virbr4 ExpiryTime:2024-01-16 04:44:24 +0000 UTC Type:0 Mac:52:54:00:d4:a6:40 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-615980 Clientid:01:52:54:00:d4:a6:40}
	I0116 03:49:52.705516  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined IP address 192.168.72.159 and MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:49:52.705403  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHPort
	I0116 03:49:52.705751  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHKeyPath
	I0116 03:49:52.705813  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHKeyPath
	I0116 03:49:52.705956  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHUsername
	I0116 03:49:52.706078  507257 sshutil.go:53] new ssh client: &{IP:192.168.72.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/embed-certs-615980/id_rsa Username:docker}
	I0116 03:49:52.706839  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHUsername
	I0116 03:49:52.707045  507257 sshutil.go:53] new ssh client: &{IP:192.168.72.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/embed-certs-615980/id_rsa Username:docker}
	I0116 03:49:52.713247  507257 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33775
	I0116 03:49:52.714047  507257 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:49:52.714725  507257 main.go:141] libmachine: Using API Version  1
	I0116 03:49:52.714742  507257 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:49:52.715212  507257 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:49:52.715442  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetState
	I0116 03:49:52.717568  507257 main.go:141] libmachine: (embed-certs-615980) Calling .DriverName
	I0116 03:49:52.717813  507257 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0116 03:49:52.717824  507257 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0116 03:49:52.717839  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHHostname
	I0116 03:49:52.720720  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:49:52.721189  507257 main.go:141] libmachine: (embed-certs-615980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:a6:40", ip: ""} in network mk-embed-certs-615980: {Iface:virbr4 ExpiryTime:2024-01-16 04:44:24 +0000 UTC Type:0 Mac:52:54:00:d4:a6:40 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-615980 Clientid:01:52:54:00:d4:a6:40}
	I0116 03:49:52.721205  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined IP address 192.168.72.159 and MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:49:52.721414  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHPort
	I0116 03:49:52.721573  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHKeyPath
	I0116 03:49:52.721724  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHUsername
	I0116 03:49:52.721814  507257 sshutil.go:53] new ssh client: &{IP:192.168.72.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/embed-certs-615980/id_rsa Username:docker}
	I0116 03:49:52.899474  507257 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0116 03:49:52.971597  507257 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0116 03:49:52.971623  507257 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0116 03:49:52.971955  507257 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0116 03:49:53.029724  507257 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0116 03:49:53.051410  507257 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0116 03:49:53.051439  507257 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0116 03:49:53.121058  507257 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0116 03:49:53.121088  507257 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0116 03:49:53.179049  507257 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-615980" context rescaled to 1 replicas
	I0116 03:49:53.179098  507257 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.159 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0116 03:49:53.181191  507257 out.go:177] * Verifying Kubernetes components...
	I0116 03:49:50.633148  507510 pod_ready.go:92] pod "coredns-5644d7b6d9-h85tj" in "kube-system" namespace has status "Ready":"True"
	I0116 03:49:50.633179  507510 pod_ready.go:81] duration metric: took 4.016682348s waiting for pod "coredns-5644d7b6d9-h85tj" in "kube-system" namespace to be "Ready" ...
	I0116 03:49:50.633194  507510 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rc8xt" in "kube-system" namespace to be "Ready" ...
	I0116 03:49:50.648707  507510 pod_ready.go:92] pod "kube-proxy-rc8xt" in "kube-system" namespace has status "Ready":"True"
	I0116 03:49:50.648737  507510 pod_ready.go:81] duration metric: took 15.535257ms waiting for pod "kube-proxy-rc8xt" in "kube-system" namespace to be "Ready" ...
	I0116 03:49:50.648752  507510 pod_ready.go:38] duration metric: took 4.035762868s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 03:49:50.648770  507510 api_server.go:52] waiting for apiserver process to appear ...
	I0116 03:49:50.648842  507510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:49:50.665917  507510 api_server.go:72] duration metric: took 5.1323051s to wait for apiserver process to appear ...
	I0116 03:49:50.665954  507510 api_server.go:88] waiting for apiserver healthz status ...
	I0116 03:49:50.665982  507510 api_server.go:253] Checking apiserver healthz at https://192.168.61.167:8443/healthz ...
	I0116 03:49:50.672790  507510 api_server.go:279] https://192.168.61.167:8443/healthz returned 200:
	ok
	I0116 03:49:50.674024  507510 api_server.go:141] control plane version: v1.16.0
	I0116 03:49:50.674059  507510 api_server.go:131] duration metric: took 8.096153ms to wait for apiserver health ...
	I0116 03:49:50.674071  507510 system_pods.go:43] waiting for kube-system pods to appear ...
	I0116 03:49:50.677835  507510 system_pods.go:59] 4 kube-system pods found
	I0116 03:49:50.677871  507510 system_pods.go:61] "coredns-5644d7b6d9-h85tj" [6ad3270c-e6c5-4ced-800f-f6e7960097ac] Running
	I0116 03:49:50.677878  507510 system_pods.go:61] "kube-proxy-rc8xt" [433f07f2-79e8-48f6-945a-af3dc0060920] Running
	I0116 03:49:50.677887  507510 system_pods.go:61] "metrics-server-74d5856cc6-stvzf" [92ed6941-1071-4757-9279-144187442f64] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:49:50.677894  507510 system_pods.go:61] "storage-provisioner" [c45ac226-3063-4d53-8a3a-dccca6e8cade] Running
	I0116 03:49:50.677905  507510 system_pods.go:74] duration metric: took 3.826308ms to wait for pod list to return data ...
	I0116 03:49:50.677914  507510 default_sa.go:34] waiting for default service account to be created ...
	I0116 03:49:50.680932  507510 default_sa.go:45] found service account: "default"
	I0116 03:49:50.680964  507510 default_sa.go:55] duration metric: took 3.041693ms for default service account to be created ...
	I0116 03:49:50.680975  507510 system_pods.go:116] waiting for k8s-apps to be running ...
	I0116 03:49:50.684730  507510 system_pods.go:86] 4 kube-system pods found
	I0116 03:49:50.684759  507510 system_pods.go:89] "coredns-5644d7b6d9-h85tj" [6ad3270c-e6c5-4ced-800f-f6e7960097ac] Running
	I0116 03:49:50.684767  507510 system_pods.go:89] "kube-proxy-rc8xt" [433f07f2-79e8-48f6-945a-af3dc0060920] Running
	I0116 03:49:50.684778  507510 system_pods.go:89] "metrics-server-74d5856cc6-stvzf" [92ed6941-1071-4757-9279-144187442f64] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:49:50.684785  507510 system_pods.go:89] "storage-provisioner" [c45ac226-3063-4d53-8a3a-dccca6e8cade] Running
	I0116 03:49:50.684811  507510 retry.go:31] will retry after 238.551043ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0116 03:49:50.928725  507510 system_pods.go:86] 4 kube-system pods found
	I0116 03:49:50.928761  507510 system_pods.go:89] "coredns-5644d7b6d9-h85tj" [6ad3270c-e6c5-4ced-800f-f6e7960097ac] Running
	I0116 03:49:50.928768  507510 system_pods.go:89] "kube-proxy-rc8xt" [433f07f2-79e8-48f6-945a-af3dc0060920] Running
	I0116 03:49:50.928779  507510 system_pods.go:89] "metrics-server-74d5856cc6-stvzf" [92ed6941-1071-4757-9279-144187442f64] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:49:50.928786  507510 system_pods.go:89] "storage-provisioner" [c45ac226-3063-4d53-8a3a-dccca6e8cade] Running
	I0116 03:49:50.928816  507510 retry.go:31] will retry after 246.771125ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0116 03:49:51.180688  507510 system_pods.go:86] 4 kube-system pods found
	I0116 03:49:51.180727  507510 system_pods.go:89] "coredns-5644d7b6d9-h85tj" [6ad3270c-e6c5-4ced-800f-f6e7960097ac] Running
	I0116 03:49:51.180736  507510 system_pods.go:89] "kube-proxy-rc8xt" [433f07f2-79e8-48f6-945a-af3dc0060920] Running
	I0116 03:49:51.180747  507510 system_pods.go:89] "metrics-server-74d5856cc6-stvzf" [92ed6941-1071-4757-9279-144187442f64] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:49:51.180755  507510 system_pods.go:89] "storage-provisioner" [c45ac226-3063-4d53-8a3a-dccca6e8cade] Running
	I0116 03:49:51.180780  507510 retry.go:31] will retry after 439.966453ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0116 03:49:51.625927  507510 system_pods.go:86] 4 kube-system pods found
	I0116 03:49:51.625958  507510 system_pods.go:89] "coredns-5644d7b6d9-h85tj" [6ad3270c-e6c5-4ced-800f-f6e7960097ac] Running
	I0116 03:49:51.625964  507510 system_pods.go:89] "kube-proxy-rc8xt" [433f07f2-79e8-48f6-945a-af3dc0060920] Running
	I0116 03:49:51.625970  507510 system_pods.go:89] "metrics-server-74d5856cc6-stvzf" [92ed6941-1071-4757-9279-144187442f64] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:49:51.625975  507510 system_pods.go:89] "storage-provisioner" [c45ac226-3063-4d53-8a3a-dccca6e8cade] Running
	I0116 03:49:51.626001  507510 retry.go:31] will retry after 403.213781ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0116 03:49:52.035928  507510 system_pods.go:86] 4 kube-system pods found
	I0116 03:49:52.035994  507510 system_pods.go:89] "coredns-5644d7b6d9-h85tj" [6ad3270c-e6c5-4ced-800f-f6e7960097ac] Running
	I0116 03:49:52.036003  507510 system_pods.go:89] "kube-proxy-rc8xt" [433f07f2-79e8-48f6-945a-af3dc0060920] Running
	I0116 03:49:52.036014  507510 system_pods.go:89] "metrics-server-74d5856cc6-stvzf" [92ed6941-1071-4757-9279-144187442f64] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:49:52.036022  507510 system_pods.go:89] "storage-provisioner" [c45ac226-3063-4d53-8a3a-dccca6e8cade] Running
	I0116 03:49:52.036064  507510 retry.go:31] will retry after 501.701933ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0116 03:49:52.543834  507510 system_pods.go:86] 4 kube-system pods found
	I0116 03:49:52.543874  507510 system_pods.go:89] "coredns-5644d7b6d9-h85tj" [6ad3270c-e6c5-4ced-800f-f6e7960097ac] Running
	I0116 03:49:52.543883  507510 system_pods.go:89] "kube-proxy-rc8xt" [433f07f2-79e8-48f6-945a-af3dc0060920] Running
	I0116 03:49:52.543894  507510 system_pods.go:89] "metrics-server-74d5856cc6-stvzf" [92ed6941-1071-4757-9279-144187442f64] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:49:52.543904  507510 system_pods.go:89] "storage-provisioner" [c45ac226-3063-4d53-8a3a-dccca6e8cade] Running
	I0116 03:49:52.543929  507510 retry.go:31] will retry after 898.357774ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0116 03:49:53.447323  507510 system_pods.go:86] 4 kube-system pods found
	I0116 03:49:53.447356  507510 system_pods.go:89] "coredns-5644d7b6d9-h85tj" [6ad3270c-e6c5-4ced-800f-f6e7960097ac] Running
	I0116 03:49:53.447364  507510 system_pods.go:89] "kube-proxy-rc8xt" [433f07f2-79e8-48f6-945a-af3dc0060920] Running
	I0116 03:49:53.447373  507510 system_pods.go:89] "metrics-server-74d5856cc6-stvzf" [92ed6941-1071-4757-9279-144187442f64] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:49:53.447382  507510 system_pods.go:89] "storage-provisioner" [c45ac226-3063-4d53-8a3a-dccca6e8cade] Running
	I0116 03:49:53.447405  507510 retry.go:31] will retry after 928.816907ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0116 03:49:54.382017  507510 system_pods.go:86] 4 kube-system pods found
	I0116 03:49:54.382046  507510 system_pods.go:89] "coredns-5644d7b6d9-h85tj" [6ad3270c-e6c5-4ced-800f-f6e7960097ac] Running
	I0116 03:49:54.382052  507510 system_pods.go:89] "kube-proxy-rc8xt" [433f07f2-79e8-48f6-945a-af3dc0060920] Running
	I0116 03:49:54.382058  507510 system_pods.go:89] "metrics-server-74d5856cc6-stvzf" [92ed6941-1071-4757-9279-144187442f64] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:49:54.382065  507510 system_pods.go:89] "storage-provisioner" [c45ac226-3063-4d53-8a3a-dccca6e8cade] Running
	I0116 03:49:54.382085  507510 retry.go:31] will retry after 935.220919ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0116 03:49:53.183129  507257 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 03:49:53.296441  507257 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0116 03:49:55.162183  507257 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.262649875s)
	I0116 03:49:55.162237  507257 start.go:929] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I0116 03:49:55.516930  507257 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.544937669s)
	I0116 03:49:55.516988  507257 main.go:141] libmachine: Making call to close driver server
	I0116 03:49:55.517002  507257 main.go:141] libmachine: (embed-certs-615980) Calling .Close
	I0116 03:49:55.517046  507257 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.487276988s)
	I0116 03:49:55.517101  507257 main.go:141] libmachine: Making call to close driver server
	I0116 03:49:55.517108  507257 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.333941337s)
	I0116 03:49:55.517114  507257 main.go:141] libmachine: (embed-certs-615980) Calling .Close
	I0116 03:49:55.517135  507257 node_ready.go:35] waiting up to 6m0s for node "embed-certs-615980" to be "Ready" ...
	I0116 03:49:55.517496  507257 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:49:55.517496  507257 main.go:141] libmachine: (embed-certs-615980) DBG | Closing plugin on server side
	I0116 03:49:55.517512  507257 main.go:141] libmachine: (embed-certs-615980) DBG | Closing plugin on server side
	I0116 03:49:55.517520  507257 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:49:55.517535  507257 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:49:55.517546  507257 main.go:141] libmachine: Making call to close driver server
	I0116 03:49:55.517548  507257 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:49:55.517559  507257 main.go:141] libmachine: (embed-certs-615980) Calling .Close
	I0116 03:49:55.517566  507257 main.go:141] libmachine: Making call to close driver server
	I0116 03:49:55.517577  507257 main.go:141] libmachine: (embed-certs-615980) Calling .Close
	I0116 03:49:55.517902  507257 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:49:55.517916  507257 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:49:55.517920  507257 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:49:55.517926  507257 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:49:55.537242  507257 node_ready.go:49] node "embed-certs-615980" has status "Ready":"True"
	I0116 03:49:55.537273  507257 node_ready.go:38] duration metric: took 20.119969ms waiting for node "embed-certs-615980" to be "Ready" ...
	I0116 03:49:55.537282  507257 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 03:49:55.567823  507257 main.go:141] libmachine: Making call to close driver server
	I0116 03:49:55.567859  507257 main.go:141] libmachine: (embed-certs-615980) Calling .Close
	I0116 03:49:55.568264  507257 main.go:141] libmachine: (embed-certs-615980) DBG | Closing plugin on server side
	I0116 03:49:55.568301  507257 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:49:55.568324  507257 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:49:55.571667  507257 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-hxsvz" in "kube-system" namespace to be "Ready" ...
	I0116 03:49:55.962821  507257 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.666330022s)
	I0116 03:49:55.962896  507257 main.go:141] libmachine: Making call to close driver server
	I0116 03:49:55.962915  507257 main.go:141] libmachine: (embed-certs-615980) Calling .Close
	I0116 03:49:55.963282  507257 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:49:55.963304  507257 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:49:55.963317  507257 main.go:141] libmachine: Making call to close driver server
	I0116 03:49:55.963328  507257 main.go:141] libmachine: (embed-certs-615980) Calling .Close
	I0116 03:49:55.964155  507257 main.go:141] libmachine: (embed-certs-615980) DBG | Closing plugin on server side
	I0116 03:49:55.964178  507257 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:49:55.964190  507257 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:49:55.964209  507257 addons.go:470] Verifying addon metrics-server=true in "embed-certs-615980"
	I0116 03:49:55.967489  507257 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0116 03:49:55.969099  507257 addons.go:505] enable addons completed in 3.329750862s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0116 03:49:57.085999  507257 pod_ready.go:92] pod "coredns-5dd5756b68-hxsvz" in "kube-system" namespace has status "Ready":"True"
	I0116 03:49:57.086034  507257 pod_ready.go:81] duration metric: took 1.514340062s waiting for pod "coredns-5dd5756b68-hxsvz" in "kube-system" namespace to be "Ready" ...
	I0116 03:49:57.086048  507257 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-twbhh" in "kube-system" namespace to be "Ready" ...
	I0116 03:49:57.110886  507257 pod_ready.go:92] pod "coredns-5dd5756b68-twbhh" in "kube-system" namespace has status "Ready":"True"
	I0116 03:49:57.110920  507257 pod_ready.go:81] duration metric: took 24.862165ms waiting for pod "coredns-5dd5756b68-twbhh" in "kube-system" namespace to be "Ready" ...
	I0116 03:49:57.110934  507257 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-615980" in "kube-system" namespace to be "Ready" ...
	I0116 03:49:57.122556  507257 pod_ready.go:92] pod "etcd-embed-certs-615980" in "kube-system" namespace has status "Ready":"True"
	I0116 03:49:57.122588  507257 pod_ready.go:81] duration metric: took 11.643561ms waiting for pod "etcd-embed-certs-615980" in "kube-system" namespace to be "Ready" ...
	I0116 03:49:57.122601  507257 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-615980" in "kube-system" namespace to be "Ready" ...
	I0116 03:49:57.134402  507257 pod_ready.go:92] pod "kube-apiserver-embed-certs-615980" in "kube-system" namespace has status "Ready":"True"
	I0116 03:49:57.134432  507257 pod_ready.go:81] duration metric: took 11.823016ms waiting for pod "kube-apiserver-embed-certs-615980" in "kube-system" namespace to be "Ready" ...
	I0116 03:49:57.134442  507257 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-615980" in "kube-system" namespace to be "Ready" ...
	I0116 03:49:57.152947  507257 pod_ready.go:92] pod "kube-controller-manager-embed-certs-615980" in "kube-system" namespace has status "Ready":"True"
	I0116 03:49:57.152984  507257 pod_ready.go:81] duration metric: took 18.533642ms waiting for pod "kube-controller-manager-embed-certs-615980" in "kube-system" namespace to be "Ready" ...
	I0116 03:49:57.153000  507257 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8rkb5" in "kube-system" namespace to be "Ready" ...
	I0116 03:49:57.921983  507257 pod_ready.go:92] pod "kube-proxy-8rkb5" in "kube-system" namespace has status "Ready":"True"
	I0116 03:49:57.922016  507257 pod_ready.go:81] duration metric: took 769.007434ms waiting for pod "kube-proxy-8rkb5" in "kube-system" namespace to be "Ready" ...
	I0116 03:49:57.922028  507257 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-615980" in "kube-system" namespace to be "Ready" ...
	I0116 03:49:58.322237  507257 pod_ready.go:92] pod "kube-scheduler-embed-certs-615980" in "kube-system" namespace has status "Ready":"True"
	I0116 03:49:58.322267  507257 pod_ready.go:81] duration metric: took 400.23243ms waiting for pod "kube-scheduler-embed-certs-615980" in "kube-system" namespace to be "Ready" ...
	I0116 03:49:58.322280  507257 pod_ready.go:38] duration metric: took 2.78498776s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 03:49:58.322295  507257 api_server.go:52] waiting for apiserver process to appear ...
	I0116 03:49:58.322357  507257 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:49:58.338527  507257 api_server.go:72] duration metric: took 5.159388866s to wait for apiserver process to appear ...
	I0116 03:49:58.338553  507257 api_server.go:88] waiting for apiserver healthz status ...
	I0116 03:49:58.338575  507257 api_server.go:253] Checking apiserver healthz at https://192.168.72.159:8443/healthz ...
	I0116 03:49:58.345758  507257 api_server.go:279] https://192.168.72.159:8443/healthz returned 200:
	ok
	I0116 03:49:58.347531  507257 api_server.go:141] control plane version: v1.28.4
	I0116 03:49:58.347559  507257 api_server.go:131] duration metric: took 8.999388ms to wait for apiserver health ...
	I0116 03:49:58.347573  507257 system_pods.go:43] waiting for kube-system pods to appear ...
	I0116 03:49:58.527633  507257 system_pods.go:59] 9 kube-system pods found
	I0116 03:49:58.527676  507257 system_pods.go:61] "coredns-5dd5756b68-hxsvz" [de7da02c-649b-4d29-8a89-5642105b6049] Running
	I0116 03:49:58.527685  507257 system_pods.go:61] "coredns-5dd5756b68-twbhh" [9be49c16-f213-47da-83f4-90fc392eb49e] Running
	I0116 03:49:58.527692  507257 system_pods.go:61] "etcd-embed-certs-615980" [2098148f-0cac-48ce-a607-381b13334438] Running
	I0116 03:49:58.527704  507257 system_pods.go:61] "kube-apiserver-embed-certs-615980" [3d49b47b-da34-4f4d-a8d3-758c0d28c034] Running
	I0116 03:49:58.527711  507257 system_pods.go:61] "kube-controller-manager-embed-certs-615980" [c4f7946d-907d-42ad-8e84-8fa337111688] Running
	I0116 03:49:58.527718  507257 system_pods.go:61] "kube-proxy-8rkb5" [322fae38-3b29-4135-ba3f-c0ff8bda1e4a] Running
	I0116 03:49:58.527725  507257 system_pods.go:61] "kube-scheduler-embed-certs-615980" [882f322f-8686-40a4-a613-e9855ccfb56e] Running
	I0116 03:49:58.527736  507257 system_pods.go:61] "metrics-server-57f55c9bc5-fc7tx" [14a38c13-7a9e-4548-9654-c568ede29e0f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:49:58.527748  507257 system_pods.go:61] "storage-provisioner" [1ce752ad-ce91-462e-ab2b-2af64064eb40] Running
	I0116 03:49:58.527757  507257 system_pods.go:74] duration metric: took 180.177482ms to wait for pod list to return data ...
	I0116 03:49:58.527771  507257 default_sa.go:34] waiting for default service account to be created ...
	I0116 03:49:58.721717  507257 default_sa.go:45] found service account: "default"
	I0116 03:49:58.721749  507257 default_sa.go:55] duration metric: took 193.967755ms for default service account to be created ...
	I0116 03:49:58.721758  507257 system_pods.go:116] waiting for k8s-apps to be running ...
	I0116 03:49:58.925915  507257 system_pods.go:86] 9 kube-system pods found
	I0116 03:49:58.925957  507257 system_pods.go:89] "coredns-5dd5756b68-hxsvz" [de7da02c-649b-4d29-8a89-5642105b6049] Running
	I0116 03:49:58.925964  507257 system_pods.go:89] "coredns-5dd5756b68-twbhh" [9be49c16-f213-47da-83f4-90fc392eb49e] Running
	I0116 03:49:58.925970  507257 system_pods.go:89] "etcd-embed-certs-615980" [2098148f-0cac-48ce-a607-381b13334438] Running
	I0116 03:49:58.925977  507257 system_pods.go:89] "kube-apiserver-embed-certs-615980" [3d49b47b-da34-4f4d-a8d3-758c0d28c034] Running
	I0116 03:49:58.925987  507257 system_pods.go:89] "kube-controller-manager-embed-certs-615980" [c4f7946d-907d-42ad-8e84-8fa337111688] Running
	I0116 03:49:58.925994  507257 system_pods.go:89] "kube-proxy-8rkb5" [322fae38-3b29-4135-ba3f-c0ff8bda1e4a] Running
	I0116 03:49:58.926040  507257 system_pods.go:89] "kube-scheduler-embed-certs-615980" [882f322f-8686-40a4-a613-e9855ccfb56e] Running
	I0116 03:49:58.926063  507257 system_pods.go:89] "metrics-server-57f55c9bc5-fc7tx" [14a38c13-7a9e-4548-9654-c568ede29e0f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:49:58.926070  507257 system_pods.go:89] "storage-provisioner" [1ce752ad-ce91-462e-ab2b-2af64064eb40] Running
	I0116 03:49:58.926087  507257 system_pods.go:126] duration metric: took 204.321811ms to wait for k8s-apps to be running ...
	I0116 03:49:58.926099  507257 system_svc.go:44] waiting for kubelet service to be running ....
	I0116 03:49:58.926159  507257 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 03:49:58.940982  507257 system_svc.go:56] duration metric: took 14.86844ms WaitForService to wait for kubelet.
	I0116 03:49:58.941019  507257 kubeadm.go:581] duration metric: took 5.761889406s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0116 03:49:58.941051  507257 node_conditions.go:102] verifying NodePressure condition ...
	I0116 03:49:59.121649  507257 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 03:49:59.121681  507257 node_conditions.go:123] node cpu capacity is 2
	I0116 03:49:59.121694  507257 node_conditions.go:105] duration metric: took 180.636851ms to run NodePressure ...
	I0116 03:49:59.121707  507257 start.go:228] waiting for startup goroutines ...
	I0116 03:49:59.121717  507257 start.go:233] waiting for cluster config update ...
	I0116 03:49:59.121730  507257 start.go:242] writing updated cluster config ...
	I0116 03:49:59.122058  507257 ssh_runner.go:195] Run: rm -f paused
	I0116 03:49:59.177472  507257 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I0116 03:49:59.179801  507257 out.go:177] * Done! kubectl is now configured to use "embed-certs-615980" cluster and "default" namespace by default
	I0116 03:49:55.324439  507510 system_pods.go:86] 4 kube-system pods found
	I0116 03:49:55.324471  507510 system_pods.go:89] "coredns-5644d7b6d9-h85tj" [6ad3270c-e6c5-4ced-800f-f6e7960097ac] Running
	I0116 03:49:55.324477  507510 system_pods.go:89] "kube-proxy-rc8xt" [433f07f2-79e8-48f6-945a-af3dc0060920] Running
	I0116 03:49:55.324484  507510 system_pods.go:89] "metrics-server-74d5856cc6-stvzf" [92ed6941-1071-4757-9279-144187442f64] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:49:55.324489  507510 system_pods.go:89] "storage-provisioner" [c45ac226-3063-4d53-8a3a-dccca6e8cade] Running
	I0116 03:49:55.324509  507510 retry.go:31] will retry after 1.168298317s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0116 03:49:56.500050  507510 system_pods.go:86] 4 kube-system pods found
	I0116 03:49:56.500090  507510 system_pods.go:89] "coredns-5644d7b6d9-h85tj" [6ad3270c-e6c5-4ced-800f-f6e7960097ac] Running
	I0116 03:49:56.500098  507510 system_pods.go:89] "kube-proxy-rc8xt" [433f07f2-79e8-48f6-945a-af3dc0060920] Running
	I0116 03:49:56.500111  507510 system_pods.go:89] "metrics-server-74d5856cc6-stvzf" [92ed6941-1071-4757-9279-144187442f64] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:49:56.500118  507510 system_pods.go:89] "storage-provisioner" [c45ac226-3063-4d53-8a3a-dccca6e8cade] Running
	I0116 03:49:56.500142  507510 retry.go:31] will retry after 1.453657977s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0116 03:49:57.961220  507510 system_pods.go:86] 4 kube-system pods found
	I0116 03:49:57.961248  507510 system_pods.go:89] "coredns-5644d7b6d9-h85tj" [6ad3270c-e6c5-4ced-800f-f6e7960097ac] Running
	I0116 03:49:57.961254  507510 system_pods.go:89] "kube-proxy-rc8xt" [433f07f2-79e8-48f6-945a-af3dc0060920] Running
	I0116 03:49:57.961261  507510 system_pods.go:89] "metrics-server-74d5856cc6-stvzf" [92ed6941-1071-4757-9279-144187442f64] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:49:57.961266  507510 system_pods.go:89] "storage-provisioner" [c45ac226-3063-4d53-8a3a-dccca6e8cade] Running
	I0116 03:49:57.961286  507510 retry.go:31] will retry after 1.763969687s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0116 03:49:59.731086  507510 system_pods.go:86] 4 kube-system pods found
	I0116 03:49:59.731112  507510 system_pods.go:89] "coredns-5644d7b6d9-h85tj" [6ad3270c-e6c5-4ced-800f-f6e7960097ac] Running
	I0116 03:49:59.731117  507510 system_pods.go:89] "kube-proxy-rc8xt" [433f07f2-79e8-48f6-945a-af3dc0060920] Running
	I0116 03:49:59.731123  507510 system_pods.go:89] "metrics-server-74d5856cc6-stvzf" [92ed6941-1071-4757-9279-144187442f64] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:49:59.731129  507510 system_pods.go:89] "storage-provisioner" [c45ac226-3063-4d53-8a3a-dccca6e8cade] Running
	I0116 03:49:59.731147  507510 retry.go:31] will retry after 3.185395035s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0116 03:50:02.922897  507510 system_pods.go:86] 4 kube-system pods found
	I0116 03:50:02.922934  507510 system_pods.go:89] "coredns-5644d7b6d9-h85tj" [6ad3270c-e6c5-4ced-800f-f6e7960097ac] Running
	I0116 03:50:02.922944  507510 system_pods.go:89] "kube-proxy-rc8xt" [433f07f2-79e8-48f6-945a-af3dc0060920] Running
	I0116 03:50:02.922954  507510 system_pods.go:89] "metrics-server-74d5856cc6-stvzf" [92ed6941-1071-4757-9279-144187442f64] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:50:02.922961  507510 system_pods.go:89] "storage-provisioner" [c45ac226-3063-4d53-8a3a-dccca6e8cade] Running
	I0116 03:50:02.922985  507510 retry.go:31] will retry after 4.049428323s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0116 03:50:06.978002  507510 system_pods.go:86] 4 kube-system pods found
	I0116 03:50:06.978029  507510 system_pods.go:89] "coredns-5644d7b6d9-h85tj" [6ad3270c-e6c5-4ced-800f-f6e7960097ac] Running
	I0116 03:50:06.978034  507510 system_pods.go:89] "kube-proxy-rc8xt" [433f07f2-79e8-48f6-945a-af3dc0060920] Running
	I0116 03:50:06.978040  507510 system_pods.go:89] "metrics-server-74d5856cc6-stvzf" [92ed6941-1071-4757-9279-144187442f64] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:50:06.978045  507510 system_pods.go:89] "storage-provisioner" [c45ac226-3063-4d53-8a3a-dccca6e8cade] Running
	I0116 03:50:06.978063  507510 retry.go:31] will retry after 4.626513574s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0116 03:50:11.610464  507510 system_pods.go:86] 4 kube-system pods found
	I0116 03:50:11.610499  507510 system_pods.go:89] "coredns-5644d7b6d9-h85tj" [6ad3270c-e6c5-4ced-800f-f6e7960097ac] Running
	I0116 03:50:11.610507  507510 system_pods.go:89] "kube-proxy-rc8xt" [433f07f2-79e8-48f6-945a-af3dc0060920] Running
	I0116 03:50:11.610517  507510 system_pods.go:89] "metrics-server-74d5856cc6-stvzf" [92ed6941-1071-4757-9279-144187442f64] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:50:11.610524  507510 system_pods.go:89] "storage-provisioner" [c45ac226-3063-4d53-8a3a-dccca6e8cade] Running
	I0116 03:50:11.610550  507510 retry.go:31] will retry after 4.683195792s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0116 03:50:16.298843  507510 system_pods.go:86] 4 kube-system pods found
	I0116 03:50:16.298873  507510 system_pods.go:89] "coredns-5644d7b6d9-h85tj" [6ad3270c-e6c5-4ced-800f-f6e7960097ac] Running
	I0116 03:50:16.298879  507510 system_pods.go:89] "kube-proxy-rc8xt" [433f07f2-79e8-48f6-945a-af3dc0060920] Running
	I0116 03:50:16.298888  507510 system_pods.go:89] "metrics-server-74d5856cc6-stvzf" [92ed6941-1071-4757-9279-144187442f64] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:50:16.298892  507510 system_pods.go:89] "storage-provisioner" [c45ac226-3063-4d53-8a3a-dccca6e8cade] Running
	I0116 03:50:16.298913  507510 retry.go:31] will retry after 8.214175219s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0116 03:50:24.520982  507510 system_pods.go:86] 5 kube-system pods found
	I0116 03:50:24.521020  507510 system_pods.go:89] "coredns-5644d7b6d9-h85tj" [6ad3270c-e6c5-4ced-800f-f6e7960097ac] Running
	I0116 03:50:24.521029  507510 system_pods.go:89] "etcd-old-k8s-version-696770" [255a0b2a-df41-4621-8918-36e1b0c25c24] Pending
	I0116 03:50:24.521033  507510 system_pods.go:89] "kube-proxy-rc8xt" [433f07f2-79e8-48f6-945a-af3dc0060920] Running
	I0116 03:50:24.521040  507510 system_pods.go:89] "metrics-server-74d5856cc6-stvzf" [92ed6941-1071-4757-9279-144187442f64] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:50:24.521045  507510 system_pods.go:89] "storage-provisioner" [c45ac226-3063-4d53-8a3a-dccca6e8cade] Running
	I0116 03:50:24.521067  507510 retry.go:31] will retry after 9.626598035s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0116 03:50:34.155753  507510 system_pods.go:86] 5 kube-system pods found
	I0116 03:50:34.155790  507510 system_pods.go:89] "coredns-5644d7b6d9-h85tj" [6ad3270c-e6c5-4ced-800f-f6e7960097ac] Running
	I0116 03:50:34.155798  507510 system_pods.go:89] "etcd-old-k8s-version-696770" [255a0b2a-df41-4621-8918-36e1b0c25c24] Running
	I0116 03:50:34.155805  507510 system_pods.go:89] "kube-proxy-rc8xt" [433f07f2-79e8-48f6-945a-af3dc0060920] Running
	I0116 03:50:34.155815  507510 system_pods.go:89] "metrics-server-74d5856cc6-stvzf" [92ed6941-1071-4757-9279-144187442f64] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:50:34.155822  507510 system_pods.go:89] "storage-provisioner" [c45ac226-3063-4d53-8a3a-dccca6e8cade] Running
	I0116 03:50:34.155849  507510 retry.go:31] will retry after 13.760629262s: missing components: kube-apiserver, kube-controller-manager, kube-scheduler
	I0116 03:50:47.923537  507510 system_pods.go:86] 7 kube-system pods found
	I0116 03:50:47.923571  507510 system_pods.go:89] "coredns-5644d7b6d9-h85tj" [6ad3270c-e6c5-4ced-800f-f6e7960097ac] Running
	I0116 03:50:47.923577  507510 system_pods.go:89] "etcd-old-k8s-version-696770" [255a0b2a-df41-4621-8918-36e1b0c25c24] Running
	I0116 03:50:47.923582  507510 system_pods.go:89] "kube-apiserver-old-k8s-version-696770" [c682b257-d00b-4b4c-8089-cda1b9da538c] Running
	I0116 03:50:47.923585  507510 system_pods.go:89] "kube-proxy-rc8xt" [433f07f2-79e8-48f6-945a-af3dc0060920] Running
	I0116 03:50:47.923589  507510 system_pods.go:89] "kube-scheduler-old-k8s-version-696770" [af271425-aec7-45d9-97c5-9a033f13a41e] Running
	I0116 03:50:47.923599  507510 system_pods.go:89] "metrics-server-74d5856cc6-stvzf" [92ed6941-1071-4757-9279-144187442f64] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:50:47.923603  507510 system_pods.go:89] "storage-provisioner" [c45ac226-3063-4d53-8a3a-dccca6e8cade] Running
	I0116 03:50:47.923621  507510 retry.go:31] will retry after 15.810378345s: missing components: kube-controller-manager
	I0116 03:51:03.742786  507510 system_pods.go:86] 8 kube-system pods found
	I0116 03:51:03.742819  507510 system_pods.go:89] "coredns-5644d7b6d9-h85tj" [6ad3270c-e6c5-4ced-800f-f6e7960097ac] Running
	I0116 03:51:03.742825  507510 system_pods.go:89] "etcd-old-k8s-version-696770" [255a0b2a-df41-4621-8918-36e1b0c25c24] Running
	I0116 03:51:03.742830  507510 system_pods.go:89] "kube-apiserver-old-k8s-version-696770" [c682b257-d00b-4b4c-8089-cda1b9da538c] Running
	I0116 03:51:03.742835  507510 system_pods.go:89] "kube-controller-manager-old-k8s-version-696770" [87b5ef82-182e-458d-b521-05a36d3d031b] Running
	I0116 03:51:03.742838  507510 system_pods.go:89] "kube-proxy-rc8xt" [433f07f2-79e8-48f6-945a-af3dc0060920] Running
	I0116 03:51:03.742842  507510 system_pods.go:89] "kube-scheduler-old-k8s-version-696770" [af271425-aec7-45d9-97c5-9a033f13a41e] Running
	I0116 03:51:03.742849  507510 system_pods.go:89] "metrics-server-74d5856cc6-stvzf" [92ed6941-1071-4757-9279-144187442f64] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:51:03.742854  507510 system_pods.go:89] "storage-provisioner" [c45ac226-3063-4d53-8a3a-dccca6e8cade] Running
	I0116 03:51:03.742865  507510 system_pods.go:126] duration metric: took 1m13.061883389s to wait for k8s-apps to be running ...
	I0116 03:51:03.742872  507510 system_svc.go:44] waiting for kubelet service to be running ....
	I0116 03:51:03.742921  507510 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 03:51:03.761399  507510 system_svc.go:56] duration metric: took 18.514586ms WaitForService to wait for kubelet.
	I0116 03:51:03.761433  507510 kubeadm.go:581] duration metric: took 1m18.22783177s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0116 03:51:03.761461  507510 node_conditions.go:102] verifying NodePressure condition ...
	I0116 03:51:03.765716  507510 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 03:51:03.765760  507510 node_conditions.go:123] node cpu capacity is 2
	I0116 03:51:03.765777  507510 node_conditions.go:105] duration metric: took 4.309124ms to run NodePressure ...
	I0116 03:51:03.765794  507510 start.go:228] waiting for startup goroutines ...
	I0116 03:51:03.765803  507510 start.go:233] waiting for cluster config update ...
	I0116 03:51:03.765817  507510 start.go:242] writing updated cluster config ...
	I0116 03:51:03.766160  507510 ssh_runner.go:195] Run: rm -f paused
	I0116 03:51:03.822502  507510 start.go:600] kubectl: 1.29.0, cluster: 1.16.0 (minor skew: 13)
	I0116 03:51:03.824687  507510 out.go:177] 
	W0116 03:51:03.826162  507510 out.go:239] ! /usr/local/bin/kubectl is version 1.29.0, which may have incompatibilities with Kubernetes 1.16.0.
	I0116 03:51:03.827659  507510 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I0116 03:51:03.829229  507510 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-696770" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	-- Journal begins at Tue 2024-01-16 03:43:14 UTC, ends at Tue 2024-01-16 03:57:34 UTC. --
	Jan 16 03:57:34 no-preload-666547 crio[708]: time="2024-01-16 03:57:34.088126414Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705377454088105659,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97830,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=81f6c1a3-b929-43aa-9c2b-60b72e1a010e name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 03:57:34 no-preload-666547 crio[708]: time="2024-01-16 03:57:34.088647082Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=85b47ce0-d3d1-4e9a-a756-3f201c3cc7f0 name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 03:57:34 no-preload-666547 crio[708]: time="2024-01-16 03:57:34.088694558Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=85b47ce0-d3d1-4e9a-a756-3f201c3cc7f0 name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 03:57:34 no-preload-666547 crio[708]: time="2024-01-16 03:57:34.088917989Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b8effe57dcc580d7342341d0d35cd5e26c09ec7ad9caa9eef6f0cd1d2dac7cd9,PodSandboxId:0e795dcf8bdf3a6454fa74aa6c979dedb736fe886bb6577315992cb4b9c012ae,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1705376654821364357,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2aefa743-29a1-416e-be78-70088fafa6ae,},Annotations:map[string]string{io.kubernetes.container.hash: 94ee9ba2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c13ef036a1014a6bbc5c179a0889fb6ff615988713589da58240b7c637adf687,PodSandboxId:7aeadfa43aff8374db2de3bea11ab2f9e1af5b636830272eed8e50690bf6d19b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1705376653300473303,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-lr95b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15dc0b11-f7ec-4729-bbfa-79b9649fbad6,},Annotations:map[string]string{io.kubernetes.container.hash: 8f017cc4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"}
,{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7164c1b7732cf6d54642cb9ffc8d9f16696adc0e428c3fa16cf914420d73e1d,PodSandboxId:9caee2186036cf5d1af54ff074972975db598d01c1c01fc542bece51e9dfc11e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1705376646449512147,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: f4e1ba45-217d-41d5-b583-2f60044879bc,},Annotations:map[string]string{io.kubernetes.container.hash: 19213996,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59754e94eb3cf3fecfedb9077fbad2ecc2618c351fa9bfa3faed0d780fdd9ecb,PodSandboxId:9caee2186036cf5d1af54ff074972975db598d01c1c01fc542bece51e9dfc11e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_EXITED,CreatedAt:1705376645286742316,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: f4e1ba45-217d-41d5-b583-2f60044879bc,},Annotations:map[string]string{io.kubernetes.container.hash: 19213996,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eba2964f029ac61d9cfb59dc548ae05c832f9b23713f51924291fbfc5de985ed,PodSandboxId:4f18d218883ecd1534290daa913264acbf65c6e4a8ad219b1d044c0f6d74ab50,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:eba727be6ca4938cb3deec2eb6b7767e33995b2083144b23a8bbd138515b0dac,State:CONTAINER_RUNNING,CreatedAt:1705376645196881524,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dcmrn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e91c96f-cbc
5-424d-a09e-06e34bf7a2e2,},Annotations:map[string]string{io.kubernetes.container.hash: 97531c65,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33381edd7dded1b5ed158738429b4a7f1f03a6171f2af0832bb5a9864c950725,PodSandboxId:c8e26467ca147bef4373910a371d91fd745bfd4245dc6376ea28d683d6cb2355,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:a9aa934e24e72ecc1829948966f500cf16b2faa6b63de15f1b6de03cc074812f,State:CONTAINER_RUNNING,CreatedAt:1705376639199150218,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-666547,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2443bec62d62ae9acf
9e06442ec207b,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01aaf51cd40b9137f2395404bb15b7372927f81dc8a4e884600dc8cba9bbeb8e,PodSandboxId:0f9fe038b55a26455f4590da34c8e63e98329432435798e09fcfb15225cc873e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:061a72677dc6dc85cdb47cf61f4453c9be173c19c3d48e04b1d7a25f9b405fe7,State:CONTAINER_RUNNING,CreatedAt:1705376639067435928,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-666547,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5216174c445390e2fea097e8be444c01,},Annotations:map[string]string{io.ku
bernetes.container.hash: 54326c6b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:802d4c55aa04370ff079f09dfe4670f5dac1142ca6db880afa17be75f1d20a76,PodSandboxId:44000096b31d5b12f18dfbffbab8b31fb45b919c2f1d37d67b235b97d02cf247,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:a5352bf791d1f99a5175e68039fe9bf100ca3dfd2400901a3a5e9bf0c8b03203,State:CONTAINER_RUNNING,CreatedAt:1705376638959826759,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-666547,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55f2773f8d96731e38a7898f4239f269,},Annotation
s:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de79f87bc28449077936386f603cc5df933f63455bc96cf6ff7e7c6e4bd32bc4,PodSandboxId:7958f1d33200c86dba5755a1cc3afdc2e3f5ef304384d144976b0b39972f197e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:1d6d10016794014c57a966c3c40b351e068202efa97e3fec2d3387198e0cbad0,State:CONTAINER_RUNNING,CreatedAt:1705376638560110741,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-666547,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed86a5f0d67f31d8a75b6d9733aaf4df,},Annotations:map[strin
g]string{io.kubernetes.container.hash: f03ae34,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=85b47ce0-d3d1-4e9a-a756-3f201c3cc7f0 name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 03:57:34 no-preload-666547 crio[708]: time="2024-01-16 03:57:34.134323089Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=88fa97e3-11c3-4da9-8d39-6b1b5e0294ba name=/runtime.v1.RuntimeService/Version
	Jan 16 03:57:34 no-preload-666547 crio[708]: time="2024-01-16 03:57:34.134411960Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=88fa97e3-11c3-4da9-8d39-6b1b5e0294ba name=/runtime.v1.RuntimeService/Version
	Jan 16 03:57:34 no-preload-666547 crio[708]: time="2024-01-16 03:57:34.136063648Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=c5a4b676-d1cb-4ded-bf68-2b47db5ff4ed name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 03:57:34 no-preload-666547 crio[708]: time="2024-01-16 03:57:34.136422034Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705377454136406324,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97830,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=c5a4b676-d1cb-4ded-bf68-2b47db5ff4ed name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 03:57:34 no-preload-666547 crio[708]: time="2024-01-16 03:57:34.137020804Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=b04afae0-d63c-4294-9e3b-460ec6bbe9de name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 03:57:34 no-preload-666547 crio[708]: time="2024-01-16 03:57:34.137068780Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=b04afae0-d63c-4294-9e3b-460ec6bbe9de name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 03:57:34 no-preload-666547 crio[708]: time="2024-01-16 03:57:34.137249826Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b8effe57dcc580d7342341d0d35cd5e26c09ec7ad9caa9eef6f0cd1d2dac7cd9,PodSandboxId:0e795dcf8bdf3a6454fa74aa6c979dedb736fe886bb6577315992cb4b9c012ae,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1705376654821364357,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2aefa743-29a1-416e-be78-70088fafa6ae,},Annotations:map[string]string{io.kubernetes.container.hash: 94ee9ba2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c13ef036a1014a6bbc5c179a0889fb6ff615988713589da58240b7c637adf687,PodSandboxId:7aeadfa43aff8374db2de3bea11ab2f9e1af5b636830272eed8e50690bf6d19b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1705376653300473303,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-lr95b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15dc0b11-f7ec-4729-bbfa-79b9649fbad6,},Annotations:map[string]string{io.kubernetes.container.hash: 8f017cc4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"}
,{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7164c1b7732cf6d54642cb9ffc8d9f16696adc0e428c3fa16cf914420d73e1d,PodSandboxId:9caee2186036cf5d1af54ff074972975db598d01c1c01fc542bece51e9dfc11e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1705376646449512147,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: f4e1ba45-217d-41d5-b583-2f60044879bc,},Annotations:map[string]string{io.kubernetes.container.hash: 19213996,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59754e94eb3cf3fecfedb9077fbad2ecc2618c351fa9bfa3faed0d780fdd9ecb,PodSandboxId:9caee2186036cf5d1af54ff074972975db598d01c1c01fc542bece51e9dfc11e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_EXITED,CreatedAt:1705376645286742316,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: f4e1ba45-217d-41d5-b583-2f60044879bc,},Annotations:map[string]string{io.kubernetes.container.hash: 19213996,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eba2964f029ac61d9cfb59dc548ae05c832f9b23713f51924291fbfc5de985ed,PodSandboxId:4f18d218883ecd1534290daa913264acbf65c6e4a8ad219b1d044c0f6d74ab50,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:eba727be6ca4938cb3deec2eb6b7767e33995b2083144b23a8bbd138515b0dac,State:CONTAINER_RUNNING,CreatedAt:1705376645196881524,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dcmrn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e91c96f-cbc
5-424d-a09e-06e34bf7a2e2,},Annotations:map[string]string{io.kubernetes.container.hash: 97531c65,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33381edd7dded1b5ed158738429b4a7f1f03a6171f2af0832bb5a9864c950725,PodSandboxId:c8e26467ca147bef4373910a371d91fd745bfd4245dc6376ea28d683d6cb2355,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:a9aa934e24e72ecc1829948966f500cf16b2faa6b63de15f1b6de03cc074812f,State:CONTAINER_RUNNING,CreatedAt:1705376639199150218,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-666547,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2443bec62d62ae9acf
9e06442ec207b,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01aaf51cd40b9137f2395404bb15b7372927f81dc8a4e884600dc8cba9bbeb8e,PodSandboxId:0f9fe038b55a26455f4590da34c8e63e98329432435798e09fcfb15225cc873e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:061a72677dc6dc85cdb47cf61f4453c9be173c19c3d48e04b1d7a25f9b405fe7,State:CONTAINER_RUNNING,CreatedAt:1705376639067435928,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-666547,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5216174c445390e2fea097e8be444c01,},Annotations:map[string]string{io.ku
bernetes.container.hash: 54326c6b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:802d4c55aa04370ff079f09dfe4670f5dac1142ca6db880afa17be75f1d20a76,PodSandboxId:44000096b31d5b12f18dfbffbab8b31fb45b919c2f1d37d67b235b97d02cf247,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:a5352bf791d1f99a5175e68039fe9bf100ca3dfd2400901a3a5e9bf0c8b03203,State:CONTAINER_RUNNING,CreatedAt:1705376638959826759,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-666547,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55f2773f8d96731e38a7898f4239f269,},Annotation
s:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de79f87bc28449077936386f603cc5df933f63455bc96cf6ff7e7c6e4bd32bc4,PodSandboxId:7958f1d33200c86dba5755a1cc3afdc2e3f5ef304384d144976b0b39972f197e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:1d6d10016794014c57a966c3c40b351e068202efa97e3fec2d3387198e0cbad0,State:CONTAINER_RUNNING,CreatedAt:1705376638560110741,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-666547,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed86a5f0d67f31d8a75b6d9733aaf4df,},Annotations:map[strin
g]string{io.kubernetes.container.hash: f03ae34,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=b04afae0-d63c-4294-9e3b-460ec6bbe9de name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 03:57:34 no-preload-666547 crio[708]: time="2024-01-16 03:57:34.182322356Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=ef3fba52-5c36-45dd-b83f-8832a9fe5d44 name=/runtime.v1.RuntimeService/Version
	Jan 16 03:57:34 no-preload-666547 crio[708]: time="2024-01-16 03:57:34.182397482Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=ef3fba52-5c36-45dd-b83f-8832a9fe5d44 name=/runtime.v1.RuntimeService/Version
	Jan 16 03:57:34 no-preload-666547 crio[708]: time="2024-01-16 03:57:34.183759749Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=766da74f-befd-4d24-822c-1c336603737a name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 03:57:34 no-preload-666547 crio[708]: time="2024-01-16 03:57:34.184199476Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705377454184185213,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97830,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=766da74f-befd-4d24-822c-1c336603737a name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 03:57:34 no-preload-666547 crio[708]: time="2024-01-16 03:57:34.196123648Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=d23ed1f7-dda3-44c7-999a-27fe81f763f8 name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 03:57:34 no-preload-666547 crio[708]: time="2024-01-16 03:57:34.196221292Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=d23ed1f7-dda3-44c7-999a-27fe81f763f8 name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 03:57:34 no-preload-666547 crio[708]: time="2024-01-16 03:57:34.196818920Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b8effe57dcc580d7342341d0d35cd5e26c09ec7ad9caa9eef6f0cd1d2dac7cd9,PodSandboxId:0e795dcf8bdf3a6454fa74aa6c979dedb736fe886bb6577315992cb4b9c012ae,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1705376654821364357,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2aefa743-29a1-416e-be78-70088fafa6ae,},Annotations:map[string]string{io.kubernetes.container.hash: 94ee9ba2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c13ef036a1014a6bbc5c179a0889fb6ff615988713589da58240b7c637adf687,PodSandboxId:7aeadfa43aff8374db2de3bea11ab2f9e1af5b636830272eed8e50690bf6d19b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1705376653300473303,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-lr95b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15dc0b11-f7ec-4729-bbfa-79b9649fbad6,},Annotations:map[string]string{io.kubernetes.container.hash: 8f017cc4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"}
,{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7164c1b7732cf6d54642cb9ffc8d9f16696adc0e428c3fa16cf914420d73e1d,PodSandboxId:9caee2186036cf5d1af54ff074972975db598d01c1c01fc542bece51e9dfc11e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1705376646449512147,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: f4e1ba45-217d-41d5-b583-2f60044879bc,},Annotations:map[string]string{io.kubernetes.container.hash: 19213996,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59754e94eb3cf3fecfedb9077fbad2ecc2618c351fa9bfa3faed0d780fdd9ecb,PodSandboxId:9caee2186036cf5d1af54ff074972975db598d01c1c01fc542bece51e9dfc11e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_EXITED,CreatedAt:1705376645286742316,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: f4e1ba45-217d-41d5-b583-2f60044879bc,},Annotations:map[string]string{io.kubernetes.container.hash: 19213996,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eba2964f029ac61d9cfb59dc548ae05c832f9b23713f51924291fbfc5de985ed,PodSandboxId:4f18d218883ecd1534290daa913264acbf65c6e4a8ad219b1d044c0f6d74ab50,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:eba727be6ca4938cb3deec2eb6b7767e33995b2083144b23a8bbd138515b0dac,State:CONTAINER_RUNNING,CreatedAt:1705376645196881524,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dcmrn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e91c96f-cbc
5-424d-a09e-06e34bf7a2e2,},Annotations:map[string]string{io.kubernetes.container.hash: 97531c65,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33381edd7dded1b5ed158738429b4a7f1f03a6171f2af0832bb5a9864c950725,PodSandboxId:c8e26467ca147bef4373910a371d91fd745bfd4245dc6376ea28d683d6cb2355,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:a9aa934e24e72ecc1829948966f500cf16b2faa6b63de15f1b6de03cc074812f,State:CONTAINER_RUNNING,CreatedAt:1705376639199150218,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-666547,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2443bec62d62ae9acf
9e06442ec207b,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01aaf51cd40b9137f2395404bb15b7372927f81dc8a4e884600dc8cba9bbeb8e,PodSandboxId:0f9fe038b55a26455f4590da34c8e63e98329432435798e09fcfb15225cc873e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:061a72677dc6dc85cdb47cf61f4453c9be173c19c3d48e04b1d7a25f9b405fe7,State:CONTAINER_RUNNING,CreatedAt:1705376639067435928,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-666547,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5216174c445390e2fea097e8be444c01,},Annotations:map[string]string{io.ku
bernetes.container.hash: 54326c6b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:802d4c55aa04370ff079f09dfe4670f5dac1142ca6db880afa17be75f1d20a76,PodSandboxId:44000096b31d5b12f18dfbffbab8b31fb45b919c2f1d37d67b235b97d02cf247,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:a5352bf791d1f99a5175e68039fe9bf100ca3dfd2400901a3a5e9bf0c8b03203,State:CONTAINER_RUNNING,CreatedAt:1705376638959826759,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-666547,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55f2773f8d96731e38a7898f4239f269,},Annotation
s:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de79f87bc28449077936386f603cc5df933f63455bc96cf6ff7e7c6e4bd32bc4,PodSandboxId:7958f1d33200c86dba5755a1cc3afdc2e3f5ef304384d144976b0b39972f197e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:1d6d10016794014c57a966c3c40b351e068202efa97e3fec2d3387198e0cbad0,State:CONTAINER_RUNNING,CreatedAt:1705376638560110741,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-666547,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed86a5f0d67f31d8a75b6d9733aaf4df,},Annotations:map[strin
g]string{io.kubernetes.container.hash: f03ae34,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=d23ed1f7-dda3-44c7-999a-27fe81f763f8 name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 03:57:34 no-preload-666547 crio[708]: time="2024-01-16 03:57:34.254799478Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=6f046eee-10e5-421b-8434-9379bda70825 name=/runtime.v1.RuntimeService/Version
	Jan 16 03:57:34 no-preload-666547 crio[708]: time="2024-01-16 03:57:34.254916817Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=6f046eee-10e5-421b-8434-9379bda70825 name=/runtime.v1.RuntimeService/Version
	Jan 16 03:57:34 no-preload-666547 crio[708]: time="2024-01-16 03:57:34.256814389Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=e9b70970-4d72-49bc-95dd-3cdffa4378c8 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 03:57:34 no-preload-666547 crio[708]: time="2024-01-16 03:57:34.257226635Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705377454257212410,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97830,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=e9b70970-4d72-49bc-95dd-3cdffa4378c8 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 03:57:34 no-preload-666547 crio[708]: time="2024-01-16 03:57:34.258344650Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=a84b522a-3c57-4288-8402-e6803c31b1dd name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 03:57:34 no-preload-666547 crio[708]: time="2024-01-16 03:57:34.258407291Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=a84b522a-3c57-4288-8402-e6803c31b1dd name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 03:57:34 no-preload-666547 crio[708]: time="2024-01-16 03:57:34.258723480Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b8effe57dcc580d7342341d0d35cd5e26c09ec7ad9caa9eef6f0cd1d2dac7cd9,PodSandboxId:0e795dcf8bdf3a6454fa74aa6c979dedb736fe886bb6577315992cb4b9c012ae,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1705376654821364357,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2aefa743-29a1-416e-be78-70088fafa6ae,},Annotations:map[string]string{io.kubernetes.container.hash: 94ee9ba2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c13ef036a1014a6bbc5c179a0889fb6ff615988713589da58240b7c637adf687,PodSandboxId:7aeadfa43aff8374db2de3bea11ab2f9e1af5b636830272eed8e50690bf6d19b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1705376653300473303,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-lr95b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15dc0b11-f7ec-4729-bbfa-79b9649fbad6,},Annotations:map[string]string{io.kubernetes.container.hash: 8f017cc4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"}
,{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7164c1b7732cf6d54642cb9ffc8d9f16696adc0e428c3fa16cf914420d73e1d,PodSandboxId:9caee2186036cf5d1af54ff074972975db598d01c1c01fc542bece51e9dfc11e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1705376646449512147,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: f4e1ba45-217d-41d5-b583-2f60044879bc,},Annotations:map[string]string{io.kubernetes.container.hash: 19213996,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59754e94eb3cf3fecfedb9077fbad2ecc2618c351fa9bfa3faed0d780fdd9ecb,PodSandboxId:9caee2186036cf5d1af54ff074972975db598d01c1c01fc542bece51e9dfc11e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_EXITED,CreatedAt:1705376645286742316,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: f4e1ba45-217d-41d5-b583-2f60044879bc,},Annotations:map[string]string{io.kubernetes.container.hash: 19213996,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eba2964f029ac61d9cfb59dc548ae05c832f9b23713f51924291fbfc5de985ed,PodSandboxId:4f18d218883ecd1534290daa913264acbf65c6e4a8ad219b1d044c0f6d74ab50,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:eba727be6ca4938cb3deec2eb6b7767e33995b2083144b23a8bbd138515b0dac,State:CONTAINER_RUNNING,CreatedAt:1705376645196881524,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dcmrn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e91c96f-cbc
5-424d-a09e-06e34bf7a2e2,},Annotations:map[string]string{io.kubernetes.container.hash: 97531c65,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33381edd7dded1b5ed158738429b4a7f1f03a6171f2af0832bb5a9864c950725,PodSandboxId:c8e26467ca147bef4373910a371d91fd745bfd4245dc6376ea28d683d6cb2355,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:a9aa934e24e72ecc1829948966f500cf16b2faa6b63de15f1b6de03cc074812f,State:CONTAINER_RUNNING,CreatedAt:1705376639199150218,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-666547,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2443bec62d62ae9acf
9e06442ec207b,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01aaf51cd40b9137f2395404bb15b7372927f81dc8a4e884600dc8cba9bbeb8e,PodSandboxId:0f9fe038b55a26455f4590da34c8e63e98329432435798e09fcfb15225cc873e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:061a72677dc6dc85cdb47cf61f4453c9be173c19c3d48e04b1d7a25f9b405fe7,State:CONTAINER_RUNNING,CreatedAt:1705376639067435928,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-666547,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5216174c445390e2fea097e8be444c01,},Annotations:map[string]string{io.ku
bernetes.container.hash: 54326c6b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:802d4c55aa04370ff079f09dfe4670f5dac1142ca6db880afa17be75f1d20a76,PodSandboxId:44000096b31d5b12f18dfbffbab8b31fb45b919c2f1d37d67b235b97d02cf247,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:a5352bf791d1f99a5175e68039fe9bf100ca3dfd2400901a3a5e9bf0c8b03203,State:CONTAINER_RUNNING,CreatedAt:1705376638959826759,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-666547,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55f2773f8d96731e38a7898f4239f269,},Annotation
s:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de79f87bc28449077936386f603cc5df933f63455bc96cf6ff7e7c6e4bd32bc4,PodSandboxId:7958f1d33200c86dba5755a1cc3afdc2e3f5ef304384d144976b0b39972f197e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:1d6d10016794014c57a966c3c40b351e068202efa97e3fec2d3387198e0cbad0,State:CONTAINER_RUNNING,CreatedAt:1705376638560110741,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-666547,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed86a5f0d67f31d8a75b6d9733aaf4df,},Annotations:map[strin
g]string{io.kubernetes.container.hash: f03ae34,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=a84b522a-3c57-4288-8402-e6803c31b1dd name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	b8effe57dcc58       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   0e795dcf8bdf3       busybox
	c13ef036a1014       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago      Running             coredns                   1                   7aeadfa43aff8       coredns-76f75df574-lr95b
	b7164c1b7732c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Running             storage-provisioner       3                   9caee2186036c       storage-provisioner
	59754e94eb3cf       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       2                   9caee2186036c       storage-provisioner
	eba2964f029ac       cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834                                      13 minutes ago      Running             kube-proxy                1                   4f18d218883ec       kube-proxy-dcmrn
	33381edd7dded       4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210                                      13 minutes ago      Running             kube-scheduler            1                   c8e26467ca147       kube-scheduler-no-preload-666547
	01aaf51cd40b9       a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7                                      13 minutes ago      Running             etcd                      1                   0f9fe038b55a2       etcd-no-preload-666547
	802d4c55aa043       d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d                                      13 minutes ago      Running             kube-controller-manager   1                   44000096b31d5       kube-controller-manager-no-preload-666547
	de79f87bc2844       bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f                                      13 minutes ago      Running             kube-apiserver            1                   7958f1d33200c       kube-apiserver-no-preload-666547
	
	
	==> coredns [c13ef036a1014a6bbc5c179a0889fb6ff615988713589da58240b7c637adf687] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:32801 - 5485 "HINFO IN 1722860781792914362.6159803807488865474. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012158586s
	
	
	==> describe nodes <==
	Name:               no-preload-666547
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-666547
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6e8fa5f64d0e7272be43ff25ed3826261f0a2578
	                    minikube.k8s.io/name=no-preload-666547
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_16T03_35_21_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Jan 2024 03:35:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-666547
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Jan 2024 03:57:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Jan 2024 03:54:46 +0000   Tue, 16 Jan 2024 03:35:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Jan 2024 03:54:46 +0000   Tue, 16 Jan 2024 03:35:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Jan 2024 03:54:46 +0000   Tue, 16 Jan 2024 03:35:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Jan 2024 03:54:46 +0000   Tue, 16 Jan 2024 03:44:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.103
	  Hostname:    no-preload-666547
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 9c1e40cee5004b05ba377b950d8ae425
	  System UUID:                9c1e40ce-e500-4b05-ba37-7b950d8ae425
	  Boot ID:                    9cfa70da-65ac-486c-ae2b-6c40e448f263
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.29.0-rc.2
	  Kube-Proxy Version:         v1.29.0-rc.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 coredns-76f75df574-lr95b                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     22m
	  kube-system                 etcd-no-preload-666547                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         22m
	  kube-system                 kube-apiserver-no-preload-666547             250m (12%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-controller-manager-no-preload-666547    200m (10%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-proxy-dcmrn                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-scheduler-no-preload-666547             100m (5%)     0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 metrics-server-57f55c9bc5-78vfj              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         21m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 21m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  Starting                 22m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  22m (x8 over 22m)  kubelet          Node no-preload-666547 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22m (x8 over 22m)  kubelet          Node no-preload-666547 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22m (x7 over 22m)  kubelet          Node no-preload-666547 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientPID     22m                kubelet          Node no-preload-666547 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  22m                kubelet          Node no-preload-666547 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22m                kubelet          Node no-preload-666547 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                22m                kubelet          Node no-preload-666547 status is now: NodeReady
	  Normal  Starting                 22m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           22m                node-controller  Node no-preload-666547 event: Registered Node no-preload-666547 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node no-preload-666547 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node no-preload-666547 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node no-preload-666547 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node no-preload-666547 event: Registered Node no-preload-666547 in Controller
	
	
	==> dmesg <==
	[Jan16 03:43] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.069486] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.432487] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.513884] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.156831] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.460432] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.499600] systemd-fstab-generator[635]: Ignoring "noauto" for root device
	[  +0.110367] systemd-fstab-generator[646]: Ignoring "noauto" for root device
	[  +0.154550] systemd-fstab-generator[659]: Ignoring "noauto" for root device
	[  +0.129392] systemd-fstab-generator[670]: Ignoring "noauto" for root device
	[  +0.249042] systemd-fstab-generator[694]: Ignoring "noauto" for root device
	[ +29.681110] systemd-fstab-generator[1321]: Ignoring "noauto" for root device
	[Jan16 03:44] kauditd_printk_skb: 19 callbacks suppressed
	[  +1.405784] hrtimer: interrupt took 2824011 ns
	
	
	==> etcd [01aaf51cd40b9137f2395404bb15b7372927f81dc8a4e884600dc8cba9bbeb8e] <==
	{"level":"info","ts":"2024-01-16T03:44:01.976105Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"836b637e1db3e16e","local-member-attributes":"{Name:no-preload-666547 ClientURLs:[https://192.168.39.103:2379]}","request-path":"/0/members/836b637e1db3e16e/attributes","cluster-id":"58a1f21afce1a625","publish-timeout":"7s"}
	{"level":"info","ts":"2024-01-16T03:44:01.976134Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-16T03:44:01.976444Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-01-16T03:44:01.976507Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-01-16T03:44:01.976149Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-16T03:44:01.980748Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-01-16T03:44:01.983245Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.103:2379"}
	{"level":"warn","ts":"2024-01-16T03:44:18.740021Z","caller":"etcdserver/v3_server.go:897","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":16244076007415259520,"retry-timeout":"500ms"}
	{"level":"info","ts":"2024-01-16T03:44:18.837943Z","caller":"traceutil/trace.go:171","msg":"trace[146986157] transaction","detail":"{read_only:false; response_revision:601; number_of_response:1; }","duration":"825.750902ms","start":"2024-01-16T03:44:18.011551Z","end":"2024-01-16T03:44:18.837302Z","steps":["trace[146986157] 'process raft request'  (duration: 824.840142ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-16T03:44:18.839048Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-16T03:44:18.011527Z","time spent":"826.602609ms","remote":"127.0.0.1:57432","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":5422,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/etcd-no-preload-666547\" mod_revision:499 > success:<request_put:<key:\"/registry/pods/kube-system/etcd-no-preload-666547\" value_size:5365 >> failure:<request_range:<key:\"/registry/pods/kube-system/etcd-no-preload-666547\" > >"}
	{"level":"warn","ts":"2024-01-16T03:44:19.426726Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"461.972682ms","expected-duration":"100ms","prefix":"","request":"header:<ID:16244076007415259522 > lease_revoke:<id:616e8d10566be24b>","response":"size:28"}
	{"level":"info","ts":"2024-01-16T03:44:19.426843Z","caller":"traceutil/trace.go:171","msg":"trace[1835654471] linearizableReadLoop","detail":"{readStateIndex:639; appliedIndex:637; }","duration":"1.187362277s","start":"2024-01-16T03:44:18.239469Z","end":"2024-01-16T03:44:19.426831Z","steps":["trace[1835654471] 'read index received'  (duration: 597.143781ms)","trace[1835654471] 'applied index is now lower than readState.Index'  (duration: 590.217714ms)"],"step_count":2}
	{"level":"warn","ts":"2024-01-16T03:44:19.426959Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"1.18753464s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-no-preload-666547\" ","response":"range_response_count:1 size:5437"}
	{"level":"info","ts":"2024-01-16T03:44:19.427044Z","caller":"traceutil/trace.go:171","msg":"trace[482028468] range","detail":"{range_begin:/registry/pods/kube-system/etcd-no-preload-666547; range_end:; response_count:1; response_revision:601; }","duration":"1.187668348s","start":"2024-01-16T03:44:18.239368Z","end":"2024-01-16T03:44:19.427036Z","steps":["trace[482028468] 'agreement among raft nodes before linearized reading'  (duration: 1.187533625s)"],"step_count":1}
	{"level":"warn","ts":"2024-01-16T03:44:19.427073Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-16T03:44:18.239353Z","time spent":"1.187712721s","remote":"127.0.0.1:57432","response type":"/etcdserverpb.KV/Range","request count":0,"request size":51,"response count":1,"response size":5460,"request content":"key:\"/registry/pods/kube-system/etcd-no-preload-666547\" "}
	{"level":"warn","ts":"2024-01-16T03:44:19.427212Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"582.552768ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-01-16T03:44:19.427259Z","caller":"traceutil/trace.go:171","msg":"trace[1839323139] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:601; }","duration":"582.635465ms","start":"2024-01-16T03:44:18.844613Z","end":"2024-01-16T03:44:19.427248Z","steps":["trace[1839323139] 'agreement among raft nodes before linearized reading'  (duration: 582.545923ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-16T03:44:19.427285Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-16T03:44:18.844596Z","time spent":"582.683283ms","remote":"127.0.0.1:57382","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"info","ts":"2024-01-16T03:44:19.935524Z","caller":"traceutil/trace.go:171","msg":"trace[1490785773] transaction","detail":"{read_only:false; response_revision:602; number_of_response:1; }","duration":"211.600552ms","start":"2024-01-16T03:44:19.723903Z","end":"2024-01-16T03:44:19.935504Z","steps":["trace[1490785773] 'process raft request'  (duration: 211.496267ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-16T03:44:20.248791Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"215.881473ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/no-preload-666547\" ","response":"range_response_count:1 size:4441"}
	{"level":"info","ts":"2024-01-16T03:44:20.248957Z","caller":"traceutil/trace.go:171","msg":"trace[830674530] range","detail":"{range_begin:/registry/minions/no-preload-666547; range_end:; response_count:1; response_revision:602; }","duration":"216.061167ms","start":"2024-01-16T03:44:20.032879Z","end":"2024-01-16T03:44:20.24894Z","steps":["trace[830674530] 'range keys from in-memory index tree'  (duration: 215.769347ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-16T03:44:42.348184Z","caller":"traceutil/trace.go:171","msg":"trace[1812864166] transaction","detail":"{read_only:false; response_revision:627; number_of_response:1; }","duration":"201.73351ms","start":"2024-01-16T03:44:42.146423Z","end":"2024-01-16T03:44:42.348156Z","steps":["trace[1812864166] 'process raft request'  (duration: 201.486143ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-16T03:54:02.029354Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":856}
	{"level":"info","ts":"2024-01-16T03:54:02.033764Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":856,"took":"3.283065ms","hash":4264023929}
	{"level":"info","ts":"2024-01-16T03:54:02.033922Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4264023929,"revision":856,"compact-revision":-1}
	
	
	==> kernel <==
	 03:57:34 up 14 min,  0 users,  load average: 0.32, 0.22, 0.14
	Linux no-preload-666547 5.10.57 #1 SMP Thu Dec 28 22:04:21 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [de79f87bc28449077936386f603cc5df933f63455bc96cf6ff7e7c6e4bd32bc4] <==
	I0116 03:52:04.509821       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0116 03:54:03.512438       1 handler_proxy.go:93] no RequestInfo found in the context
	E0116 03:54:03.512818       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0116 03:54:04.513410       1 handler_proxy.go:93] no RequestInfo found in the context
	E0116 03:54:04.513523       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0116 03:54:04.513551       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0116 03:54:04.513673       1 handler_proxy.go:93] no RequestInfo found in the context
	E0116 03:54:04.513753       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0116 03:54:04.514727       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0116 03:55:04.513906       1 handler_proxy.go:93] no RequestInfo found in the context
	E0116 03:55:04.514102       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0116 03:55:04.514121       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0116 03:55:04.515423       1 handler_proxy.go:93] no RequestInfo found in the context
	E0116 03:55:04.515538       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0116 03:55:04.515550       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0116 03:57:04.514661       1 handler_proxy.go:93] no RequestInfo found in the context
	E0116 03:57:04.514775       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0116 03:57:04.514790       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0116 03:57:04.516110       1 handler_proxy.go:93] no RequestInfo found in the context
	E0116 03:57:04.516237       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0116 03:57:04.516250       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [802d4c55aa04370ff079f09dfe4670f5dac1142ca6db880afa17be75f1d20a76] <==
	I0116 03:51:47.087751       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0116 03:52:16.558795       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0116 03:52:17.099922       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0116 03:52:46.565350       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0116 03:52:47.108533       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0116 03:53:16.572894       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0116 03:53:17.117249       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0116 03:53:46.578318       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0116 03:53:47.127145       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0116 03:54:16.590392       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0116 03:54:17.136504       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0116 03:54:46.596156       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0116 03:54:47.149738       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0116 03:55:16.603279       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0116 03:55:17.158154       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0116 03:55:24.371586       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="294.07µs"
	I0116 03:55:39.371737       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="211.683µs"
	E0116 03:55:46.609728       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0116 03:55:47.167277       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0116 03:56:16.622636       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0116 03:56:17.176229       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0116 03:56:46.628872       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0116 03:56:47.185318       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0116 03:57:16.634950       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0116 03:57:17.193639       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [eba2964f029ac61d9cfb59dc548ae05c832f9b23713f51924291fbfc5de985ed] <==
	I0116 03:44:05.509703       1 server_others.go:72] "Using iptables proxy"
	I0116 03:44:05.584745       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.39.103"]
	I0116 03:44:05.724180       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0116 03:44:05.724250       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0116 03:44:05.724277       1 server_others.go:168] "Using iptables Proxier"
	I0116 03:44:05.736231       1 proxier.go:246] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0116 03:44:05.737555       1 server.go:865] "Version info" version="v1.29.0-rc.2"
	I0116 03:44:05.737921       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0116 03:44:05.739244       1 config.go:188] "Starting service config controller"
	I0116 03:44:05.740540       1 config.go:97] "Starting endpoint slice config controller"
	I0116 03:44:05.740676       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0116 03:44:05.742500       1 config.go:315] "Starting node config controller"
	I0116 03:44:05.742647       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0116 03:44:05.743524       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0116 03:44:05.746050       1 shared_informer.go:318] Caches are synced for service config
	I0116 03:44:05.841519       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0116 03:44:05.843188       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [33381edd7dded1b5ed158738429b4a7f1f03a6171f2af0832bb5a9864c950725] <==
	I0116 03:44:00.780699       1 serving.go:380] Generated self-signed cert in-memory
	W0116 03:44:03.501740       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0116 03:44:03.508832       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0116 03:44:03.509405       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0116 03:44:03.509440       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0116 03:44:03.552526       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.29.0-rc.2"
	I0116 03:44:03.553299       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0116 03:44:03.556538       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0116 03:44:03.556713       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0116 03:44:03.557413       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0116 03:44:03.557514       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0116 03:44:03.657325       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Tue 2024-01-16 03:43:14 UTC, ends at Tue 2024-01-16 03:57:34 UTC. --
	Jan 16 03:54:57 no-preload-666547 kubelet[1327]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 16 03:54:57 no-preload-666547 kubelet[1327]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 16 03:55:09 no-preload-666547 kubelet[1327]: E0116 03:55:09.365422    1327 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jan 16 03:55:09 no-preload-666547 kubelet[1327]: E0116 03:55:09.366074    1327 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jan 16 03:55:09 no-preload-666547 kubelet[1327]: E0116 03:55:09.366916    1327 kuberuntime_manager.go:1262] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-xxw6h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Pro
beHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:Fi
le,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-57f55c9bc5-78vfj_kube-system(dbd2d3b2-ec0f-4253-8549-7c4299522c37): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jan 16 03:55:09 no-preload-666547 kubelet[1327]: E0116 03:55:09.367248    1327 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-57f55c9bc5-78vfj" podUID="dbd2d3b2-ec0f-4253-8549-7c4299522c37"
	Jan 16 03:55:24 no-preload-666547 kubelet[1327]: E0116 03:55:24.351219    1327 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-78vfj" podUID="dbd2d3b2-ec0f-4253-8549-7c4299522c37"
	Jan 16 03:55:39 no-preload-666547 kubelet[1327]: E0116 03:55:39.352256    1327 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-78vfj" podUID="dbd2d3b2-ec0f-4253-8549-7c4299522c37"
	Jan 16 03:55:51 no-preload-666547 kubelet[1327]: E0116 03:55:51.351800    1327 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-78vfj" podUID="dbd2d3b2-ec0f-4253-8549-7c4299522c37"
	Jan 16 03:55:57 no-preload-666547 kubelet[1327]: E0116 03:55:57.376393    1327 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 16 03:55:57 no-preload-666547 kubelet[1327]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 16 03:55:57 no-preload-666547 kubelet[1327]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 16 03:55:57 no-preload-666547 kubelet[1327]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 16 03:56:02 no-preload-666547 kubelet[1327]: E0116 03:56:02.350635    1327 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-78vfj" podUID="dbd2d3b2-ec0f-4253-8549-7c4299522c37"
	Jan 16 03:56:14 no-preload-666547 kubelet[1327]: E0116 03:56:14.353448    1327 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-78vfj" podUID="dbd2d3b2-ec0f-4253-8549-7c4299522c37"
	Jan 16 03:56:26 no-preload-666547 kubelet[1327]: E0116 03:56:26.350626    1327 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-78vfj" podUID="dbd2d3b2-ec0f-4253-8549-7c4299522c37"
	Jan 16 03:56:41 no-preload-666547 kubelet[1327]: E0116 03:56:41.351472    1327 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-78vfj" podUID="dbd2d3b2-ec0f-4253-8549-7c4299522c37"
	Jan 16 03:56:52 no-preload-666547 kubelet[1327]: E0116 03:56:52.351536    1327 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-78vfj" podUID="dbd2d3b2-ec0f-4253-8549-7c4299522c37"
	Jan 16 03:56:57 no-preload-666547 kubelet[1327]: E0116 03:56:57.375353    1327 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 16 03:56:57 no-preload-666547 kubelet[1327]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 16 03:56:57 no-preload-666547 kubelet[1327]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 16 03:56:57 no-preload-666547 kubelet[1327]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 16 03:57:07 no-preload-666547 kubelet[1327]: E0116 03:57:07.351504    1327 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-78vfj" podUID="dbd2d3b2-ec0f-4253-8549-7c4299522c37"
	Jan 16 03:57:18 no-preload-666547 kubelet[1327]: E0116 03:57:18.350464    1327 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-78vfj" podUID="dbd2d3b2-ec0f-4253-8549-7c4299522c37"
	Jan 16 03:57:33 no-preload-666547 kubelet[1327]: E0116 03:57:33.352259    1327 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-78vfj" podUID="dbd2d3b2-ec0f-4253-8549-7c4299522c37"
	
	
	==> storage-provisioner [59754e94eb3cf3fecfedb9077fbad2ecc2618c351fa9bfa3faed0d780fdd9ecb] <==
	I0116 03:44:05.591829       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0116 03:44:05.600646       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> storage-provisioner [b7164c1b7732cf6d54642cb9ffc8d9f16696adc0e428c3fa16cf914420d73e1d] <==
	I0116 03:44:06.558064       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0116 03:44:06.578351       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0116 03:44:06.578637       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0116 03:44:23.994278       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0116 03:44:23.996617       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-666547_13a9b6af-a490-4224-8262-906d79382357!
	I0116 03:44:23.994551       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"46fad042-e0ea-4026-b131-dabb6c9f6332", APIVersion:"v1", ResourceVersion:"607", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-666547_13a9b6af-a490-4224-8262-906d79382357 became leader
	I0116 03:44:24.097641       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-666547_13a9b6af-a490-4224-8262-906d79382357!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-666547 -n no-preload-666547
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-666547 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-78vfj
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-666547 describe pod metrics-server-57f55c9bc5-78vfj
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-666547 describe pod metrics-server-57f55c9bc5-78vfj: exit status 1 (80.550017ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-78vfj" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-666547 describe pod metrics-server-57f55c9bc5-78vfj: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (543.56s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (543.54s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0116 03:49:18.182978  475478 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/ingress-addon-legacy-873808/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-434445 -n default-k8s-diff-port-434445
start_stop_delete_test.go:274: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-01-16 03:58:05.290668558 +0000 UTC m=+5029.228891176
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-434445 -n default-k8s-diff-port-434445
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-434445 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-434445 logs -n 25: (1.797709569s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p no-preload-666547                                   | no-preload-666547            | jenkins | v1.32.0 | 16 Jan 24 03:33 UTC | 16 Jan 24 03:35 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| ssh     | cert-options-977008 ssh                                | cert-options-977008          | jenkins | v1.32.0 | 16 Jan 24 03:34 UTC | 16 Jan 24 03:34 UTC |
	|         | openssl x509 -text -noout -in                          |                              |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                  |                              |         |         |                     |                     |
	| ssh     | -p cert-options-977008 -- sudo                         | cert-options-977008          | jenkins | v1.32.0 | 16 Jan 24 03:34 UTC | 16 Jan 24 03:34 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                              |         |         |                     |                     |
	| delete  | -p cert-options-977008                                 | cert-options-977008          | jenkins | v1.32.0 | 16 Jan 24 03:34 UTC | 16 Jan 24 03:34 UTC |
	| start   | -p embed-certs-615980                                  | embed-certs-615980           | jenkins | v1.32.0 | 16 Jan 24 03:34 UTC | 16 Jan 24 03:35 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-690771                              | cert-expiration-690771       | jenkins | v1.32.0 | 16 Jan 24 03:35 UTC | 16 Jan 24 03:36 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-615980            | embed-certs-615980           | jenkins | v1.32.0 | 16 Jan 24 03:35 UTC | 16 Jan 24 03:35 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-615980                                  | embed-certs-615980           | jenkins | v1.32.0 | 16 Jan 24 03:35 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-666547             | no-preload-666547            | jenkins | v1.32.0 | 16 Jan 24 03:35 UTC | 16 Jan 24 03:35 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-666547                                   | no-preload-666547            | jenkins | v1.32.0 | 16 Jan 24 03:35 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-696770        | old-k8s-version-696770       | jenkins | v1.32.0 | 16 Jan 24 03:36 UTC | 16 Jan 24 03:36 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-696770                              | old-k8s-version-696770       | jenkins | v1.32.0 | 16 Jan 24 03:36 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-690771                              | cert-expiration-690771       | jenkins | v1.32.0 | 16 Jan 24 03:36 UTC | 16 Jan 24 03:36 UTC |
	| delete  | -p                                                     | disable-driver-mounts-673948 | jenkins | v1.32.0 | 16 Jan 24 03:36 UTC | 16 Jan 24 03:36 UTC |
	|         | disable-driver-mounts-673948                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-434445 | jenkins | v1.32.0 | 16 Jan 24 03:36 UTC | 16 Jan 24 03:37 UTC |
	|         | default-k8s-diff-port-434445                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-434445  | default-k8s-diff-port-434445 | jenkins | v1.32.0 | 16 Jan 24 03:37 UTC | 16 Jan 24 03:37 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-434445 | jenkins | v1.32.0 | 16 Jan 24 03:37 UTC |                     |
	|         | default-k8s-diff-port-434445                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-615980                 | embed-certs-615980           | jenkins | v1.32.0 | 16 Jan 24 03:38 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-666547                  | no-preload-666547            | jenkins | v1.32.0 | 16 Jan 24 03:38 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-615980                                  | embed-certs-615980           | jenkins | v1.32.0 | 16 Jan 24 03:38 UTC | 16 Jan 24 03:49 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| start   | -p no-preload-666547                                   | no-preload-666547            | jenkins | v1.32.0 | 16 Jan 24 03:38 UTC | 16 Jan 24 03:48 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-696770             | old-k8s-version-696770       | jenkins | v1.32.0 | 16 Jan 24 03:38 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-696770                              | old-k8s-version-696770       | jenkins | v1.32.0 | 16 Jan 24 03:38 UTC | 16 Jan 24 03:51 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-434445       | default-k8s-diff-port-434445 | jenkins | v1.32.0 | 16 Jan 24 03:40 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-434445 | jenkins | v1.32.0 | 16 Jan 24 03:40 UTC | 16 Jan 24 03:49 UTC |
	|         | default-k8s-diff-port-434445                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/16 03:40:16
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0116 03:40:16.605622  507889 out.go:296] Setting OutFile to fd 1 ...
	I0116 03:40:16.605883  507889 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 03:40:16.605892  507889 out.go:309] Setting ErrFile to fd 2...
	I0116 03:40:16.605897  507889 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 03:40:16.606102  507889 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17965-468241/.minikube/bin
	I0116 03:40:16.606721  507889 out.go:303] Setting JSON to false
	I0116 03:40:16.607781  507889 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":15769,"bootTime":1705360648,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1048-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0116 03:40:16.607865  507889 start.go:138] virtualization: kvm guest
	I0116 03:40:16.610269  507889 out.go:177] * [default-k8s-diff-port-434445] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0116 03:40:16.611862  507889 out.go:177]   - MINIKUBE_LOCATION=17965
	I0116 03:40:16.611954  507889 notify.go:220] Checking for updates...
	I0116 03:40:16.613306  507889 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0116 03:40:16.615094  507889 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17965-468241/kubeconfig
	I0116 03:40:16.617044  507889 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17965-468241/.minikube
	I0116 03:40:16.618932  507889 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0116 03:40:16.621159  507889 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0116 03:40:16.623616  507889 config.go:182] Loaded profile config "default-k8s-diff-port-434445": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 03:40:16.624273  507889 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:40:16.624363  507889 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:40:16.640065  507889 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45633
	I0116 03:40:16.640642  507889 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:40:16.641273  507889 main.go:141] libmachine: Using API Version  1
	I0116 03:40:16.641301  507889 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:40:16.641696  507889 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:40:16.641901  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .DriverName
	I0116 03:40:16.642227  507889 driver.go:392] Setting default libvirt URI to qemu:///system
	I0116 03:40:16.642599  507889 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:40:16.642684  507889 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:40:16.658198  507889 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41877
	I0116 03:40:16.658657  507889 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:40:16.659207  507889 main.go:141] libmachine: Using API Version  1
	I0116 03:40:16.659233  507889 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:40:16.659588  507889 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:40:16.659844  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .DriverName
	I0116 03:40:16.698770  507889 out.go:177] * Using the kvm2 driver based on existing profile
	I0116 03:40:16.700307  507889 start.go:298] selected driver: kvm2
	I0116 03:40:16.700330  507889 start.go:902] validating driver "kvm2" against &{Name:default-k8s-diff-port-434445 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-434445 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.50.236 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 03:40:16.700478  507889 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0116 03:40:16.701296  507889 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0116 03:40:16.701389  507889 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17965-468241/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0116 03:40:16.717988  507889 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0116 03:40:16.718426  507889 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0116 03:40:16.718515  507889 cni.go:84] Creating CNI manager for ""
	I0116 03:40:16.718532  507889 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 03:40:16.718547  507889 start_flags.go:321] config:
	{Name:default-k8s-diff-port-434445 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-434445 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.50.236 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 03:40:16.718765  507889 iso.go:125] acquiring lock: {Name:mk2f3231f0eeeb23816ac363851489181398d1c6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0116 03:40:16.721292  507889 out.go:177] * Starting control plane node default-k8s-diff-port-434445 in cluster default-k8s-diff-port-434445
	I0116 03:40:16.722858  507889 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0116 03:40:16.722928  507889 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17965-468241/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0116 03:40:16.722942  507889 cache.go:56] Caching tarball of preloaded images
	I0116 03:40:16.723044  507889 preload.go:174] Found /home/jenkins/minikube-integration/17965-468241/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0116 03:40:16.723057  507889 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0116 03:40:16.723243  507889 profile.go:148] Saving config to /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/default-k8s-diff-port-434445/config.json ...
	I0116 03:40:16.723502  507889 start.go:365] acquiring machines lock for default-k8s-diff-port-434445: {Name:mk901e5fae8c1a578d989b520053c709bdbdcf06 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0116 03:40:22.140399  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:40:25.212385  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:40:31.292386  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:40:34.364375  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:40:40.444398  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:40:43.516372  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:40:49.596388  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:40:52.668394  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:40:58.748342  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:41:01.820436  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:41:07.900338  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:41:10.972410  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:41:17.052384  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:41:20.124427  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:41:26.204371  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:41:29.276361  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:41:35.356391  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:41:38.428383  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:41:44.508324  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:41:47.580377  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:41:53.660360  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:41:56.732377  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:42:02.812345  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:42:05.884406  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:42:11.964398  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:42:15.036469  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:42:21.116391  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:42:24.188397  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:42:30.268400  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:42:33.340416  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:42:39.420405  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:42:42.492396  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:42:48.572396  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:42:51.644367  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:42:57.724419  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:43:00.796427  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:43:03.800669  507339 start.go:369] acquired machines lock for "no-preload-666547" in 4m33.073406767s
	I0116 03:43:03.800732  507339 start.go:96] Skipping create...Using existing machine configuration
	I0116 03:43:03.800744  507339 fix.go:54] fixHost starting: 
	I0116 03:43:03.801330  507339 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:43:03.801381  507339 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:43:03.817309  507339 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46189
	I0116 03:43:03.817841  507339 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:43:03.818376  507339 main.go:141] libmachine: Using API Version  1
	I0116 03:43:03.818403  507339 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:43:03.818801  507339 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:43:03.819049  507339 main.go:141] libmachine: (no-preload-666547) Calling .DriverName
	I0116 03:43:03.819206  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetState
	I0116 03:43:03.821006  507339 fix.go:102] recreateIfNeeded on no-preload-666547: state=Stopped err=<nil>
	I0116 03:43:03.821031  507339 main.go:141] libmachine: (no-preload-666547) Calling .DriverName
	W0116 03:43:03.821210  507339 fix.go:128] unexpected machine state, will restart: <nil>
	I0116 03:43:03.823341  507339 out.go:177] * Restarting existing kvm2 VM for "no-preload-666547" ...
	I0116 03:43:03.824887  507339 main.go:141] libmachine: (no-preload-666547) Calling .Start
	I0116 03:43:03.825070  507339 main.go:141] libmachine: (no-preload-666547) Ensuring networks are active...
	I0116 03:43:03.825806  507339 main.go:141] libmachine: (no-preload-666547) Ensuring network default is active
	I0116 03:43:03.826151  507339 main.go:141] libmachine: (no-preload-666547) Ensuring network mk-no-preload-666547 is active
	I0116 03:43:03.826549  507339 main.go:141] libmachine: (no-preload-666547) Getting domain xml...
	I0116 03:43:03.827209  507339 main.go:141] libmachine: (no-preload-666547) Creating domain...
	I0116 03:43:04.166757  507339 main.go:141] libmachine: (no-preload-666547) Waiting to get IP...
	I0116 03:43:04.167846  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:04.168294  507339 main.go:141] libmachine: (no-preload-666547) DBG | unable to find current IP address of domain no-preload-666547 in network mk-no-preload-666547
	I0116 03:43:04.168400  507339 main.go:141] libmachine: (no-preload-666547) DBG | I0116 03:43:04.168281  508330 retry.go:31] will retry after 236.684347ms: waiting for machine to come up
	I0116 03:43:04.407068  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:04.407590  507339 main.go:141] libmachine: (no-preload-666547) DBG | unable to find current IP address of domain no-preload-666547 in network mk-no-preload-666547
	I0116 03:43:04.407626  507339 main.go:141] libmachine: (no-preload-666547) DBG | I0116 03:43:04.407520  508330 retry.go:31] will retry after 273.512454ms: waiting for machine to come up
	I0116 03:43:04.683173  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:04.683724  507339 main.go:141] libmachine: (no-preload-666547) DBG | unable to find current IP address of domain no-preload-666547 in network mk-no-preload-666547
	I0116 03:43:04.683759  507339 main.go:141] libmachine: (no-preload-666547) DBG | I0116 03:43:04.683652  508330 retry.go:31] will retry after 404.396132ms: waiting for machine to come up
	I0116 03:43:05.089306  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:05.089659  507339 main.go:141] libmachine: (no-preload-666547) DBG | unable to find current IP address of domain no-preload-666547 in network mk-no-preload-666547
	I0116 03:43:05.089687  507339 main.go:141] libmachine: (no-preload-666547) DBG | I0116 03:43:05.089612  508330 retry.go:31] will retry after 373.291662ms: waiting for machine to come up
	I0116 03:43:05.464216  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:05.464745  507339 main.go:141] libmachine: (no-preload-666547) DBG | unable to find current IP address of domain no-preload-666547 in network mk-no-preload-666547
	I0116 03:43:05.464772  507339 main.go:141] libmachine: (no-preload-666547) DBG | I0116 03:43:05.464696  508330 retry.go:31] will retry after 509.048348ms: waiting for machine to come up
	I0116 03:43:03.798483  507257 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0116 03:43:03.798553  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHHostname
	I0116 03:43:03.800507  507257 machine.go:91] provisioned docker machine in 4m37.39429533s
	I0116 03:43:03.800559  507257 fix.go:56] fixHost completed within 4m37.41769564s
	I0116 03:43:03.800568  507257 start.go:83] releasing machines lock for "embed-certs-615980", held for 4m37.417718822s
	W0116 03:43:03.800599  507257 start.go:694] error starting host: provision: host is not running
	W0116 03:43:03.800747  507257 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0116 03:43:03.800759  507257 start.go:709] Will try again in 5 seconds ...
	I0116 03:43:05.975369  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:05.975831  507339 main.go:141] libmachine: (no-preload-666547) DBG | unable to find current IP address of domain no-preload-666547 in network mk-no-preload-666547
	I0116 03:43:05.975864  507339 main.go:141] libmachine: (no-preload-666547) DBG | I0116 03:43:05.975776  508330 retry.go:31] will retry after 631.077965ms: waiting for machine to come up
	I0116 03:43:06.608722  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:06.609133  507339 main.go:141] libmachine: (no-preload-666547) DBG | unable to find current IP address of domain no-preload-666547 in network mk-no-preload-666547
	I0116 03:43:06.609162  507339 main.go:141] libmachine: (no-preload-666547) DBG | I0116 03:43:06.609074  508330 retry.go:31] will retry after 1.047586363s: waiting for machine to come up
	I0116 03:43:07.658264  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:07.658645  507339 main.go:141] libmachine: (no-preload-666547) DBG | unable to find current IP address of domain no-preload-666547 in network mk-no-preload-666547
	I0116 03:43:07.658696  507339 main.go:141] libmachine: (no-preload-666547) DBG | I0116 03:43:07.658591  508330 retry.go:31] will retry after 1.038644854s: waiting for machine to come up
	I0116 03:43:08.698946  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:08.699384  507339 main.go:141] libmachine: (no-preload-666547) DBG | unable to find current IP address of domain no-preload-666547 in network mk-no-preload-666547
	I0116 03:43:08.699411  507339 main.go:141] libmachine: (no-preload-666547) DBG | I0116 03:43:08.699347  508330 retry.go:31] will retry after 1.362032973s: waiting for machine to come up
	I0116 03:43:10.063269  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:10.063764  507339 main.go:141] libmachine: (no-preload-666547) DBG | unable to find current IP address of domain no-preload-666547 in network mk-no-preload-666547
	I0116 03:43:10.063792  507339 main.go:141] libmachine: (no-preload-666547) DBG | I0116 03:43:10.063714  508330 retry.go:31] will retry after 1.432317286s: waiting for machine to come up
	I0116 03:43:08.802803  507257 start.go:365] acquiring machines lock for embed-certs-615980: {Name:mk901e5fae8c1a578d989b520053c709bdbdcf06 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0116 03:43:11.498235  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:11.498714  507339 main.go:141] libmachine: (no-preload-666547) DBG | unable to find current IP address of domain no-preload-666547 in network mk-no-preload-666547
	I0116 03:43:11.498748  507339 main.go:141] libmachine: (no-preload-666547) DBG | I0116 03:43:11.498650  508330 retry.go:31] will retry after 2.490630326s: waiting for machine to come up
	I0116 03:43:13.991256  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:13.991717  507339 main.go:141] libmachine: (no-preload-666547) DBG | unable to find current IP address of domain no-preload-666547 in network mk-no-preload-666547
	I0116 03:43:13.991752  507339 main.go:141] libmachine: (no-preload-666547) DBG | I0116 03:43:13.991662  508330 retry.go:31] will retry after 3.569049736s: waiting for machine to come up
	I0116 03:43:17.565524  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:17.565893  507339 main.go:141] libmachine: (no-preload-666547) DBG | unable to find current IP address of domain no-preload-666547 in network mk-no-preload-666547
	I0116 03:43:17.565916  507339 main.go:141] libmachine: (no-preload-666547) DBG | I0116 03:43:17.565850  508330 retry.go:31] will retry after 2.875259098s: waiting for machine to come up
	I0116 03:43:20.443998  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:20.444472  507339 main.go:141] libmachine: (no-preload-666547) DBG | unable to find current IP address of domain no-preload-666547 in network mk-no-preload-666547
	I0116 03:43:20.444495  507339 main.go:141] libmachine: (no-preload-666547) DBG | I0116 03:43:20.444438  508330 retry.go:31] will retry after 4.319647454s: waiting for machine to come up
	I0116 03:43:24.765311  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:24.765836  507339 main.go:141] libmachine: (no-preload-666547) Found IP for machine: 192.168.39.103
	I0116 03:43:24.765862  507339 main.go:141] libmachine: (no-preload-666547) Reserving static IP address...
	I0116 03:43:24.765879  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has current primary IP address 192.168.39.103 and MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:24.766413  507339 main.go:141] libmachine: (no-preload-666547) DBG | found host DHCP lease matching {name: "no-preload-666547", mac: "52:54:00:4e:5f:03", ip: "192.168.39.103"} in network mk-no-preload-666547: {Iface:virbr2 ExpiryTime:2024-01-16 04:43:15 +0000 UTC Type:0 Mac:52:54:00:4e:5f:03 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:no-preload-666547 Clientid:01:52:54:00:4e:5f:03}
	I0116 03:43:24.766543  507339 main.go:141] libmachine: (no-preload-666547) Reserved static IP address: 192.168.39.103
	I0116 03:43:24.766575  507339 main.go:141] libmachine: (no-preload-666547) DBG | skip adding static IP to network mk-no-preload-666547 - found existing host DHCP lease matching {name: "no-preload-666547", mac: "52:54:00:4e:5f:03", ip: "192.168.39.103"}
	I0116 03:43:24.766593  507339 main.go:141] libmachine: (no-preload-666547) DBG | Getting to WaitForSSH function...
	I0116 03:43:24.766607  507339 main.go:141] libmachine: (no-preload-666547) Waiting for SSH to be available...
	I0116 03:43:24.768801  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:24.769145  507339 main.go:141] libmachine: (no-preload-666547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:5f:03", ip: ""} in network mk-no-preload-666547: {Iface:virbr2 ExpiryTime:2024-01-16 04:43:15 +0000 UTC Type:0 Mac:52:54:00:4e:5f:03 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:no-preload-666547 Clientid:01:52:54:00:4e:5f:03}
	I0116 03:43:24.769180  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined IP address 192.168.39.103 and MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:24.769392  507339 main.go:141] libmachine: (no-preload-666547) DBG | Using SSH client type: external
	I0116 03:43:24.769446  507339 main.go:141] libmachine: (no-preload-666547) DBG | Using SSH private key: /home/jenkins/minikube-integration/17965-468241/.minikube/machines/no-preload-666547/id_rsa (-rw-------)
	I0116 03:43:24.769490  507339 main.go:141] libmachine: (no-preload-666547) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.103 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17965-468241/.minikube/machines/no-preload-666547/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0116 03:43:24.769539  507339 main.go:141] libmachine: (no-preload-666547) DBG | About to run SSH command:
	I0116 03:43:24.769557  507339 main.go:141] libmachine: (no-preload-666547) DBG | exit 0
	I0116 03:43:24.860928  507339 main.go:141] libmachine: (no-preload-666547) DBG | SSH cmd err, output: <nil>: 
	I0116 03:43:24.861324  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetConfigRaw
	I0116 03:43:24.862217  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetIP
	I0116 03:43:24.865100  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:24.865468  507339 main.go:141] libmachine: (no-preload-666547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:5f:03", ip: ""} in network mk-no-preload-666547: {Iface:virbr2 ExpiryTime:2024-01-16 04:43:15 +0000 UTC Type:0 Mac:52:54:00:4e:5f:03 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:no-preload-666547 Clientid:01:52:54:00:4e:5f:03}
	I0116 03:43:24.865503  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined IP address 192.168.39.103 and MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:24.865804  507339 profile.go:148] Saving config to /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/no-preload-666547/config.json ...
	I0116 03:43:24.866064  507339 machine.go:88] provisioning docker machine ...
	I0116 03:43:24.866091  507339 main.go:141] libmachine: (no-preload-666547) Calling .DriverName
	I0116 03:43:24.866374  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetMachineName
	I0116 03:43:24.866590  507339 buildroot.go:166] provisioning hostname "no-preload-666547"
	I0116 03:43:24.866613  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetMachineName
	I0116 03:43:24.866795  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHHostname
	I0116 03:43:24.869231  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:24.869587  507339 main.go:141] libmachine: (no-preload-666547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:5f:03", ip: ""} in network mk-no-preload-666547: {Iface:virbr2 ExpiryTime:2024-01-16 04:43:15 +0000 UTC Type:0 Mac:52:54:00:4e:5f:03 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:no-preload-666547 Clientid:01:52:54:00:4e:5f:03}
	I0116 03:43:24.869623  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined IP address 192.168.39.103 and MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:24.869778  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHPort
	I0116 03:43:24.870002  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHKeyPath
	I0116 03:43:24.870168  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHKeyPath
	I0116 03:43:24.870304  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHUsername
	I0116 03:43:24.870455  507339 main.go:141] libmachine: Using SSH client type: native
	I0116 03:43:24.870929  507339 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.103 22 <nil> <nil>}
	I0116 03:43:24.870949  507339 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-666547 && echo "no-preload-666547" | sudo tee /etc/hostname
	I0116 03:43:25.005390  507339 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-666547
	
	I0116 03:43:25.005425  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHHostname
	I0116 03:43:25.008410  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:25.008801  507339 main.go:141] libmachine: (no-preload-666547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:5f:03", ip: ""} in network mk-no-preload-666547: {Iface:virbr2 ExpiryTime:2024-01-16 04:43:15 +0000 UTC Type:0 Mac:52:54:00:4e:5f:03 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:no-preload-666547 Clientid:01:52:54:00:4e:5f:03}
	I0116 03:43:25.008836  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined IP address 192.168.39.103 and MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:25.009007  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHPort
	I0116 03:43:25.009269  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHKeyPath
	I0116 03:43:25.009432  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHKeyPath
	I0116 03:43:25.009561  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHUsername
	I0116 03:43:25.009722  507339 main.go:141] libmachine: Using SSH client type: native
	I0116 03:43:25.010051  507339 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.103 22 <nil> <nil>}
	I0116 03:43:25.010071  507339 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-666547' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-666547/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-666547' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0116 03:43:25.142889  507339 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0116 03:43:25.142928  507339 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17965-468241/.minikube CaCertPath:/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17965-468241/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17965-468241/.minikube}
	I0116 03:43:25.142950  507339 buildroot.go:174] setting up certificates
	I0116 03:43:25.142963  507339 provision.go:83] configureAuth start
	I0116 03:43:25.142973  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetMachineName
	I0116 03:43:25.143294  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetIP
	I0116 03:43:25.146355  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:25.146746  507339 main.go:141] libmachine: (no-preload-666547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:5f:03", ip: ""} in network mk-no-preload-666547: {Iface:virbr2 ExpiryTime:2024-01-16 04:43:15 +0000 UTC Type:0 Mac:52:54:00:4e:5f:03 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:no-preload-666547 Clientid:01:52:54:00:4e:5f:03}
	I0116 03:43:25.146767  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined IP address 192.168.39.103 and MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:25.147063  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHHostname
	I0116 03:43:25.149867  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:25.150231  507339 main.go:141] libmachine: (no-preload-666547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:5f:03", ip: ""} in network mk-no-preload-666547: {Iface:virbr2 ExpiryTime:2024-01-16 04:43:15 +0000 UTC Type:0 Mac:52:54:00:4e:5f:03 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:no-preload-666547 Clientid:01:52:54:00:4e:5f:03}
	I0116 03:43:25.150260  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined IP address 192.168.39.103 and MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:25.150448  507339 provision.go:138] copyHostCerts
	I0116 03:43:25.150531  507339 exec_runner.go:144] found /home/jenkins/minikube-integration/17965-468241/.minikube/cert.pem, removing ...
	I0116 03:43:25.150543  507339 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17965-468241/.minikube/cert.pem
	I0116 03:43:25.150618  507339 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17965-468241/.minikube/cert.pem (1123 bytes)
	I0116 03:43:25.150719  507339 exec_runner.go:144] found /home/jenkins/minikube-integration/17965-468241/.minikube/key.pem, removing ...
	I0116 03:43:25.150729  507339 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17965-468241/.minikube/key.pem
	I0116 03:43:25.150755  507339 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17965-468241/.minikube/key.pem (1679 bytes)
	I0116 03:43:25.150815  507339 exec_runner.go:144] found /home/jenkins/minikube-integration/17965-468241/.minikube/ca.pem, removing ...
	I0116 03:43:25.150823  507339 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17965-468241/.minikube/ca.pem
	I0116 03:43:25.150843  507339 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17965-468241/.minikube/ca.pem (1078 bytes)
	I0116 03:43:25.150888  507339 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17965-468241/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca-key.pem org=jenkins.no-preload-666547 san=[192.168.39.103 192.168.39.103 localhost 127.0.0.1 minikube no-preload-666547]
	I0116 03:43:25.417982  507339 provision.go:172] copyRemoteCerts
	I0116 03:43:25.418059  507339 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0116 03:43:25.418088  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHHostname
	I0116 03:43:25.420908  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:25.421196  507339 main.go:141] libmachine: (no-preload-666547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:5f:03", ip: ""} in network mk-no-preload-666547: {Iface:virbr2 ExpiryTime:2024-01-16 04:43:15 +0000 UTC Type:0 Mac:52:54:00:4e:5f:03 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:no-preload-666547 Clientid:01:52:54:00:4e:5f:03}
	I0116 03:43:25.421235  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined IP address 192.168.39.103 and MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:25.421372  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHPort
	I0116 03:43:25.421609  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHKeyPath
	I0116 03:43:25.421782  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHUsername
	I0116 03:43:25.421952  507339 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/no-preload-666547/id_rsa Username:docker}
	I0116 03:43:25.509876  507339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0116 03:43:25.534885  507339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0116 03:43:25.562593  507339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
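	The three scp calls above place the CA, server certificate, and server key under /etc/docker on the guest. A minimal way to spot-check the SANs baked into that server certificate (an assumed verification step, not part of this run) would be:
	    # Inspect the SAN list of the server cert copied to /etc/docker
	    sudo openssl x509 -in /etc/docker/server.pem -noout -text \
	      | grep -A1 'Subject Alternative Name'
	    # Should list the entries from the san=[...] set in provision.go:112 above:
	    # 192.168.39.103, localhost, 127.0.0.1, minikube, no-preload-666547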
	I0116 03:43:25.590106  507339 provision.go:86] duration metric: configureAuth took 447.124389ms
	I0116 03:43:25.590145  507339 buildroot.go:189] setting minikube options for container-runtime
	I0116 03:43:25.590386  507339 config.go:182] Loaded profile config "no-preload-666547": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0116 03:43:25.590475  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHHostname
	I0116 03:43:25.593695  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:25.594125  507339 main.go:141] libmachine: (no-preload-666547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:5f:03", ip: ""} in network mk-no-preload-666547: {Iface:virbr2 ExpiryTime:2024-01-16 04:43:15 +0000 UTC Type:0 Mac:52:54:00:4e:5f:03 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:no-preload-666547 Clientid:01:52:54:00:4e:5f:03}
	I0116 03:43:25.594182  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined IP address 192.168.39.103 and MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:25.594407  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHPort
	I0116 03:43:25.594661  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHKeyPath
	I0116 03:43:25.594851  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHKeyPath
	I0116 03:43:25.595124  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHUsername
	I0116 03:43:25.595362  507339 main.go:141] libmachine: Using SSH client type: native
	I0116 03:43:25.595735  507339 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.103 22 <nil> <nil>}
	I0116 03:43:25.595753  507339 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0116 03:43:26.177541  507510 start.go:369] acquired machines lock for "old-k8s-version-696770" in 4m36.503560035s
	I0116 03:43:26.177612  507510 start.go:96] Skipping create...Using existing machine configuration
	I0116 03:43:26.177621  507510 fix.go:54] fixHost starting: 
	I0116 03:43:26.178073  507510 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:43:26.178115  507510 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:43:26.194930  507510 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35119
	I0116 03:43:26.195420  507510 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:43:26.195898  507510 main.go:141] libmachine: Using API Version  1
	I0116 03:43:26.195925  507510 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:43:26.196303  507510 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:43:26.196517  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .DriverName
	I0116 03:43:26.196797  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetState
	I0116 03:43:26.198728  507510 fix.go:102] recreateIfNeeded on old-k8s-version-696770: state=Stopped err=<nil>
	I0116 03:43:26.198759  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .DriverName
	W0116 03:43:26.198959  507510 fix.go:128] unexpected machine state, will restart: <nil>
	I0116 03:43:26.201929  507510 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-696770" ...
	I0116 03:43:25.916953  507339 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0116 03:43:25.916987  507339 machine.go:91] provisioned docker machine in 1.05090319s
	I0116 03:43:25.917013  507339 start.go:300] post-start starting for "no-preload-666547" (driver="kvm2")
	I0116 03:43:25.917045  507339 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0116 03:43:25.917070  507339 main.go:141] libmachine: (no-preload-666547) Calling .DriverName
	I0116 03:43:25.917472  507339 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0116 03:43:25.917510  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHHostname
	I0116 03:43:25.920700  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:25.921097  507339 main.go:141] libmachine: (no-preload-666547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:5f:03", ip: ""} in network mk-no-preload-666547: {Iface:virbr2 ExpiryTime:2024-01-16 04:43:15 +0000 UTC Type:0 Mac:52:54:00:4e:5f:03 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:no-preload-666547 Clientid:01:52:54:00:4e:5f:03}
	I0116 03:43:25.921132  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined IP address 192.168.39.103 and MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:25.921386  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHPort
	I0116 03:43:25.921663  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHKeyPath
	I0116 03:43:25.921877  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHUsername
	I0116 03:43:25.922086  507339 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/no-preload-666547/id_rsa Username:docker}
	I0116 03:43:26.011987  507339 ssh_runner.go:195] Run: cat /etc/os-release
	I0116 03:43:26.016777  507339 info.go:137] Remote host: Buildroot 2021.02.12
	I0116 03:43:26.016813  507339 filesync.go:126] Scanning /home/jenkins/minikube-integration/17965-468241/.minikube/addons for local assets ...
	I0116 03:43:26.016889  507339 filesync.go:126] Scanning /home/jenkins/minikube-integration/17965-468241/.minikube/files for local assets ...
	I0116 03:43:26.016985  507339 filesync.go:149] local asset: /home/jenkins/minikube-integration/17965-468241/.minikube/files/etc/ssl/certs/4754782.pem -> 4754782.pem in /etc/ssl/certs
	I0116 03:43:26.017109  507339 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0116 03:43:26.027303  507339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/files/etc/ssl/certs/4754782.pem --> /etc/ssl/certs/4754782.pem (1708 bytes)
	I0116 03:43:26.051806  507339 start.go:303] post-start completed in 134.758948ms
	I0116 03:43:26.051849  507339 fix.go:56] fixHost completed within 22.25110408s
	I0116 03:43:26.051881  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHHostname
	I0116 03:43:26.055165  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:26.055568  507339 main.go:141] libmachine: (no-preload-666547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:5f:03", ip: ""} in network mk-no-preload-666547: {Iface:virbr2 ExpiryTime:2024-01-16 04:43:15 +0000 UTC Type:0 Mac:52:54:00:4e:5f:03 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:no-preload-666547 Clientid:01:52:54:00:4e:5f:03}
	I0116 03:43:26.055605  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined IP address 192.168.39.103 and MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:26.055763  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHPort
	I0116 03:43:26.055983  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHKeyPath
	I0116 03:43:26.056222  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHKeyPath
	I0116 03:43:26.056407  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHUsername
	I0116 03:43:26.056579  507339 main.go:141] libmachine: Using SSH client type: native
	I0116 03:43:26.056930  507339 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.103 22 <nil> <nil>}
	I0116 03:43:26.056948  507339 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0116 03:43:26.177329  507339 main.go:141] libmachine: SSH cmd err, output: <nil>: 1705376606.122912048
	
	I0116 03:43:26.177360  507339 fix.go:206] guest clock: 1705376606.122912048
	I0116 03:43:26.177367  507339 fix.go:219] Guest: 2024-01-16 03:43:26.122912048 +0000 UTC Remote: 2024-01-16 03:43:26.051855053 +0000 UTC m=+295.486361610 (delta=71.056995ms)
	I0116 03:43:26.177424  507339 fix.go:190] guest clock delta is within tolerance: 71.056995ms
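	The command whose placeholders the logger garbles just above is plain `date +%s.%N`; a sketch (assumed, not captured from the run) of the comparison fix.go performs:
	    guest=$(ssh docker@192.168.39.103 'date +%s.%N')   # 1705376606.122912048 in this run
	    host=$(date +%s.%N)                                # local wall clock on the Jenkins host
	    # fix.go takes the host time as the reference and accepts the guest clock when
	    # |guest - host| stays within tolerance (71.056995ms here).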
	I0116 03:43:26.177430  507339 start.go:83] releasing machines lock for "no-preload-666547", held for 22.376720152s
	I0116 03:43:26.177461  507339 main.go:141] libmachine: (no-preload-666547) Calling .DriverName
	I0116 03:43:26.177761  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetIP
	I0116 03:43:26.180783  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:26.181087  507339 main.go:141] libmachine: (no-preload-666547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:5f:03", ip: ""} in network mk-no-preload-666547: {Iface:virbr2 ExpiryTime:2024-01-16 04:43:15 +0000 UTC Type:0 Mac:52:54:00:4e:5f:03 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:no-preload-666547 Clientid:01:52:54:00:4e:5f:03}
	I0116 03:43:26.181117  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined IP address 192.168.39.103 and MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:26.181281  507339 main.go:141] libmachine: (no-preload-666547) Calling .DriverName
	I0116 03:43:26.181876  507339 main.go:141] libmachine: (no-preload-666547) Calling .DriverName
	I0116 03:43:26.182068  507339 main.go:141] libmachine: (no-preload-666547) Calling .DriverName
	I0116 03:43:26.182154  507339 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0116 03:43:26.182203  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHHostname
	I0116 03:43:26.182337  507339 ssh_runner.go:195] Run: cat /version.json
	I0116 03:43:26.182366  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHHostname
	I0116 03:43:26.185253  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:26.185403  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:26.185625  507339 main.go:141] libmachine: (no-preload-666547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:5f:03", ip: ""} in network mk-no-preload-666547: {Iface:virbr2 ExpiryTime:2024-01-16 04:43:15 +0000 UTC Type:0 Mac:52:54:00:4e:5f:03 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:no-preload-666547 Clientid:01:52:54:00:4e:5f:03}
	I0116 03:43:26.185655  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined IP address 192.168.39.103 and MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:26.185807  507339 main.go:141] libmachine: (no-preload-666547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:5f:03", ip: ""} in network mk-no-preload-666547: {Iface:virbr2 ExpiryTime:2024-01-16 04:43:15 +0000 UTC Type:0 Mac:52:54:00:4e:5f:03 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:no-preload-666547 Clientid:01:52:54:00:4e:5f:03}
	I0116 03:43:26.185816  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHPort
	I0116 03:43:26.185855  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined IP address 192.168.39.103 and MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:26.185966  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHPort
	I0116 03:43:26.186041  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHKeyPath
	I0116 03:43:26.186137  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHKeyPath
	I0116 03:43:26.186220  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHUsername
	I0116 03:43:26.186306  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHUsername
	I0116 03:43:26.186383  507339 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/no-preload-666547/id_rsa Username:docker}
	I0116 03:43:26.186428  507339 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/no-preload-666547/id_rsa Username:docker}
	I0116 03:43:26.312441  507339 ssh_runner.go:195] Run: systemctl --version
	I0116 03:43:26.319016  507339 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0116 03:43:26.469427  507339 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0116 03:43:26.475759  507339 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0116 03:43:26.475896  507339 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0116 03:43:26.491920  507339 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0116 03:43:26.491952  507339 start.go:475] detecting cgroup driver to use...
	I0116 03:43:26.492112  507339 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0116 03:43:26.508122  507339 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0116 03:43:26.523664  507339 docker.go:217] disabling cri-docker service (if available) ...
	I0116 03:43:26.523754  507339 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0116 03:43:26.540173  507339 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0116 03:43:26.557370  507339 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0116 03:43:26.685134  507339 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0116 03:43:26.806555  507339 docker.go:233] disabling docker service ...
	I0116 03:43:26.806640  507339 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0116 03:43:26.821910  507339 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0116 03:43:26.836619  507339 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0116 03:43:26.950601  507339 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0116 03:43:27.077586  507339 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0116 03:43:27.091892  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0116 03:43:27.111772  507339 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0116 03:43:27.111856  507339 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 03:43:27.122183  507339 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0116 03:43:27.122261  507339 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 03:43:27.132861  507339 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 03:43:27.144003  507339 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 03:43:27.154747  507339 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
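	The sed edits above only touch three keys; an assumed sketch of the relevant part of the drop-in afterwards (the real file carries additional CRI-O defaults):
	    $ sudo cat /etc/crio/crio.conf.d/02-crio.conf
	    [crio.runtime]
	    cgroup_manager = "cgroupfs"
	    conmon_cgroup = "pod"
	    [crio.image]
	    pause_image = "registry.k8s.io/pause:3.9"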
	I0116 03:43:27.166236  507339 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0116 03:43:27.175337  507339 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0116 03:43:27.175410  507339 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0116 03:43:27.190891  507339 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
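	The sysctl failure above is expected on a fresh boot because br_netfilter is not loaded yet; the sequence the log runs is equivalent to this standalone sketch (assumed, for clarity):
	    sudo sysctl net.bridge.bridge-nf-call-iptables   # fails: /proc key absent until the module loads
	    sudo modprobe br_netfilter                       # load the bridge-netfilter module
	    echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward  # enable IPv4 forwarding for pod traffic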
	I0116 03:43:27.201216  507339 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0116 03:43:27.322701  507339 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0116 03:43:27.504197  507339 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0116 03:43:27.504292  507339 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0116 03:43:27.509879  507339 start.go:543] Will wait 60s for crictl version
	I0116 03:43:27.509972  507339 ssh_runner.go:195] Run: which crictl
	I0116 03:43:27.514555  507339 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0116 03:43:27.556338  507339 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0116 03:43:27.556444  507339 ssh_runner.go:195] Run: crio --version
	I0116 03:43:27.615814  507339 ssh_runner.go:195] Run: crio --version
	I0116 03:43:27.666262  507339 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.24.1 ...
	I0116 03:43:26.203694  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .Start
	I0116 03:43:26.203950  507510 main.go:141] libmachine: (old-k8s-version-696770) Ensuring networks are active...
	I0116 03:43:26.204831  507510 main.go:141] libmachine: (old-k8s-version-696770) Ensuring network default is active
	I0116 03:43:26.205251  507510 main.go:141] libmachine: (old-k8s-version-696770) Ensuring network mk-old-k8s-version-696770 is active
	I0116 03:43:26.205763  507510 main.go:141] libmachine: (old-k8s-version-696770) Getting domain xml...
	I0116 03:43:26.206485  507510 main.go:141] libmachine: (old-k8s-version-696770) Creating domain...
	I0116 03:43:26.558284  507510 main.go:141] libmachine: (old-k8s-version-696770) Waiting to get IP...
	I0116 03:43:26.559270  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:26.559701  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | unable to find current IP address of domain old-k8s-version-696770 in network mk-old-k8s-version-696770
	I0116 03:43:26.559793  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | I0116 03:43:26.559692  508427 retry.go:31] will retry after 243.799089ms: waiting for machine to come up
	I0116 03:43:26.805411  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:26.805914  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | unable to find current IP address of domain old-k8s-version-696770 in network mk-old-k8s-version-696770
	I0116 03:43:26.805948  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | I0116 03:43:26.805846  508427 retry.go:31] will retry after 346.727587ms: waiting for machine to come up
	I0116 03:43:27.154528  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:27.155074  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | unable to find current IP address of domain old-k8s-version-696770 in network mk-old-k8s-version-696770
	I0116 03:43:27.155107  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | I0116 03:43:27.155023  508427 retry.go:31] will retry after 357.633471ms: waiting for machine to come up
	I0116 03:43:27.514870  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:27.515490  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | unable to find current IP address of domain old-k8s-version-696770 in network mk-old-k8s-version-696770
	I0116 03:43:27.515517  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | I0116 03:43:27.515452  508427 retry.go:31] will retry after 582.001218ms: waiting for machine to come up
	I0116 03:43:28.099271  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:28.099783  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | unable to find current IP address of domain old-k8s-version-696770 in network mk-old-k8s-version-696770
	I0116 03:43:28.099817  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | I0116 03:43:28.099735  508427 retry.go:31] will retry after 747.661188ms: waiting for machine to come up
	I0116 03:43:28.849318  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:28.849836  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | unable to find current IP address of domain old-k8s-version-696770 in network mk-old-k8s-version-696770
	I0116 03:43:28.849872  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | I0116 03:43:28.849799  508427 retry.go:31] will retry after 953.610286ms: waiting for machine to come up
	I0116 03:43:27.667889  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetIP
	I0116 03:43:27.671385  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:27.671804  507339 main.go:141] libmachine: (no-preload-666547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:5f:03", ip: ""} in network mk-no-preload-666547: {Iface:virbr2 ExpiryTime:2024-01-16 04:43:15 +0000 UTC Type:0 Mac:52:54:00:4e:5f:03 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:no-preload-666547 Clientid:01:52:54:00:4e:5f:03}
	I0116 03:43:27.671840  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined IP address 192.168.39.103 and MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:27.672113  507339 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0116 03:43:27.676693  507339 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
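	The bash one-liner above rewrites /etc/hosts in place; an assumed step-by-step equivalent, shown only for readability:
	    grep -v $'\thost.minikube.internal$' /etc/hosts > /tmp/h.$$     # drop any stale entry
	    printf '192.168.39.1\thost.minikube.internal\n' >> /tmp/h.$$    # map the libvirt gateway address
	    sudo cp /tmp/h.$$ /etc/hosts                                    # install the updated file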
	I0116 03:43:27.690701  507339 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0116 03:43:27.690748  507339 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 03:43:27.731189  507339 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I0116 03:43:27.731219  507339 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.2 registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 registry.k8s.io/kube-scheduler:v1.29.0-rc.2 registry.k8s.io/kube-proxy:v1.29.0-rc.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0116 03:43:27.731321  507339 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 03:43:27.731358  507339 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I0116 03:43:27.731370  507339 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0116 03:43:27.731404  507339 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0116 03:43:27.731441  507339 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0116 03:43:27.731352  507339 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0116 03:43:27.731322  507339 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0116 03:43:27.731322  507339 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0116 03:43:27.733105  507339 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0116 03:43:27.733119  507339 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 03:43:27.733171  507339 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I0116 03:43:27.733171  507339 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0116 03:43:27.733110  507339 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0116 03:43:27.733118  507339 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0116 03:43:27.733113  507339 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0116 03:43:27.733270  507339 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0116 03:43:27.900005  507339 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I0116 03:43:27.901232  507339 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0116 03:43:27.903964  507339 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0116 03:43:27.907543  507339 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0116 03:43:27.908417  507339 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0116 03:43:27.909137  507339 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0116 03:43:27.953586  507339 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0116 03:43:28.024252  507339 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I0116 03:43:28.024310  507339 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I0116 03:43:28.024366  507339 ssh_runner.go:195] Run: which crictl
	I0116 03:43:28.042716  507339 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 03:43:28.078379  507339 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" does not exist at hash "d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d" in container runtime
	I0116 03:43:28.078438  507339 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0116 03:43:28.078503  507339 ssh_runner.go:195] Run: which crictl
	I0116 03:43:28.179590  507339 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" does not exist at hash "4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210" in container runtime
	I0116 03:43:28.179612  507339 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" does not exist at hash "bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f" in container runtime
	I0116 03:43:28.179661  507339 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0116 03:43:28.179661  507339 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0116 03:43:28.179720  507339 ssh_runner.go:195] Run: which crictl
	I0116 03:43:28.179722  507339 ssh_runner.go:195] Run: which crictl
	I0116 03:43:28.179729  507339 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0116 03:43:28.179750  507339 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0116 03:43:28.179785  507339 ssh_runner.go:195] Run: which crictl
	I0116 03:43:28.179804  507339 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.2" does not exist at hash "cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834" in container runtime
	I0116 03:43:28.179865  507339 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0116 03:43:28.179906  507339 ssh_runner.go:195] Run: which crictl
	I0116 03:43:28.179812  507339 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I0116 03:43:28.179950  507339 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0116 03:43:28.179977  507339 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 03:43:28.180011  507339 ssh_runner.go:195] Run: which crictl
	I0116 03:43:28.180009  507339 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0116 03:43:28.196999  507339 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0116 03:43:28.197021  507339 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0116 03:43:28.197157  507339 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0116 03:43:28.305002  507339 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0116 03:43:28.305117  507339 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17965-468241/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I0116 03:43:28.305044  507339 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 03:43:28.305231  507339 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.10-0
	I0116 03:43:28.317016  507339 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17965-468241/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2
	I0116 03:43:28.317149  507339 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0116 03:43:28.346291  507339 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17965-468241/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2
	I0116 03:43:28.346393  507339 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17965-468241/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2
	I0116 03:43:28.346434  507339 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0116 03:43:28.346518  507339 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0116 03:43:28.346547  507339 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17965-468241/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0116 03:43:28.346598  507339 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.10-0 (exists)
	I0116 03:43:28.346618  507339 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.10-0
	I0116 03:43:28.346631  507339 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0116 03:43:28.346650  507339 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
	I0116 03:43:28.425129  507339 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17965-468241/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0116 03:43:28.425217  507339 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2 (exists)
	I0116 03:43:28.425319  507339 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0116 03:43:28.425317  507339 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2 (exists)
	I0116 03:43:28.425377  507339 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17965-468241/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2
	I0116 03:43:28.425391  507339 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2 (exists)
	I0116 03:43:28.425441  507339 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0116 03:43:29.805277  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:29.805642  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | unable to find current IP address of domain old-k8s-version-696770 in network mk-old-k8s-version-696770
	I0116 03:43:29.805677  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | I0116 03:43:29.805586  508427 retry.go:31] will retry after 734.396993ms: waiting for machine to come up
	I0116 03:43:30.541337  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:30.541794  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | unable to find current IP address of domain old-k8s-version-696770 in network mk-old-k8s-version-696770
	I0116 03:43:30.541828  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | I0116 03:43:30.541741  508427 retry.go:31] will retry after 1.035836118s: waiting for machine to come up
	I0116 03:43:31.579576  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:31.580093  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | unable to find current IP address of domain old-k8s-version-696770 in network mk-old-k8s-version-696770
	I0116 03:43:31.580118  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | I0116 03:43:31.580070  508427 retry.go:31] will retry after 1.723172168s: waiting for machine to come up
	I0116 03:43:33.305247  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:33.305726  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | unable to find current IP address of domain old-k8s-version-696770 in network mk-old-k8s-version-696770
	I0116 03:43:33.305759  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | I0116 03:43:33.305669  508427 retry.go:31] will retry after 1.465747661s: waiting for machine to come up
	I0116 03:43:32.396858  507339 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1: (4.050189724s)
	I0116 03:43:32.396913  507339 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0116 03:43:32.396956  507339 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (3.971489155s)
	I0116 03:43:32.397006  507339 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2 (exists)
	I0116 03:43:32.397028  507339 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (3.971686012s)
	I0116 03:43:32.397043  507339 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0116 03:43:32.397050  507339 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0: (4.050383438s)
	I0116 03:43:32.397063  507339 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17965-468241/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 from cache
	I0116 03:43:32.397093  507339 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0116 03:43:32.397172  507339 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0116 03:43:35.381615  507339 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (2.98440652s)
	I0116 03:43:35.381660  507339 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17965-468241/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 from cache
	I0116 03:43:35.381699  507339 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0116 03:43:35.381759  507339 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0116 03:43:34.773552  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:34.774149  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | unable to find current IP address of domain old-k8s-version-696770 in network mk-old-k8s-version-696770
	I0116 03:43:34.774182  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | I0116 03:43:34.774084  508427 retry.go:31] will retry after 1.94747868s: waiting for machine to come up
	I0116 03:43:36.722855  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:36.723416  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | unable to find current IP address of domain old-k8s-version-696770 in network mk-old-k8s-version-696770
	I0116 03:43:36.723448  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | I0116 03:43:36.723365  508427 retry.go:31] will retry after 2.550966562s: waiting for machine to come up
	I0116 03:43:39.276082  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:39.276671  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | unable to find current IP address of domain old-k8s-version-696770 in network mk-old-k8s-version-696770
	I0116 03:43:39.276710  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | I0116 03:43:39.276608  508427 retry.go:31] will retry after 3.317854993s: waiting for machine to come up
	I0116 03:43:38.162725  507339 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (2.780935577s)
	I0116 03:43:38.162760  507339 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17965-468241/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 from cache
	I0116 03:43:38.162792  507339 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0116 03:43:38.162843  507339 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0116 03:43:39.527575  507339 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (1.36469752s)
	I0116 03:43:39.527612  507339 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17965-468241/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 from cache
	I0116 03:43:39.527639  507339 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0116 03:43:39.527696  507339 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0116 03:43:42.595994  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:42.596424  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | unable to find current IP address of domain old-k8s-version-696770 in network mk-old-k8s-version-696770
	I0116 03:43:42.596458  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | I0116 03:43:42.596364  508427 retry.go:31] will retry after 4.913808783s: waiting for machine to come up
	I0116 03:43:41.690968  507339 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.16323953s)
	I0116 03:43:41.691007  507339 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17965-468241/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0116 03:43:41.691045  507339 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0116 03:43:41.691100  507339 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0116 03:43:43.849988  507339 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (2.158855886s)
	I0116 03:43:43.850023  507339 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17965-468241/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 from cache
	I0116 03:43:43.850052  507339 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0116 03:43:43.850107  507339 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0116 03:43:44.597660  507339 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17965-468241/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0116 03:43:44.597710  507339 cache_images.go:123] Successfully loaded all cached images
	I0116 03:43:44.597715  507339 cache_images.go:92] LoadImages completed in 16.866481277s
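	Once LoadImages completes, all eight cached images are in CRI-O's store on the guest; a quick check that was not part of this run would be:
	    sudo crictl images | grep -E 'kube-apiserver|kube-controller-manager|kube-scheduler|kube-proxy|etcd|coredns|pause|storage-provisioner'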
	I0116 03:43:44.597788  507339 ssh_runner.go:195] Run: crio config
	I0116 03:43:44.658055  507339 cni.go:84] Creating CNI manager for ""
	I0116 03:43:44.658081  507339 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 03:43:44.658104  507339 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0116 03:43:44.658124  507339 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.103 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-666547 NodeName:no-preload-666547 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.103"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.103 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0116 03:43:44.658290  507339 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.103
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-666547"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.103
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.103"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0116 03:43:44.658371  507339 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-666547 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.103
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-666547 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
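The empty ExecStart= line in the drop-in above is standard systemd practice: it clears the ExecStart inherited from kubelet.service before the override sets a new command. A minimal sketch of writing such a drop-in by hand (illustrative only, not minikube's exact template; the binary path and flags mirror the log):

	sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null <<'EOF'
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --config=/var/lib/kubelet/config.yaml --kubeconfig=/etc/kubernetes/kubelet.conf
	EOF
	sudo systemctl daemon-reload && sudo systemctl restart kubelet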
	I0116 03:43:44.658431  507339 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0116 03:43:44.668859  507339 binaries.go:44] Found k8s binaries, skipping transfer
	I0116 03:43:44.668934  507339 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0116 03:43:44.678543  507339 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (382 bytes)
	I0116 03:43:44.694998  507339 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0116 03:43:44.711256  507339 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2109 bytes)
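The 2109-byte file written to /var/tmp/minikube/kubeadm.yaml.new is the multi-document config dumped above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). kubeadm consumes such a file through its --config flag; on a fresh node that would look roughly like the sketch below, although this particular run takes the restart path shown later instead of re-initialising:

	sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new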
	I0116 03:43:44.728203  507339 ssh_runner.go:195] Run: grep 192.168.39.103	control-plane.minikube.internal$ /etc/hosts
	I0116 03:43:44.732219  507339 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.103	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
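The /etc/hosts rewrite above is an idempotent one-liner; expanded for readability (same effect, illustrative formatting only):

	# Drop any stale control-plane.minikube.internal line, then append the current IP.
	# Staging the result in a temp file and copying it back avoids truncating /etc/hosts mid-edit.
	{ grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts
	  echo $'192.168.39.103\tcontrol-plane.minikube.internal'
	} > /tmp/h.$$ && sudo cp /tmp/h.$$ /etc/hosts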
	I0116 03:43:44.744687  507339 certs.go:56] Setting up /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/no-preload-666547 for IP: 192.168.39.103
	I0116 03:43:44.744730  507339 certs.go:190] acquiring lock for shared ca certs: {Name:mkab5aeea3e0ed69f3592dc8f418e07c6075b130 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:43:44.744957  507339 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17965-468241/.minikube/ca.key
	I0116 03:43:44.745014  507339 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17965-468241/.minikube/proxy-client-ca.key
	I0116 03:43:44.745133  507339 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/no-preload-666547/client.key
	I0116 03:43:44.745226  507339 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/no-preload-666547/apiserver.key.f0189397
	I0116 03:43:44.745293  507339 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/no-preload-666547/proxy-client.key
	I0116 03:43:44.745431  507339 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/475478.pem (1338 bytes)
	W0116 03:43:44.745471  507339 certs.go:433] ignoring /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/475478_empty.pem, impossibly tiny 0 bytes
	I0116 03:43:44.745488  507339 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca-key.pem (1679 bytes)
	I0116 03:43:44.745541  507339 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem (1078 bytes)
	I0116 03:43:44.745582  507339 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/cert.pem (1123 bytes)
	I0116 03:43:44.745620  507339 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/key.pem (1679 bytes)
	I0116 03:43:44.745687  507339 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17965-468241/.minikube/files/etc/ssl/certs/4754782.pem (1708 bytes)
	I0116 03:43:44.746558  507339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/no-preload-666547/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0116 03:43:44.770889  507339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/no-preload-666547/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0116 03:43:44.795150  507339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/no-preload-666547/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0116 03:43:44.818047  507339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/no-preload-666547/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0116 03:43:44.842003  507339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0116 03:43:44.866125  507339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0116 03:43:44.890235  507339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0116 03:43:44.913732  507339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0116 03:43:44.937249  507339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/certs/475478.pem --> /usr/share/ca-certificates/475478.pem (1338 bytes)
	I0116 03:43:44.961628  507339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/files/etc/ssl/certs/4754782.pem --> /usr/share/ca-certificates/4754782.pem (1708 bytes)
	I0116 03:43:44.986672  507339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0116 03:43:45.010735  507339 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0116 03:43:45.028537  507339 ssh_runner.go:195] Run: openssl version
	I0116 03:43:45.034910  507339 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/475478.pem && ln -fs /usr/share/ca-certificates/475478.pem /etc/ssl/certs/475478.pem"
	I0116 03:43:45.046034  507339 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/475478.pem
	I0116 03:43:45.050965  507339 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 16 02:44 /usr/share/ca-certificates/475478.pem
	I0116 03:43:45.051059  507339 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/475478.pem
	I0116 03:43:45.057465  507339 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/475478.pem /etc/ssl/certs/51391683.0"
	I0116 03:43:45.068400  507339 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4754782.pem && ln -fs /usr/share/ca-certificates/4754782.pem /etc/ssl/certs/4754782.pem"
	I0116 03:43:45.079619  507339 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4754782.pem
	I0116 03:43:45.084545  507339 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 16 02:44 /usr/share/ca-certificates/4754782.pem
	I0116 03:43:45.084622  507339 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4754782.pem
	I0116 03:43:45.090638  507339 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4754782.pem /etc/ssl/certs/3ec20f2e.0"
	I0116 03:43:45.101658  507339 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0116 03:43:45.113091  507339 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0116 03:43:45.118085  507339 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 16 02:35 /usr/share/ca-certificates/minikubeCA.pem
	I0116 03:43:45.118153  507339 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0116 03:43:45.124100  507339 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
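The hash/symlink pairs above follow OpenSSL's CApath convention: at verification time a CA certificate is located via a symlink named <subject-hash>.0. A minimal sketch (the hash value matches the log; the certificate being verified is a placeholder):

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	openssl verify -CApath /etc/ssl/certs /path/to/some-cert.pem              # CA is now resolvable by hash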
	I0116 03:43:45.135338  507339 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0116 03:43:45.140230  507339 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0116 03:43:45.146566  507339 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0116 03:43:45.152839  507339 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0116 03:43:45.158917  507339 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0116 03:43:45.164984  507339 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0116 03:43:45.171049  507339 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
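Each -checkend 86400 call asks whether the certificate remains valid for at least another 86400 seconds (24 hours); a non-zero exit is how an expiring certificate would be detected here. For example:

	if openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400; then
	  echo "cert still valid for at least 24h"
	else
	  echo "cert expires within 24h (or is already expired)"
	fi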
	I0116 03:43:45.177547  507339 kubeadm.go:404] StartCluster: {Name:no-preload-666547 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-666547 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.103 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 03:43:45.177657  507339 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0116 03:43:45.177719  507339 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0116 03:43:45.221757  507339 cri.go:89] found id: ""
	I0116 03:43:45.221848  507339 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0116 03:43:45.233811  507339 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0116 03:43:45.233838  507339 kubeadm.go:636] restartCluster start
	I0116 03:43:45.233906  507339 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0116 03:43:45.244810  507339 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:43:45.245999  507339 kubeconfig.go:92] found "no-preload-666547" server: "https://192.168.39.103:8443"
	I0116 03:43:45.248711  507339 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0116 03:43:45.260979  507339 api_server.go:166] Checking apiserver status ...
	I0116 03:43:45.261066  507339 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:43:45.276682  507339 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:43:48.709239  507889 start.go:369] acquired machines lock for "default-k8s-diff-port-434445" in 3m31.985691976s
	I0116 03:43:48.709311  507889 start.go:96] Skipping create...Using existing machine configuration
	I0116 03:43:48.709333  507889 fix.go:54] fixHost starting: 
	I0116 03:43:48.709815  507889 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:43:48.709867  507889 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:43:48.726637  507889 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45373
	I0116 03:43:48.727122  507889 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:43:48.727702  507889 main.go:141] libmachine: Using API Version  1
	I0116 03:43:48.727737  507889 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:43:48.728104  507889 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:43:48.728310  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .DriverName
	I0116 03:43:48.728475  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetState
	I0116 03:43:48.730338  507889 fix.go:102] recreateIfNeeded on default-k8s-diff-port-434445: state=Stopped err=<nil>
	I0116 03:43:48.730361  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .DriverName
	W0116 03:43:48.730545  507889 fix.go:128] unexpected machine state, will restart: <nil>
	I0116 03:43:48.733848  507889 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-434445" ...
	I0116 03:43:47.512288  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:47.512755  507510 main.go:141] libmachine: (old-k8s-version-696770) Found IP for machine: 192.168.61.167
	I0116 03:43:47.512793  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has current primary IP address 192.168.61.167 and MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:47.512804  507510 main.go:141] libmachine: (old-k8s-version-696770) Reserving static IP address...
	I0116 03:43:47.513157  507510 main.go:141] libmachine: (old-k8s-version-696770) Reserved static IP address: 192.168.61.167
	I0116 03:43:47.513194  507510 main.go:141] libmachine: (old-k8s-version-696770) Waiting for SSH to be available...
	I0116 03:43:47.513218  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | found host DHCP lease matching {name: "old-k8s-version-696770", mac: "52:54:00:37:20:1a", ip: "192.168.61.167"} in network mk-old-k8s-version-696770: {Iface:virbr3 ExpiryTime:2024-01-16 04:43:38 +0000 UTC Type:0 Mac:52:54:00:37:20:1a Iaid: IPaddr:192.168.61.167 Prefix:24 Hostname:old-k8s-version-696770 Clientid:01:52:54:00:37:20:1a}
	I0116 03:43:47.513242  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | skip adding static IP to network mk-old-k8s-version-696770 - found existing host DHCP lease matching {name: "old-k8s-version-696770", mac: "52:54:00:37:20:1a", ip: "192.168.61.167"}
	I0116 03:43:47.513259  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | Getting to WaitForSSH function...
	I0116 03:43:47.515438  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:47.515887  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:20:1a", ip: ""} in network mk-old-k8s-version-696770: {Iface:virbr3 ExpiryTime:2024-01-16 04:43:38 +0000 UTC Type:0 Mac:52:54:00:37:20:1a Iaid: IPaddr:192.168.61.167 Prefix:24 Hostname:old-k8s-version-696770 Clientid:01:52:54:00:37:20:1a}
	I0116 03:43:47.515928  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined IP address 192.168.61.167 and MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:47.516089  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | Using SSH client type: external
	I0116 03:43:47.516124  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | Using SSH private key: /home/jenkins/minikube-integration/17965-468241/.minikube/machines/old-k8s-version-696770/id_rsa (-rw-------)
	I0116 03:43:47.516160  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.167 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17965-468241/.minikube/machines/old-k8s-version-696770/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0116 03:43:47.516182  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | About to run SSH command:
	I0116 03:43:47.516203  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | exit 0
	I0116 03:43:47.608193  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | SSH cmd err, output: <nil>: 
	I0116 03:43:47.608599  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetConfigRaw
	I0116 03:43:47.609195  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetIP
	I0116 03:43:47.611633  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:47.612018  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:20:1a", ip: ""} in network mk-old-k8s-version-696770: {Iface:virbr3 ExpiryTime:2024-01-16 04:43:38 +0000 UTC Type:0 Mac:52:54:00:37:20:1a Iaid: IPaddr:192.168.61.167 Prefix:24 Hostname:old-k8s-version-696770 Clientid:01:52:54:00:37:20:1a}
	I0116 03:43:47.612068  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined IP address 192.168.61.167 and MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:47.612355  507510 profile.go:148] Saving config to /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/old-k8s-version-696770/config.json ...
	I0116 03:43:47.612601  507510 machine.go:88] provisioning docker machine ...
	I0116 03:43:47.612628  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .DriverName
	I0116 03:43:47.612872  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetMachineName
	I0116 03:43:47.613047  507510 buildroot.go:166] provisioning hostname "old-k8s-version-696770"
	I0116 03:43:47.613068  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetMachineName
	I0116 03:43:47.613195  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHHostname
	I0116 03:43:47.615457  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:47.615901  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:20:1a", ip: ""} in network mk-old-k8s-version-696770: {Iface:virbr3 ExpiryTime:2024-01-16 04:43:38 +0000 UTC Type:0 Mac:52:54:00:37:20:1a Iaid: IPaddr:192.168.61.167 Prefix:24 Hostname:old-k8s-version-696770 Clientid:01:52:54:00:37:20:1a}
	I0116 03:43:47.615928  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined IP address 192.168.61.167 and MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:47.616111  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHPort
	I0116 03:43:47.616292  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHKeyPath
	I0116 03:43:47.616489  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHKeyPath
	I0116 03:43:47.616687  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHUsername
	I0116 03:43:47.616889  507510 main.go:141] libmachine: Using SSH client type: native
	I0116 03:43:47.617280  507510 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.61.167 22 <nil> <nil>}
	I0116 03:43:47.617297  507510 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-696770 && echo "old-k8s-version-696770" | sudo tee /etc/hostname
	I0116 03:43:47.745448  507510 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-696770
	
	I0116 03:43:47.745482  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHHostname
	I0116 03:43:47.748812  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:47.749135  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:20:1a", ip: ""} in network mk-old-k8s-version-696770: {Iface:virbr3 ExpiryTime:2024-01-16 04:43:38 +0000 UTC Type:0 Mac:52:54:00:37:20:1a Iaid: IPaddr:192.168.61.167 Prefix:24 Hostname:old-k8s-version-696770 Clientid:01:52:54:00:37:20:1a}
	I0116 03:43:47.749171  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined IP address 192.168.61.167 and MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:47.749296  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHPort
	I0116 03:43:47.749525  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHKeyPath
	I0116 03:43:47.749715  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHKeyPath
	I0116 03:43:47.749872  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHUsername
	I0116 03:43:47.750019  507510 main.go:141] libmachine: Using SSH client type: native
	I0116 03:43:47.750339  507510 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.61.167 22 <nil> <nil>}
	I0116 03:43:47.750357  507510 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-696770' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-696770/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-696770' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0116 03:43:47.876917  507510 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0116 03:43:47.876957  507510 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17965-468241/.minikube CaCertPath:/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17965-468241/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17965-468241/.minikube}
	I0116 03:43:47.877011  507510 buildroot.go:174] setting up certificates
	I0116 03:43:47.877026  507510 provision.go:83] configureAuth start
	I0116 03:43:47.877041  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetMachineName
	I0116 03:43:47.877378  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetIP
	I0116 03:43:47.880453  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:47.880836  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:20:1a", ip: ""} in network mk-old-k8s-version-696770: {Iface:virbr3 ExpiryTime:2024-01-16 04:43:38 +0000 UTC Type:0 Mac:52:54:00:37:20:1a Iaid: IPaddr:192.168.61.167 Prefix:24 Hostname:old-k8s-version-696770 Clientid:01:52:54:00:37:20:1a}
	I0116 03:43:47.880869  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined IP address 192.168.61.167 and MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:47.881010  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHHostname
	I0116 03:43:47.883053  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:47.883415  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:20:1a", ip: ""} in network mk-old-k8s-version-696770: {Iface:virbr3 ExpiryTime:2024-01-16 04:43:38 +0000 UTC Type:0 Mac:52:54:00:37:20:1a Iaid: IPaddr:192.168.61.167 Prefix:24 Hostname:old-k8s-version-696770 Clientid:01:52:54:00:37:20:1a}
	I0116 03:43:47.883448  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined IP address 192.168.61.167 and MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:47.883635  507510 provision.go:138] copyHostCerts
	I0116 03:43:47.883706  507510 exec_runner.go:144] found /home/jenkins/minikube-integration/17965-468241/.minikube/ca.pem, removing ...
	I0116 03:43:47.883717  507510 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17965-468241/.minikube/ca.pem
	I0116 03:43:47.883778  507510 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17965-468241/.minikube/ca.pem (1078 bytes)
	I0116 03:43:47.883864  507510 exec_runner.go:144] found /home/jenkins/minikube-integration/17965-468241/.minikube/cert.pem, removing ...
	I0116 03:43:47.883871  507510 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17965-468241/.minikube/cert.pem
	I0116 03:43:47.883893  507510 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17965-468241/.minikube/cert.pem (1123 bytes)
	I0116 03:43:47.883943  507510 exec_runner.go:144] found /home/jenkins/minikube-integration/17965-468241/.minikube/key.pem, removing ...
	I0116 03:43:47.883950  507510 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17965-468241/.minikube/key.pem
	I0116 03:43:47.883965  507510 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17965-468241/.minikube/key.pem (1679 bytes)
	I0116 03:43:47.884010  507510 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17965-468241/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-696770 san=[192.168.61.167 192.168.61.167 localhost 127.0.0.1 minikube old-k8s-version-696770]
	I0116 03:43:47.946258  507510 provision.go:172] copyRemoteCerts
	I0116 03:43:47.946327  507510 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0116 03:43:47.946354  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHHostname
	I0116 03:43:47.949417  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:47.949750  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:20:1a", ip: ""} in network mk-old-k8s-version-696770: {Iface:virbr3 ExpiryTime:2024-01-16 04:43:38 +0000 UTC Type:0 Mac:52:54:00:37:20:1a Iaid: IPaddr:192.168.61.167 Prefix:24 Hostname:old-k8s-version-696770 Clientid:01:52:54:00:37:20:1a}
	I0116 03:43:47.949784  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined IP address 192.168.61.167 and MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:47.949941  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHPort
	I0116 03:43:47.950180  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHKeyPath
	I0116 03:43:47.950333  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHUsername
	I0116 03:43:47.950478  507510 sshutil.go:53] new ssh client: &{IP:192.168.61.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/old-k8s-version-696770/id_rsa Username:docker}
	I0116 03:43:48.042564  507510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0116 03:43:48.066519  507510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0116 03:43:48.090127  507510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0116 03:43:48.113387  507510 provision.go:86] duration metric: configureAuth took 236.343393ms
	I0116 03:43:48.113428  507510 buildroot.go:189] setting minikube options for container-runtime
	I0116 03:43:48.113662  507510 config.go:182] Loaded profile config "old-k8s-version-696770": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0116 03:43:48.113758  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHHostname
	I0116 03:43:48.116735  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:48.117144  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:20:1a", ip: ""} in network mk-old-k8s-version-696770: {Iface:virbr3 ExpiryTime:2024-01-16 04:43:38 +0000 UTC Type:0 Mac:52:54:00:37:20:1a Iaid: IPaddr:192.168.61.167 Prefix:24 Hostname:old-k8s-version-696770 Clientid:01:52:54:00:37:20:1a}
	I0116 03:43:48.117187  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined IP address 192.168.61.167 and MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:48.117328  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHPort
	I0116 03:43:48.117529  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHKeyPath
	I0116 03:43:48.117725  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHKeyPath
	I0116 03:43:48.117892  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHUsername
	I0116 03:43:48.118118  507510 main.go:141] libmachine: Using SSH client type: native
	I0116 03:43:48.118427  507510 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.61.167 22 <nil> <nil>}
	I0116 03:43:48.118450  507510 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0116 03:43:48.458094  507510 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0116 03:43:48.458129  507510 machine.go:91] provisioned docker machine in 845.51167ms
	I0116 03:43:48.458141  507510 start.go:300] post-start starting for "old-k8s-version-696770" (driver="kvm2")
	I0116 03:43:48.458153  507510 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0116 03:43:48.458172  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .DriverName
	I0116 03:43:48.458616  507510 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0116 03:43:48.458650  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHHostname
	I0116 03:43:48.461476  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:48.461858  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:20:1a", ip: ""} in network mk-old-k8s-version-696770: {Iface:virbr3 ExpiryTime:2024-01-16 04:43:38 +0000 UTC Type:0 Mac:52:54:00:37:20:1a Iaid: IPaddr:192.168.61.167 Prefix:24 Hostname:old-k8s-version-696770 Clientid:01:52:54:00:37:20:1a}
	I0116 03:43:48.461908  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined IP address 192.168.61.167 and MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:48.462029  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHPort
	I0116 03:43:48.462272  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHKeyPath
	I0116 03:43:48.462460  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHUsername
	I0116 03:43:48.462643  507510 sshutil.go:53] new ssh client: &{IP:192.168.61.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/old-k8s-version-696770/id_rsa Username:docker}
	I0116 03:43:48.550436  507510 ssh_runner.go:195] Run: cat /etc/os-release
	I0116 03:43:48.555225  507510 info.go:137] Remote host: Buildroot 2021.02.12
	I0116 03:43:48.555261  507510 filesync.go:126] Scanning /home/jenkins/minikube-integration/17965-468241/.minikube/addons for local assets ...
	I0116 03:43:48.555349  507510 filesync.go:126] Scanning /home/jenkins/minikube-integration/17965-468241/.minikube/files for local assets ...
	I0116 03:43:48.555434  507510 filesync.go:149] local asset: /home/jenkins/minikube-integration/17965-468241/.minikube/files/etc/ssl/certs/4754782.pem -> 4754782.pem in /etc/ssl/certs
	I0116 03:43:48.555560  507510 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0116 03:43:48.565598  507510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/files/etc/ssl/certs/4754782.pem --> /etc/ssl/certs/4754782.pem (1708 bytes)
	I0116 03:43:48.588611  507510 start.go:303] post-start completed in 130.45305ms
	I0116 03:43:48.588642  507510 fix.go:56] fixHost completed within 22.411021213s
	I0116 03:43:48.588675  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHHostname
	I0116 03:43:48.591220  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:48.591582  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:20:1a", ip: ""} in network mk-old-k8s-version-696770: {Iface:virbr3 ExpiryTime:2024-01-16 04:43:38 +0000 UTC Type:0 Mac:52:54:00:37:20:1a Iaid: IPaddr:192.168.61.167 Prefix:24 Hostname:old-k8s-version-696770 Clientid:01:52:54:00:37:20:1a}
	I0116 03:43:48.591618  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined IP address 192.168.61.167 and MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:48.591779  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHPort
	I0116 03:43:48.592014  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHKeyPath
	I0116 03:43:48.592216  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHKeyPath
	I0116 03:43:48.592412  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHUsername
	I0116 03:43:48.592567  507510 main.go:141] libmachine: Using SSH client type: native
	I0116 03:43:48.592933  507510 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.61.167 22 <nil> <nil>}
	I0116 03:43:48.592950  507510 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0116 03:43:48.709079  507510 main.go:141] libmachine: SSH cmd err, output: <nil>: 1705376628.651647278
	
	I0116 03:43:48.709103  507510 fix.go:206] guest clock: 1705376628.651647278
	I0116 03:43:48.709111  507510 fix.go:219] Guest: 2024-01-16 03:43:48.651647278 +0000 UTC Remote: 2024-01-16 03:43:48.588648172 +0000 UTC m=+299.078902394 (delta=62.999106ms)
	I0116 03:43:48.709134  507510 fix.go:190] guest clock delta is within tolerance: 62.999106ms
	I0116 03:43:48.709140  507510 start.go:83] releasing machines lock for "old-k8s-version-696770", held for 22.531556099s
	I0116 03:43:48.709169  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .DriverName
	I0116 03:43:48.709519  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetIP
	I0116 03:43:48.712438  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:48.712770  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:20:1a", ip: ""} in network mk-old-k8s-version-696770: {Iface:virbr3 ExpiryTime:2024-01-16 04:43:38 +0000 UTC Type:0 Mac:52:54:00:37:20:1a Iaid: IPaddr:192.168.61.167 Prefix:24 Hostname:old-k8s-version-696770 Clientid:01:52:54:00:37:20:1a}
	I0116 03:43:48.712825  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined IP address 192.168.61.167 and MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:48.712921  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .DriverName
	I0116 03:43:48.713501  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .DriverName
	I0116 03:43:48.713677  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .DriverName
	I0116 03:43:48.713768  507510 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0116 03:43:48.713816  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHHostname
	I0116 03:43:48.713920  507510 ssh_runner.go:195] Run: cat /version.json
	I0116 03:43:48.713951  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHHostname
	I0116 03:43:48.716415  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:48.716697  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:48.716820  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:20:1a", ip: ""} in network mk-old-k8s-version-696770: {Iface:virbr3 ExpiryTime:2024-01-16 04:43:38 +0000 UTC Type:0 Mac:52:54:00:37:20:1a Iaid: IPaddr:192.168.61.167 Prefix:24 Hostname:old-k8s-version-696770 Clientid:01:52:54:00:37:20:1a}
	I0116 03:43:48.716846  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined IP address 192.168.61.167 and MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:48.716995  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHPort
	I0116 03:43:48.717093  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:20:1a", ip: ""} in network mk-old-k8s-version-696770: {Iface:virbr3 ExpiryTime:2024-01-16 04:43:38 +0000 UTC Type:0 Mac:52:54:00:37:20:1a Iaid: IPaddr:192.168.61.167 Prefix:24 Hostname:old-k8s-version-696770 Clientid:01:52:54:00:37:20:1a}
	I0116 03:43:48.717123  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined IP address 192.168.61.167 and MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:48.717394  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHPort
	I0116 03:43:48.717402  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHKeyPath
	I0116 03:43:48.717638  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHKeyPath
	I0116 03:43:48.717650  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHUsername
	I0116 03:43:48.717791  507510 sshutil.go:53] new ssh client: &{IP:192.168.61.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/old-k8s-version-696770/id_rsa Username:docker}
	I0116 03:43:48.717824  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHUsername
	I0116 03:43:48.717956  507510 sshutil.go:53] new ssh client: &{IP:192.168.61.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/old-k8s-version-696770/id_rsa Username:docker}
	I0116 03:43:48.838506  507510 ssh_runner.go:195] Run: systemctl --version
	I0116 03:43:48.845152  507510 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0116 03:43:49.001791  507510 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0116 03:43:49.008474  507510 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0116 03:43:49.008558  507510 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0116 03:43:49.024030  507510 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0116 03:43:49.024087  507510 start.go:475] detecting cgroup driver to use...
	I0116 03:43:49.024164  507510 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0116 03:43:49.038853  507510 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0116 03:43:49.056228  507510 docker.go:217] disabling cri-docker service (if available) ...
	I0116 03:43:49.056308  507510 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0116 03:43:49.071266  507510 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0116 03:43:49.085793  507510 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0116 03:43:49.211294  507510 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0116 03:43:49.338893  507510 docker.go:233] disabling docker service ...
	I0116 03:43:49.338971  507510 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0116 03:43:49.354423  507510 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0116 03:43:49.367355  507510 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0116 03:43:49.483277  507510 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0116 03:43:49.593977  507510 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0116 03:43:49.607374  507510 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0116 03:43:49.626781  507510 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I0116 03:43:49.626846  507510 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 03:43:49.637809  507510 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0116 03:43:49.637892  507510 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 03:43:49.648162  507510 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 03:43:49.658305  507510 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 03:43:49.669557  507510 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0116 03:43:49.680190  507510 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0116 03:43:49.689125  507510 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0116 03:43:49.689199  507510 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0116 03:43:49.703247  507510 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0116 03:43:49.713826  507510 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0116 03:43:49.829677  507510 ssh_runner.go:195] Run: sudo systemctl restart crio
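The sed edits above set cri-o's pause image and keep its cgroup_manager in line with the "cgroupfs" driver minikube selected for this node; a mismatch between the runtime's cgroup manager and the kubelet's cgroupDriver is a common cause of pods failing to start. Condensed, the reconfiguration amounts to the following (illustrative; the log runs each step separately):

	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo systemctl daemon-reload && sudo systemctl restart crio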
	I0116 03:43:50.009393  507510 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0116 03:43:50.009489  507510 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0116 03:43:50.016443  507510 start.go:543] Will wait 60s for crictl version
	I0116 03:43:50.016521  507510 ssh_runner.go:195] Run: which crictl
	I0116 03:43:50.020560  507510 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0116 03:43:50.056652  507510 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0116 03:43:50.056733  507510 ssh_runner.go:195] Run: crio --version
	I0116 03:43:50.104202  507510 ssh_runner.go:195] Run: crio --version
	I0116 03:43:50.150215  507510 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
	I0116 03:43:45.761989  507339 api_server.go:166] Checking apiserver status ...
	I0116 03:43:45.762077  507339 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:43:45.776377  507339 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:43:46.262107  507339 api_server.go:166] Checking apiserver status ...
	I0116 03:43:46.262205  507339 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:43:46.274748  507339 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:43:46.761344  507339 api_server.go:166] Checking apiserver status ...
	I0116 03:43:46.761434  507339 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:43:46.773509  507339 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:43:47.261093  507339 api_server.go:166] Checking apiserver status ...
	I0116 03:43:47.261222  507339 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:43:47.272584  507339 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:43:47.761119  507339 api_server.go:166] Checking apiserver status ...
	I0116 03:43:47.761204  507339 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:43:47.773674  507339 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:43:48.261288  507339 api_server.go:166] Checking apiserver status ...
	I0116 03:43:48.261448  507339 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:43:48.273461  507339 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:43:48.762071  507339 api_server.go:166] Checking apiserver status ...
	I0116 03:43:48.762205  507339 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:43:48.778093  507339 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:43:49.261032  507339 api_server.go:166] Checking apiserver status ...
	I0116 03:43:49.261139  507339 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:43:49.273090  507339 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:43:49.761233  507339 api_server.go:166] Checking apiserver status ...
	I0116 03:43:49.761348  507339 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:43:49.773529  507339 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:43:50.261720  507339 api_server.go:166] Checking apiserver status ...
	I0116 03:43:50.261822  507339 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:43:50.277403  507339 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:43:48.735627  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .Start
	I0116 03:43:48.735865  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Ensuring networks are active...
	I0116 03:43:48.736708  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Ensuring network default is active
	I0116 03:43:48.737105  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Ensuring network mk-default-k8s-diff-port-434445 is active
	I0116 03:43:48.737445  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Getting domain xml...
	I0116 03:43:48.738086  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Creating domain...
	I0116 03:43:49.085479  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Waiting to get IP...
	I0116 03:43:49.086513  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:43:49.086907  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | unable to find current IP address of domain default-k8s-diff-port-434445 in network mk-default-k8s-diff-port-434445
	I0116 03:43:49.086993  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | I0116 03:43:49.086879  508579 retry.go:31] will retry after 251.682416ms: waiting for machine to come up
	I0116 03:43:49.340560  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:43:49.341196  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | unable to find current IP address of domain default-k8s-diff-port-434445 in network mk-default-k8s-diff-port-434445
	I0116 03:43:49.341235  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | I0116 03:43:49.341140  508579 retry.go:31] will retry after 288.322607ms: waiting for machine to come up
	I0116 03:43:49.630920  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:43:49.631449  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | unable to find current IP address of domain default-k8s-diff-port-434445 in network mk-default-k8s-diff-port-434445
	I0116 03:43:49.631478  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | I0116 03:43:49.631404  508579 retry.go:31] will retry after 305.730946ms: waiting for machine to come up
	I0116 03:43:49.938846  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:43:49.939357  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | unable to find current IP address of domain default-k8s-diff-port-434445 in network mk-default-k8s-diff-port-434445
	I0116 03:43:49.939381  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | I0116 03:43:49.939307  508579 retry.go:31] will retry after 431.952943ms: waiting for machine to come up
	I0116 03:43:50.372921  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:43:50.373426  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | unable to find current IP address of domain default-k8s-diff-port-434445 in network mk-default-k8s-diff-port-434445
	I0116 03:43:50.373453  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | I0116 03:43:50.373368  508579 retry.go:31] will retry after 557.336026ms: waiting for machine to come up
	I0116 03:43:50.932300  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:43:50.932902  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | unable to find current IP address of domain default-k8s-diff-port-434445 in network mk-default-k8s-diff-port-434445
	I0116 03:43:50.932933  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | I0116 03:43:50.932837  508579 retry.go:31] will retry after 652.034162ms: waiting for machine to come up
	I0116 03:43:51.586765  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:43:51.587332  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | unable to find current IP address of domain default-k8s-diff-port-434445 in network mk-default-k8s-diff-port-434445
	I0116 03:43:51.587365  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | I0116 03:43:51.587290  508579 retry.go:31] will retry after 1.078418867s: waiting for machine to come up
	I0116 03:43:50.151763  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetIP
	I0116 03:43:50.154861  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:50.155283  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:20:1a", ip: ""} in network mk-old-k8s-version-696770: {Iface:virbr3 ExpiryTime:2024-01-16 04:43:38 +0000 UTC Type:0 Mac:52:54:00:37:20:1a Iaid: IPaddr:192.168.61.167 Prefix:24 Hostname:old-k8s-version-696770 Clientid:01:52:54:00:37:20:1a}
	I0116 03:43:50.155331  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined IP address 192.168.61.167 and MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:50.155536  507510 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0116 03:43:50.160159  507510 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 03:43:50.173354  507510 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0116 03:43:50.173416  507510 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 03:43:50.227220  507510 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0116 03:43:50.227308  507510 ssh_runner.go:195] Run: which lz4
	I0116 03:43:50.231565  507510 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0116 03:43:50.236133  507510 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0116 03:43:50.236169  507510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I0116 03:43:52.243584  507510 crio.go:444] Took 2.012049 seconds to copy over tarball
	I0116 03:43:52.243686  507510 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
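(Editor's note) The preload handling above is driven by the "sudo crictl images --output json" call: when the expected control-plane image tag is not reported by the runtime ("couldn't find preloaded image ... assuming images are not preloaded"), the preloaded tarball is copied over and extracted into /var. A rough Go sketch of that detection follows; the JSON field names (images, repoTags) follow crictl's usual output and should be treated as an assumption here.

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // imageList mirrors the shape crictl typically emits for `images --output json`.
    type imageList struct {
        Images []struct {
            RepoTags []string `json:"repoTags"`
        } `json:"images"`
    }

    // hasImage reports whether the runtime already knows the given tag.
    func hasImage(tag string) (bool, error) {
        out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
        if err != nil {
            return false, err
        }
        var list imageList
        if err := json.Unmarshal(out, &list); err != nil {
            return false, err
        }
        for _, img := range list.Images {
            for _, t := range img.RepoTags {
                if t == tag {
                    return true, nil
                }
            }
        }
        return false, nil
    }

    func main() {
        ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.16.0")
        fmt.Println(ok, err)
    }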
	I0116 03:43:50.761232  507339 api_server.go:166] Checking apiserver status ...
	I0116 03:43:50.761323  507339 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:43:50.777877  507339 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:43:51.261357  507339 api_server.go:166] Checking apiserver status ...
	I0116 03:43:51.261444  507339 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:43:51.280624  507339 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:43:51.761117  507339 api_server.go:166] Checking apiserver status ...
	I0116 03:43:51.761225  507339 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:43:51.775076  507339 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:43:52.261857  507339 api_server.go:166] Checking apiserver status ...
	I0116 03:43:52.261948  507339 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:43:52.279844  507339 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:43:52.761400  507339 api_server.go:166] Checking apiserver status ...
	I0116 03:43:52.761493  507339 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:43:52.773869  507339 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:43:53.261155  507339 api_server.go:166] Checking apiserver status ...
	I0116 03:43:53.261263  507339 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:43:53.273774  507339 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:43:53.761370  507339 api_server.go:166] Checking apiserver status ...
	I0116 03:43:53.761500  507339 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:43:53.773900  507339 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:43:54.262012  507339 api_server.go:166] Checking apiserver status ...
	I0116 03:43:54.262134  507339 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:43:54.277928  507339 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:43:54.761492  507339 api_server.go:166] Checking apiserver status ...
	I0116 03:43:54.761642  507339 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:43:54.774531  507339 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:43:55.261302  507339 api_server.go:166] Checking apiserver status ...
	I0116 03:43:55.261395  507339 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:43:55.274178  507339 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:43:55.274226  507339 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0116 03:43:55.274272  507339 kubeadm.go:1135] stopping kube-system containers ...
	I0116 03:43:55.274293  507339 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0116 03:43:55.274360  507339 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0116 03:43:55.321847  507339 cri.go:89] found id: ""
	I0116 03:43:55.321943  507339 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0116 03:43:55.339190  507339 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0116 03:43:55.348548  507339 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0116 03:43:55.348637  507339 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0116 03:43:55.358316  507339 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0116 03:43:55.358345  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:43:55.492932  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:43:52.667882  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:43:52.668380  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | unable to find current IP address of domain default-k8s-diff-port-434445 in network mk-default-k8s-diff-port-434445
	I0116 03:43:52.668415  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | I0116 03:43:52.668311  508579 retry.go:31] will retry after 1.052441827s: waiting for machine to come up
	I0116 03:43:53.722859  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:43:53.723473  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | unable to find current IP address of domain default-k8s-diff-port-434445 in network mk-default-k8s-diff-port-434445
	I0116 03:43:53.723503  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | I0116 03:43:53.723429  508579 retry.go:31] will retry after 1.233090848s: waiting for machine to come up
	I0116 03:43:54.958519  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:43:54.958990  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | unable to find current IP address of domain default-k8s-diff-port-434445 in network mk-default-k8s-diff-port-434445
	I0116 03:43:54.959014  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | I0116 03:43:54.958934  508579 retry.go:31] will retry after 2.038449182s: waiting for machine to come up
	I0116 03:43:55.109598  507510 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.865872133s)
	I0116 03:43:55.109637  507510 crio.go:451] Took 2.866019 seconds to extract the tarball
	I0116 03:43:55.109652  507510 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0116 03:43:55.150763  507510 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 03:43:55.206497  507510 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0116 03:43:55.206525  507510 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0116 03:43:55.206597  507510 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 03:43:55.206619  507510 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0116 03:43:55.206660  507510 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0116 03:43:55.206682  507510 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0116 03:43:55.206601  507510 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0116 03:43:55.206622  507510 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0116 03:43:55.206790  507510 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0116 03:43:55.206801  507510 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0116 03:43:55.208228  507510 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 03:43:55.208246  507510 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0116 03:43:55.208245  507510 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0116 03:43:55.208247  507510 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0116 03:43:55.208291  507510 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0116 03:43:55.208295  507510 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0116 03:43:55.208291  507510 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0116 03:43:55.208610  507510 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0116 03:43:55.364082  507510 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0116 03:43:55.364096  507510 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0116 03:43:55.367820  507510 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0116 03:43:55.371639  507510 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0116 03:43:55.379423  507510 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0116 03:43:55.383569  507510 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0116 03:43:55.385854  507510 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0116 03:43:55.522241  507510 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 03:43:55.539971  507510 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0116 03:43:55.540031  507510 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0116 03:43:55.540113  507510 ssh_runner.go:195] Run: which crictl
	I0116 03:43:55.542332  507510 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0116 03:43:55.542389  507510 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0116 03:43:55.542441  507510 ssh_runner.go:195] Run: which crictl
	I0116 03:43:55.565552  507510 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0116 03:43:55.565679  507510 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0116 03:43:55.565761  507510 ssh_runner.go:195] Run: which crictl
	I0116 03:43:55.583839  507510 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0116 03:43:55.583890  507510 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0116 03:43:55.583942  507510 ssh_runner.go:195] Run: which crictl
	I0116 03:43:55.583847  507510 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0116 03:43:55.584023  507510 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0116 03:43:55.584073  507510 ssh_runner.go:195] Run: which crictl
	I0116 03:43:55.596487  507510 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0116 03:43:55.596555  507510 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I0116 03:43:55.596619  507510 ssh_runner.go:195] Run: which crictl
	I0116 03:43:55.605042  507510 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0116 03:43:55.605105  507510 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I0116 03:43:55.605162  507510 ssh_runner.go:195] Run: which crictl
	I0116 03:43:55.740186  507510 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0116 03:43:55.740225  507510 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I0116 03:43:55.740283  507510 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0116 03:43:55.740334  507510 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0116 03:43:55.740384  507510 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I0116 03:43:55.740441  507510 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I0116 03:43:55.740450  507510 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I0116 03:43:55.900542  507510 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17965-468241/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0116 03:43:55.906506  507510 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17965-468241/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0116 03:43:55.914158  507510 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17965-468241/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0116 03:43:55.914171  507510 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17965-468241/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0116 03:43:55.926953  507510 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17965-468241/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0116 03:43:55.927034  507510 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17965-468241/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0116 03:43:55.927137  507510 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17965-468241/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0116 03:43:55.927186  507510 cache_images.go:92] LoadImages completed in 720.646435ms
	W0116 03:43:55.927280  507510 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17965-468241/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0: no such file or directory
	I0116 03:43:55.927362  507510 ssh_runner.go:195] Run: crio config
	I0116 03:43:55.989408  507510 cni.go:84] Creating CNI manager for ""
	I0116 03:43:55.989440  507510 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 03:43:55.989468  507510 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0116 03:43:55.989495  507510 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.167 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-696770 NodeName:old-k8s-version-696770 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.167"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.167 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0116 03:43:55.989657  507510 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.167
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-696770"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.167
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.167"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-696770
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.61.167:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0116 03:43:55.989757  507510 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-696770 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.167
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-696770 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0116 03:43:55.989819  507510 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0116 03:43:55.999676  507510 binaries.go:44] Found k8s binaries, skipping transfer
	I0116 03:43:55.999766  507510 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0116 03:43:56.009179  507510 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0116 03:43:56.028479  507510 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0116 03:43:56.045979  507510 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2181 bytes)
	I0116 03:43:56.067179  507510 ssh_runner.go:195] Run: grep 192.168.61.167	control-plane.minikube.internal$ /etc/hosts
	I0116 03:43:56.071532  507510 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.167	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 03:43:56.085960  507510 certs.go:56] Setting up /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/old-k8s-version-696770 for IP: 192.168.61.167
	I0116 03:43:56.086006  507510 certs.go:190] acquiring lock for shared ca certs: {Name:mkab5aeea3e0ed69f3592dc8f418e07c6075b130 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:43:56.086216  507510 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17965-468241/.minikube/ca.key
	I0116 03:43:56.086293  507510 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17965-468241/.minikube/proxy-client-ca.key
	I0116 03:43:56.086385  507510 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/old-k8s-version-696770/client.key
	I0116 03:43:56.086447  507510 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/old-k8s-version-696770/apiserver.key.1a2d2382
	I0116 03:43:56.086480  507510 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/old-k8s-version-696770/proxy-client.key
	I0116 03:43:56.086668  507510 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/475478.pem (1338 bytes)
	W0116 03:43:56.086711  507510 certs.go:433] ignoring /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/475478_empty.pem, impossibly tiny 0 bytes
	I0116 03:43:56.086721  507510 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca-key.pem (1679 bytes)
	I0116 03:43:56.086746  507510 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem (1078 bytes)
	I0116 03:43:56.086772  507510 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/cert.pem (1123 bytes)
	I0116 03:43:56.086795  507510 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/key.pem (1679 bytes)
	I0116 03:43:56.086833  507510 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17965-468241/.minikube/files/etc/ssl/certs/4754782.pem (1708 bytes)
	I0116 03:43:56.087557  507510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/old-k8s-version-696770/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0116 03:43:56.118148  507510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/old-k8s-version-696770/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0116 03:43:56.146632  507510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/old-k8s-version-696770/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0116 03:43:56.177146  507510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/old-k8s-version-696770/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0116 03:43:56.208800  507510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0116 03:43:56.237097  507510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0116 03:43:56.264559  507510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0116 03:43:56.294383  507510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0116 03:43:56.323966  507510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/certs/475478.pem --> /usr/share/ca-certificates/475478.pem (1338 bytes)
	I0116 03:43:56.350120  507510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/files/etc/ssl/certs/4754782.pem --> /usr/share/ca-certificates/4754782.pem (1708 bytes)
	I0116 03:43:56.379523  507510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0116 03:43:56.406312  507510 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0116 03:43:56.426149  507510 ssh_runner.go:195] Run: openssl version
	I0116 03:43:56.432150  507510 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0116 03:43:56.443200  507510 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0116 03:43:56.448268  507510 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 16 02:35 /usr/share/ca-certificates/minikubeCA.pem
	I0116 03:43:56.448343  507510 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0116 03:43:56.454227  507510 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0116 03:43:56.464467  507510 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/475478.pem && ln -fs /usr/share/ca-certificates/475478.pem /etc/ssl/certs/475478.pem"
	I0116 03:43:56.474769  507510 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/475478.pem
	I0116 03:43:56.480143  507510 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 16 02:44 /usr/share/ca-certificates/475478.pem
	I0116 03:43:56.480228  507510 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/475478.pem
	I0116 03:43:56.487996  507510 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/475478.pem /etc/ssl/certs/51391683.0"
	I0116 03:43:56.501097  507510 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4754782.pem && ln -fs /usr/share/ca-certificates/4754782.pem /etc/ssl/certs/4754782.pem"
	I0116 03:43:56.513266  507510 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4754782.pem
	I0116 03:43:56.518806  507510 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 16 02:44 /usr/share/ca-certificates/4754782.pem
	I0116 03:43:56.518891  507510 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4754782.pem
	I0116 03:43:56.527891  507510 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4754782.pem /etc/ssl/certs/3ec20f2e.0"
	I0116 03:43:56.538719  507510 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0116 03:43:56.544298  507510 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0116 03:43:56.551048  507510 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0116 03:43:56.557847  507510 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0116 03:43:56.567757  507510 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0116 03:43:56.575977  507510 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0116 03:43:56.584514  507510 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
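(Editor's note) Each "openssl x509 -noout -in <cert> -checkend 86400" run above exits non-zero if the certificate will expire within the next 86400 seconds (24 hours), which is what would trigger regeneration. Below is an equivalent check using Go's crypto/x509; the file path is taken from the log purely for illustration.

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func main() {
        // One of the certificates checked in the log, used here as an example path.
        data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-etcd-client.crt")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            fmt.Fprintln(os.Stderr, "no PEM data found")
            os.Exit(1)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        // Same test as `openssl x509 -checkend 86400`: does the cert outlive the next 24h?
        if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
            fmt.Println("certificate will expire within 24 hours")
            os.Exit(1)
        }
        fmt.Println("certificate is valid for at least another 24 hours")
    }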
	I0116 03:43:56.593191  507510 kubeadm.go:404] StartCluster: {Name:old-k8s-version-696770 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-696770 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.167 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 03:43:56.593333  507510 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0116 03:43:56.593408  507510 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0116 03:43:56.653791  507510 cri.go:89] found id: ""
	I0116 03:43:56.653899  507510 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0116 03:43:56.667037  507510 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0116 03:43:56.667078  507510 kubeadm.go:636] restartCluster start
	I0116 03:43:56.667164  507510 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0116 03:43:56.679734  507510 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:43:56.681241  507510 kubeconfig.go:92] found "old-k8s-version-696770" server: "https://192.168.61.167:8443"
	I0116 03:43:56.683942  507510 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0116 03:43:56.696409  507510 api_server.go:166] Checking apiserver status ...
	I0116 03:43:56.696507  507510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:43:56.713120  507510 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:43:57.196652  507510 api_server.go:166] Checking apiserver status ...
	I0116 03:43:57.196826  507510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:43:57.213992  507510 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:43:57.697096  507510 api_server.go:166] Checking apiserver status ...
	I0116 03:43:57.697197  507510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:43:57.709671  507510 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:43:58.197291  507510 api_server.go:166] Checking apiserver status ...
	I0116 03:43:58.197401  507510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:43:58.214351  507510 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:43:58.696893  507510 api_server.go:166] Checking apiserver status ...
	I0116 03:43:58.697036  507510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:43:58.714549  507510 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:43:59.197173  507510 api_server.go:166] Checking apiserver status ...
	I0116 03:43:59.197304  507510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:43:59.213885  507510 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:43:56.773238  507339 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.280261968s)
	I0116 03:43:56.773267  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:43:57.046716  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:43:57.123831  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:43:57.221179  507339 api_server.go:52] waiting for apiserver process to appear ...
	I0116 03:43:57.221300  507339 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:43:57.721940  507339 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:43:58.222437  507339 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:43:58.722256  507339 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:43:59.222191  507339 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:43:59.721451  507339 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:43:59.753520  507339 api_server.go:72] duration metric: took 2.532341035s to wait for apiserver process to appear ...
	I0116 03:43:59.753556  507339 api_server.go:88] waiting for apiserver healthz status ...
	I0116 03:43:59.753601  507339 api_server.go:253] Checking apiserver healthz at https://192.168.39.103:8443/healthz ...
	I0116 03:43:59.754176  507339 api_server.go:269] stopped: https://192.168.39.103:8443/healthz: Get "https://192.168.39.103:8443/healthz": dial tcp 192.168.39.103:8443: connect: connection refused
	I0116 03:44:00.253773  507339 api_server.go:253] Checking apiserver healthz at https://192.168.39.103:8443/healthz ...
	I0116 03:43:57.000501  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:43:57.070966  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | unable to find current IP address of domain default-k8s-diff-port-434445 in network mk-default-k8s-diff-port-434445
	I0116 03:43:57.071015  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | I0116 03:43:57.000987  508579 retry.go:31] will retry after 1.963105502s: waiting for machine to come up
	I0116 03:43:58.966528  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:43:58.967131  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | unable to find current IP address of domain default-k8s-diff-port-434445 in network mk-default-k8s-diff-port-434445
	I0116 03:43:58.967173  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | I0116 03:43:58.967069  508579 retry.go:31] will retry after 2.871455928s: waiting for machine to come up
	I0116 03:43:59.697215  507510 api_server.go:166] Checking apiserver status ...
	I0116 03:43:59.697303  507510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:43:59.713992  507510 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:00.196535  507510 api_server.go:166] Checking apiserver status ...
	I0116 03:44:00.196649  507510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:00.212663  507510 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:00.697276  507510 api_server.go:166] Checking apiserver status ...
	I0116 03:44:00.697390  507510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:00.714622  507510 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:01.197125  507510 api_server.go:166] Checking apiserver status ...
	I0116 03:44:01.197242  507510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:01.214976  507510 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:01.696506  507510 api_server.go:166] Checking apiserver status ...
	I0116 03:44:01.696612  507510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:01.708204  507510 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:02.197402  507510 api_server.go:166] Checking apiserver status ...
	I0116 03:44:02.197519  507510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:02.211062  507510 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:02.697230  507510 api_server.go:166] Checking apiserver status ...
	I0116 03:44:02.697358  507510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:02.710340  507510 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:03.196949  507510 api_server.go:166] Checking apiserver status ...
	I0116 03:44:03.197047  507510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:03.213169  507510 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:03.696657  507510 api_server.go:166] Checking apiserver status ...
	I0116 03:44:03.696793  507510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:03.709422  507510 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:04.196970  507510 api_server.go:166] Checking apiserver status ...
	I0116 03:44:04.197083  507510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:04.209280  507510 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:03.473725  507339 api_server.go:279] https://192.168.39.103:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0116 03:44:03.473764  507339 api_server.go:103] status: https://192.168.39.103:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0116 03:44:03.473784  507339 api_server.go:253] Checking apiserver healthz at https://192.168.39.103:8443/healthz ...
	I0116 03:44:03.531825  507339 api_server.go:279] https://192.168.39.103:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0116 03:44:03.531873  507339 api_server.go:103] status: https://192.168.39.103:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0116 03:44:03.754148  507339 api_server.go:253] Checking apiserver healthz at https://192.168.39.103:8443/healthz ...
	I0116 03:44:03.759138  507339 api_server.go:279] https://192.168.39.103:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0116 03:44:03.759171  507339 api_server.go:103] status: https://192.168.39.103:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
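(Editor's note) The probes above poll https://192.168.39.103:8443/healthz roughly every 500ms and treat anything other than HTTP 200 as "not ready": first the connection is refused, then the anonymous request gets a 403, then 500s while the listed poststarthooks finish. Below is a minimal Go sketch of such a polling loop; the endpoint and cadence are taken from the log, and TLS verification is skipped only to keep the sketch self-contained (a real check would trust the cluster CA instead).

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                // Skipped only so this sketch runs without the cluster CA on hand.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        url := "https://192.168.39.103:8443/healthz" // endpoint from the log
        for i := 0; i < 60; i++ {
            resp, err := client.Get(url)
            if err != nil {
                fmt.Println("not reachable yet:", err)
            } else {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Println("apiserver healthy")
                    return
                }
                fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
            }
            time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
        }
        fmt.Println("gave up waiting for a healthy apiserver")
    }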
	I0116 03:44:04.254321  507339 api_server.go:253] Checking apiserver healthz at https://192.168.39.103:8443/healthz ...
	I0116 03:44:04.259317  507339 api_server.go:279] https://192.168.39.103:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0116 03:44:04.259350  507339 api_server.go:103] status: https://192.168.39.103:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0116 03:44:04.753890  507339 api_server.go:253] Checking apiserver healthz at https://192.168.39.103:8443/healthz ...
	I0116 03:44:04.759714  507339 api_server.go:279] https://192.168.39.103:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0116 03:44:04.759747  507339 api_server.go:103] status: https://192.168.39.103:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0116 03:44:05.254582  507339 api_server.go:253] Checking apiserver healthz at https://192.168.39.103:8443/healthz ...
	I0116 03:44:05.264904  507339 api_server.go:279] https://192.168.39.103:8443/healthz returned 200:
	ok
	I0116 03:44:05.283700  507339 api_server.go:141] control plane version: v1.29.0-rc.2
	I0116 03:44:05.283737  507339 api_server.go:131] duration metric: took 5.53017208s to wait for apiserver health ...
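	The retry loop above is the apiserver health wait: it probes https://192.168.39.103:8443/healthz roughly every 500ms, treats the 500 responses (with the [+]/[-] per-hook breakdown) as "not ready yet", and stops once a 200/ok arrives, about 5.5s in this run. A minimal standalone sketch of that kind of probe in Go follows; it is not minikube's actual api_server.go code, and the insecure TLS handling and 500ms interval are assumptions for illustration only.

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// pollHealthz keeps probing the apiserver /healthz endpoint until it answers
	// 200 OK or the deadline expires, mirroring the retry loop in the log above.
	func pollHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Transport: &http.Transport{
				// The apiserver serves a self-signed certificate here, so this
				// illustrative probe skips verification (assumption, not minikube's code).
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
			Timeout: 5 * time.Second,
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // healthz answered "ok"
				}
				// 500 responses carry the [+]/[-] per-check breakdown seen above.
				fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver did not become healthy within %s", timeout)
	}

	func main() {
		if err := pollHealthz("https://192.168.39.103:8443/healthz", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}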
	I0116 03:44:05.283749  507339 cni.go:84] Creating CNI manager for ""
	I0116 03:44:05.283757  507339 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 03:44:05.285715  507339 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0116 03:44:05.287393  507339 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0116 03:44:05.327883  507339 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
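	The bridge CNI step above writes a 457-byte config to /etc/cni/net.d/1-k8s.conflist over SSH; the payload itself is not reproduced in the log. The sketch below writes a representative bridge + portmap conflist of roughly that shape. The subnet and plugin options are assumptions for illustration, not the bytes minikube actually sent.

	package main

	import (
		"log"
		"os"
	)

	// A representative bridge CNI conflist of the kind written above. The exact
	// contents are not shown in the log, so these field values are assumptions.
	const bridgeConflist = `{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "forceAddress": false,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": {
	        "type": "host-local",
	        "subnet": "10.244.0.0/16"
	      }
	    },
	    {
	      "type": "portmap",
	      "capabilities": { "portMappings": true }
	    }
	  ]
	}`

	func main() {
		// Writing the file locally stands in for the scp shown in the log.
		if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
			log.Fatal(err)
		}
	}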
	I0116 03:44:05.371856  507339 system_pods.go:43] waiting for kube-system pods to appear ...
	I0116 03:44:05.382614  507339 system_pods.go:59] 8 kube-system pods found
	I0116 03:44:05.382656  507339 system_pods.go:61] "coredns-76f75df574-lr95b" [15dc0b11-f7ec-4729-bbfa-79b9649fbad6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0116 03:44:05.382666  507339 system_pods.go:61] "etcd-no-preload-666547" [f98dcc57-5869-4a35-b443-2e6c9beddd88] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0116 03:44:05.382682  507339 system_pods.go:61] "kube-apiserver-no-preload-666547" [3aceae9d-224a-4e8c-a02d-ad4199d8d558] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0116 03:44:05.382699  507339 system_pods.go:61] "kube-controller-manager-no-preload-666547" [af39c89c-3b41-4d6f-b075-16de04a3ecc0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0116 03:44:05.382706  507339 system_pods.go:61] "kube-proxy-dcmrn" [1e91c96f-cbc5-424d-a09e-06e34bf7a2e2] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0116 03:44:05.382714  507339 system_pods.go:61] "kube-scheduler-no-preload-666547" [0f2aa713-aebc-4858-a348-32169021235e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0116 03:44:05.382723  507339 system_pods.go:61] "metrics-server-57f55c9bc5-78vfj" [dbd2d3b2-ec0f-4253-8549-7c4299522c37] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:44:05.382735  507339 system_pods.go:61] "storage-provisioner" [f4e1ba45-217d-41d5-b583-2f60044879bc] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0116 03:44:05.382749  507339 system_pods.go:74] duration metric: took 10.858851ms to wait for pod list to return data ...
	I0116 03:44:05.382760  507339 node_conditions.go:102] verifying NodePressure condition ...
	I0116 03:44:05.391050  507339 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 03:44:05.391112  507339 node_conditions.go:123] node cpu capacity is 2
	I0116 03:44:05.391128  507339 node_conditions.go:105] duration metric: took 8.361426ms to run NodePressure ...
	I0116 03:44:05.391152  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:44:01.840907  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:01.841317  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | unable to find current IP address of domain default-k8s-diff-port-434445 in network mk-default-k8s-diff-port-434445
	I0116 03:44:01.841361  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | I0116 03:44:01.841259  508579 retry.go:31] will retry after 3.769759015s: waiting for machine to come up
	I0116 03:44:05.613594  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:05.614119  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | unable to find current IP address of domain default-k8s-diff-port-434445 in network mk-default-k8s-diff-port-434445
	I0116 03:44:05.614149  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | I0116 03:44:05.614062  508579 retry.go:31] will retry after 3.5833584s: waiting for machine to come up
	I0116 03:44:05.740205  507339 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0116 03:44:05.745269  507339 kubeadm.go:787] kubelet initialised
	I0116 03:44:05.745297  507339 kubeadm.go:788] duration metric: took 5.059802ms waiting for restarted kubelet to initialise ...
	I0116 03:44:05.745306  507339 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 03:44:05.751403  507339 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-lr95b" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:05.761740  507339 pod_ready.go:97] node "no-preload-666547" hosting pod "coredns-76f75df574-lr95b" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-666547" has status "Ready":"False"
	I0116 03:44:05.761784  507339 pod_ready.go:81] duration metric: took 10.344994ms waiting for pod "coredns-76f75df574-lr95b" in "kube-system" namespace to be "Ready" ...
	E0116 03:44:05.761796  507339 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-666547" hosting pod "coredns-76f75df574-lr95b" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-666547" has status "Ready":"False"
	I0116 03:44:05.761812  507339 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-666547" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:05.767627  507339 pod_ready.go:97] node "no-preload-666547" hosting pod "etcd-no-preload-666547" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-666547" has status "Ready":"False"
	I0116 03:44:05.767657  507339 pod_ready.go:81] duration metric: took 5.831478ms waiting for pod "etcd-no-preload-666547" in "kube-system" namespace to be "Ready" ...
	E0116 03:44:05.767669  507339 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-666547" hosting pod "etcd-no-preload-666547" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-666547" has status "Ready":"False"
	I0116 03:44:05.767677  507339 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-666547" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:05.772833  507339 pod_ready.go:97] node "no-preload-666547" hosting pod "kube-apiserver-no-preload-666547" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-666547" has status "Ready":"False"
	I0116 03:44:05.772863  507339 pod_ready.go:81] duration metric: took 5.17797ms waiting for pod "kube-apiserver-no-preload-666547" in "kube-system" namespace to be "Ready" ...
	E0116 03:44:05.772876  507339 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-666547" hosting pod "kube-apiserver-no-preload-666547" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-666547" has status "Ready":"False"
	I0116 03:44:05.772884  507339 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-666547" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:05.779234  507339 pod_ready.go:97] node "no-preload-666547" hosting pod "kube-controller-manager-no-preload-666547" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-666547" has status "Ready":"False"
	I0116 03:44:05.779259  507339 pod_ready.go:81] duration metric: took 6.362264ms waiting for pod "kube-controller-manager-no-preload-666547" in "kube-system" namespace to be "Ready" ...
	E0116 03:44:05.779270  507339 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-666547" hosting pod "kube-controller-manager-no-preload-666547" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-666547" has status "Ready":"False"
	I0116 03:44:05.779277  507339 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-dcmrn" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:06.175807  507339 pod_ready.go:97] node "no-preload-666547" hosting pod "kube-proxy-dcmrn" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-666547" has status "Ready":"False"
	I0116 03:44:06.175846  507339 pod_ready.go:81] duration metric: took 396.551923ms waiting for pod "kube-proxy-dcmrn" in "kube-system" namespace to be "Ready" ...
	E0116 03:44:06.175859  507339 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-666547" hosting pod "kube-proxy-dcmrn" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-666547" has status "Ready":"False"
	I0116 03:44:06.175867  507339 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-666547" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:06.580068  507339 pod_ready.go:97] node "no-preload-666547" hosting pod "kube-scheduler-no-preload-666547" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-666547" has status "Ready":"False"
	I0116 03:44:06.580102  507339 pod_ready.go:81] duration metric: took 404.226447ms waiting for pod "kube-scheduler-no-preload-666547" in "kube-system" namespace to be "Ready" ...
	E0116 03:44:06.580119  507339 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-666547" hosting pod "kube-scheduler-no-preload-666547" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-666547" has status "Ready":"False"
	I0116 03:44:06.580128  507339 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:06.976542  507339 pod_ready.go:97] node "no-preload-666547" hosting pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-666547" has status "Ready":"False"
	I0116 03:44:06.976573  507339 pod_ready.go:81] duration metric: took 396.432925ms waiting for pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace to be "Ready" ...
	E0116 03:44:06.976590  507339 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-666547" hosting pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-666547" has status "Ready":"False"
	I0116 03:44:06.976596  507339 pod_ready.go:38] duration metric: took 1.231281598s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
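	The pod_ready wait above lists the kube-system control-plane pods and skips each one because the node still reports Ready:"False"; once the node flips, the same check passes. A rough client-go sketch of such a readiness poll is shown below; the kubeconfig path and 2s polling interval are assumptions, and this is not minikube's pod_ready.go.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitForSystemPods polls kube-system until every pod reports the Ready
	// condition, the same style of check the wait above performs.
	func waitForSystemPods(clientset *kubernetes.Clientset, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pods, err := clientset.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
			if err == nil && allReady(pods.Items) {
				return nil
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("kube-system pods not Ready within %s", timeout)
	}

	func allReady(pods []corev1.Pod) bool {
		if len(pods) == 0 {
			return false
		}
		for _, pod := range pods {
			ready := false
			for _, cond := range pod.Status.Conditions {
				if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
					ready = true
				}
			}
			if !ready {
				return false
			}
		}
		return true
	}

	func main() {
		// Kubeconfig path is an assumption for this sketch.
		config, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
		if err != nil {
			panic(err)
		}
		clientset, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
		if err := waitForSystemPods(clientset, 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}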
	I0116 03:44:06.976621  507339 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0116 03:44:06.988884  507339 ops.go:34] apiserver oom_adj: -16
	I0116 03:44:06.988916  507339 kubeadm.go:640] restartCluster took 21.755069193s
	I0116 03:44:06.988940  507339 kubeadm.go:406] StartCluster complete in 21.811388098s
	I0116 03:44:06.988970  507339 settings.go:142] acquiring lock: {Name:mk3ded99c06d285b174e76622bb0741c8dd2d8fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:44:06.989066  507339 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17965-468241/kubeconfig
	I0116 03:44:06.990912  507339 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17965-468241/kubeconfig: {Name:mkac3c63bbcba9b4fa1bea3480797757d703fb9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:44:06.991191  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0116 03:44:06.991241  507339 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0116 03:44:06.991341  507339 addons.go:69] Setting storage-provisioner=true in profile "no-preload-666547"
	I0116 03:44:06.991362  507339 addons.go:234] Setting addon storage-provisioner=true in "no-preload-666547"
	I0116 03:44:06.991364  507339 addons.go:69] Setting default-storageclass=true in profile "no-preload-666547"
	W0116 03:44:06.991370  507339 addons.go:243] addon storage-provisioner should already be in state true
	I0116 03:44:06.991388  507339 addons.go:69] Setting metrics-server=true in profile "no-preload-666547"
	I0116 03:44:06.991397  507339 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-666547"
	I0116 03:44:06.991404  507339 addons.go:234] Setting addon metrics-server=true in "no-preload-666547"
	W0116 03:44:06.991412  507339 addons.go:243] addon metrics-server should already be in state true
	I0116 03:44:06.991438  507339 host.go:66] Checking if "no-preload-666547" exists ...
	I0116 03:44:06.991451  507339 host.go:66] Checking if "no-preload-666547" exists ...
	I0116 03:44:06.991460  507339 config.go:182] Loaded profile config "no-preload-666547": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0116 03:44:06.991855  507339 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:44:06.991855  507339 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:44:06.991893  507339 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:44:06.991858  507339 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:44:06.991940  507339 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:44:06.991976  507339 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:44:06.998037  507339 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-666547" context rescaled to 1 replicas
	I0116 03:44:06.998086  507339 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.103 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0116 03:44:07.000312  507339 out.go:177] * Verifying Kubernetes components...
	I0116 03:44:07.001889  507339 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 03:44:07.009057  507339 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41459
	I0116 03:44:07.009097  507339 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38693
	I0116 03:44:07.009596  507339 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:44:07.009735  507339 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:44:07.010178  507339 main.go:141] libmachine: Using API Version  1
	I0116 03:44:07.010195  507339 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:44:07.010368  507339 main.go:141] libmachine: Using API Version  1
	I0116 03:44:07.010392  507339 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:44:07.010412  507339 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33905
	I0116 03:44:07.010763  507339 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:44:07.010822  507339 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:44:07.010829  507339 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:44:07.010945  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetState
	I0116 03:44:07.011314  507339 main.go:141] libmachine: Using API Version  1
	I0116 03:44:07.011346  507339 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:44:07.011955  507339 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:44:07.011956  507339 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:44:07.012052  507339 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:44:07.012511  507339 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:44:07.012547  507339 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:44:07.015214  507339 addons.go:234] Setting addon default-storageclass=true in "no-preload-666547"
	W0116 03:44:07.015237  507339 addons.go:243] addon default-storageclass should already be in state true
	I0116 03:44:07.015269  507339 host.go:66] Checking if "no-preload-666547" exists ...
	I0116 03:44:07.015718  507339 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:44:07.015772  507339 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:44:07.029747  507339 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32807
	I0116 03:44:07.029990  507339 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35757
	I0116 03:44:07.030392  507339 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:44:07.030448  507339 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:44:07.030948  507339 main.go:141] libmachine: Using API Version  1
	I0116 03:44:07.030970  507339 main.go:141] libmachine: Using API Version  1
	I0116 03:44:07.030986  507339 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:44:07.031046  507339 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:44:07.031393  507339 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:44:07.031443  507339 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:44:07.031603  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetState
	I0116 03:44:07.031660  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetState
	I0116 03:44:07.033898  507339 main.go:141] libmachine: (no-preload-666547) Calling .DriverName
	I0116 03:44:07.033990  507339 main.go:141] libmachine: (no-preload-666547) Calling .DriverName
	I0116 03:44:07.036581  507339 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0116 03:44:07.034407  507339 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38687
	I0116 03:44:07.038382  507339 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0116 03:44:07.038420  507339 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0116 03:44:07.038444  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHHostname
	I0116 03:44:07.038499  507339 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 03:44:07.040190  507339 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0116 03:44:07.040211  507339 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0116 03:44:07.040232  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHHostname
	I0116 03:44:07.039010  507339 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:44:07.040908  507339 main.go:141] libmachine: Using API Version  1
	I0116 03:44:07.040931  507339 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:44:07.041538  507339 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:44:07.042268  507339 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:44:07.042319  507339 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:44:07.043270  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:44:07.043665  507339 main.go:141] libmachine: (no-preload-666547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:5f:03", ip: ""} in network mk-no-preload-666547: {Iface:virbr2 ExpiryTime:2024-01-16 04:43:15 +0000 UTC Type:0 Mac:52:54:00:4e:5f:03 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:no-preload-666547 Clientid:01:52:54:00:4e:5f:03}
	I0116 03:44:07.043697  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined IP address 192.168.39.103 and MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:44:07.043730  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:44:07.043966  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHPort
	I0116 03:44:07.044196  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHKeyPath
	I0116 03:44:07.044381  507339 main.go:141] libmachine: (no-preload-666547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:5f:03", ip: ""} in network mk-no-preload-666547: {Iface:virbr2 ExpiryTime:2024-01-16 04:43:15 +0000 UTC Type:0 Mac:52:54:00:4e:5f:03 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:no-preload-666547 Clientid:01:52:54:00:4e:5f:03}
	I0116 03:44:07.044422  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined IP address 192.168.39.103 and MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:44:07.044456  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHUsername
	I0116 03:44:07.044566  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHPort
	I0116 03:44:07.044691  507339 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/no-preload-666547/id_rsa Username:docker}
	I0116 03:44:07.044716  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHKeyPath
	I0116 03:44:07.044878  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHUsername
	I0116 03:44:07.045028  507339 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/no-preload-666547/id_rsa Username:docker}
	I0116 03:44:07.084507  507339 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46191
	I0116 03:44:07.085014  507339 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:44:07.085601  507339 main.go:141] libmachine: Using API Version  1
	I0116 03:44:07.085636  507339 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:44:07.086005  507339 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:44:07.086202  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetState
	I0116 03:44:07.088199  507339 main.go:141] libmachine: (no-preload-666547) Calling .DriverName
	I0116 03:44:07.088513  507339 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0116 03:44:07.088532  507339 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0116 03:44:07.088555  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHHostname
	I0116 03:44:07.092194  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:44:07.092719  507339 main.go:141] libmachine: (no-preload-666547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:5f:03", ip: ""} in network mk-no-preload-666547: {Iface:virbr2 ExpiryTime:2024-01-16 04:43:15 +0000 UTC Type:0 Mac:52:54:00:4e:5f:03 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:no-preload-666547 Clientid:01:52:54:00:4e:5f:03}
	I0116 03:44:07.092745  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined IP address 192.168.39.103 and MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:44:07.092953  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHPort
	I0116 03:44:07.093219  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHKeyPath
	I0116 03:44:07.093384  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHUsername
	I0116 03:44:07.093590  507339 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/no-preload-666547/id_rsa Username:docker}
	I0116 03:44:07.196191  507339 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0116 03:44:07.196219  507339 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0116 03:44:07.201036  507339 node_ready.go:35] waiting up to 6m0s for node "no-preload-666547" to be "Ready" ...
	I0116 03:44:07.201055  507339 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0116 03:44:07.222924  507339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0116 03:44:07.224548  507339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0116 03:44:07.237091  507339 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0116 03:44:07.237119  507339 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0116 03:44:07.289312  507339 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0116 03:44:07.289342  507339 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0116 03:44:07.334708  507339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0116 03:44:07.583740  507339 main.go:141] libmachine: Making call to close driver server
	I0116 03:44:07.583773  507339 main.go:141] libmachine: (no-preload-666547) Calling .Close
	I0116 03:44:07.584079  507339 main.go:141] libmachine: (no-preload-666547) DBG | Closing plugin on server side
	I0116 03:44:07.584135  507339 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:44:07.584146  507339 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:44:07.584155  507339 main.go:141] libmachine: Making call to close driver server
	I0116 03:44:07.584170  507339 main.go:141] libmachine: (no-preload-666547) Calling .Close
	I0116 03:44:07.584405  507339 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:44:07.584423  507339 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:44:07.592304  507339 main.go:141] libmachine: Making call to close driver server
	I0116 03:44:07.592332  507339 main.go:141] libmachine: (no-preload-666547) Calling .Close
	I0116 03:44:07.592608  507339 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:44:07.592656  507339 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:44:07.592663  507339 main.go:141] libmachine: (no-preload-666547) DBG | Closing plugin on server side
	I0116 03:44:08.290558  507339 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.065965685s)
	I0116 03:44:08.290643  507339 main.go:141] libmachine: Making call to close driver server
	I0116 03:44:08.290665  507339 main.go:141] libmachine: (no-preload-666547) Calling .Close
	I0116 03:44:08.291042  507339 main.go:141] libmachine: (no-preload-666547) DBG | Closing plugin on server side
	I0116 03:44:08.291103  507339 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:44:08.291121  507339 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:44:08.291136  507339 main.go:141] libmachine: Making call to close driver server
	I0116 03:44:08.291147  507339 main.go:141] libmachine: (no-preload-666547) Calling .Close
	I0116 03:44:08.291380  507339 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:44:08.291396  507339 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:44:08.291416  507339 main.go:141] libmachine: (no-preload-666547) DBG | Closing plugin on server side
	I0116 03:44:08.468146  507339 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.133348135s)
	I0116 03:44:08.468223  507339 main.go:141] libmachine: Making call to close driver server
	I0116 03:44:08.468248  507339 main.go:141] libmachine: (no-preload-666547) Calling .Close
	I0116 03:44:08.470360  507339 main.go:141] libmachine: (no-preload-666547) DBG | Closing plugin on server side
	I0116 03:44:08.470367  507339 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:44:08.470397  507339 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:44:08.470412  507339 main.go:141] libmachine: Making call to close driver server
	I0116 03:44:08.470423  507339 main.go:141] libmachine: (no-preload-666547) Calling .Close
	I0116 03:44:08.470734  507339 main.go:141] libmachine: (no-preload-666547) DBG | Closing plugin on server side
	I0116 03:44:08.470749  507339 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:44:08.470764  507339 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:44:08.470776  507339 addons.go:470] Verifying addon metrics-server=true in "no-preload-666547"
	I0116 03:44:08.473092  507339 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0116 03:44:04.697359  507510 api_server.go:166] Checking apiserver status ...
	I0116 03:44:04.697510  507510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:04.714690  507510 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:05.197225  507510 api_server.go:166] Checking apiserver status ...
	I0116 03:44:05.197333  507510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:05.213923  507510 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:05.696541  507510 api_server.go:166] Checking apiserver status ...
	I0116 03:44:05.696632  507510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:05.713744  507510 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:06.197249  507510 api_server.go:166] Checking apiserver status ...
	I0116 03:44:06.197369  507510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:06.209148  507510 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:06.696967  507510 api_server.go:166] Checking apiserver status ...
	I0116 03:44:06.697083  507510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:06.709624  507510 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:06.709656  507510 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0116 03:44:06.709665  507510 kubeadm.go:1135] stopping kube-system containers ...
	I0116 03:44:06.709676  507510 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0116 03:44:06.709736  507510 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0116 03:44:06.753286  507510 cri.go:89] found id: ""
	I0116 03:44:06.753370  507510 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0116 03:44:06.769990  507510 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0116 03:44:06.781090  507510 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0116 03:44:06.781168  507510 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0116 03:44:06.790936  507510 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0116 03:44:06.790971  507510 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:44:06.915790  507510 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:44:08.112494  507510 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.196668404s)
	I0116 03:44:08.112528  507510 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:44:08.328365  507510 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:44:08.435410  507510 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:44:08.576950  507510 api_server.go:52] waiting for apiserver process to appear ...
	I0116 03:44:08.577077  507510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:44:09.077263  507510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:44:08.474544  507339 addons.go:505] enable addons completed in 1.483307386s: enabled=[default-storageclass storage-provisioner metrics-server]
	I0116 03:44:09.206584  507339 node_ready.go:58] node "no-preload-666547" has status "Ready":"False"
	I0116 03:44:10.997580  507257 start.go:369] acquired machines lock for "embed-certs-615980" in 1m2.194717115s
	I0116 03:44:10.997669  507257 start.go:96] Skipping create...Using existing machine configuration
	I0116 03:44:10.997681  507257 fix.go:54] fixHost starting: 
	I0116 03:44:10.998101  507257 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:44:10.998135  507257 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:44:11.017060  507257 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38343
	I0116 03:44:11.017687  507257 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:44:11.018295  507257 main.go:141] libmachine: Using API Version  1
	I0116 03:44:11.018326  507257 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:44:11.018673  507257 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:44:11.018879  507257 main.go:141] libmachine: (embed-certs-615980) Calling .DriverName
	I0116 03:44:11.019056  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetState
	I0116 03:44:11.021360  507257 fix.go:102] recreateIfNeeded on embed-certs-615980: state=Stopped err=<nil>
	I0116 03:44:11.021396  507257 main.go:141] libmachine: (embed-certs-615980) Calling .DriverName
	W0116 03:44:11.021577  507257 fix.go:128] unexpected machine state, will restart: <nil>
	I0116 03:44:11.023462  507257 out.go:177] * Restarting existing kvm2 VM for "embed-certs-615980" ...
	I0116 03:44:11.025158  507257 main.go:141] libmachine: (embed-certs-615980) Calling .Start
	I0116 03:44:11.025397  507257 main.go:141] libmachine: (embed-certs-615980) Ensuring networks are active...
	I0116 03:44:11.026354  507257 main.go:141] libmachine: (embed-certs-615980) Ensuring network default is active
	I0116 03:44:11.026830  507257 main.go:141] libmachine: (embed-certs-615980) Ensuring network mk-embed-certs-615980 is active
	I0116 03:44:11.027263  507257 main.go:141] libmachine: (embed-certs-615980) Getting domain xml...
	I0116 03:44:11.028182  507257 main.go:141] libmachine: (embed-certs-615980) Creating domain...
	I0116 03:44:09.198824  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:09.199284  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Found IP for machine: 192.168.50.236
	I0116 03:44:09.199318  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Reserving static IP address...
	I0116 03:44:09.199348  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has current primary IP address 192.168.50.236 and MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:09.199756  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-434445", mac: "52:54:00:78:ea:d5", ip: "192.168.50.236"} in network mk-default-k8s-diff-port-434445: {Iface:virbr1 ExpiryTime:2024-01-16 04:44:01 +0000 UTC Type:0 Mac:52:54:00:78:ea:d5 Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:default-k8s-diff-port-434445 Clientid:01:52:54:00:78:ea:d5}
	I0116 03:44:09.199781  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | skip adding static IP to network mk-default-k8s-diff-port-434445 - found existing host DHCP lease matching {name: "default-k8s-diff-port-434445", mac: "52:54:00:78:ea:d5", ip: "192.168.50.236"}
	I0116 03:44:09.199794  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Reserved static IP address: 192.168.50.236
	I0116 03:44:09.199808  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Waiting for SSH to be available...
	I0116 03:44:09.199832  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | Getting to WaitForSSH function...
	I0116 03:44:09.202093  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:09.202494  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ea:d5", ip: ""} in network mk-default-k8s-diff-port-434445: {Iface:virbr1 ExpiryTime:2024-01-16 04:44:01 +0000 UTC Type:0 Mac:52:54:00:78:ea:d5 Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:default-k8s-diff-port-434445 Clientid:01:52:54:00:78:ea:d5}
	I0116 03:44:09.202529  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined IP address 192.168.50.236 and MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:09.202664  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | Using SSH client type: external
	I0116 03:44:09.202690  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | Using SSH private key: /home/jenkins/minikube-integration/17965-468241/.minikube/machines/default-k8s-diff-port-434445/id_rsa (-rw-------)
	I0116 03:44:09.202723  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.236 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17965-468241/.minikube/machines/default-k8s-diff-port-434445/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0116 03:44:09.202746  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | About to run SSH command:
	I0116 03:44:09.202763  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | exit 0
	I0116 03:44:09.302425  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | SSH cmd err, output: <nil>: 
	I0116 03:44:09.302867  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetConfigRaw
	I0116 03:44:09.303666  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetIP
	I0116 03:44:09.306482  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:09.306884  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ea:d5", ip: ""} in network mk-default-k8s-diff-port-434445: {Iface:virbr1 ExpiryTime:2024-01-16 04:44:01 +0000 UTC Type:0 Mac:52:54:00:78:ea:d5 Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:default-k8s-diff-port-434445 Clientid:01:52:54:00:78:ea:d5}
	I0116 03:44:09.306920  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined IP address 192.168.50.236 and MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:09.307189  507889 profile.go:148] Saving config to /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/default-k8s-diff-port-434445/config.json ...
	I0116 03:44:09.307418  507889 machine.go:88] provisioning docker machine ...
	I0116 03:44:09.307437  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .DriverName
	I0116 03:44:09.307673  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetMachineName
	I0116 03:44:09.307865  507889 buildroot.go:166] provisioning hostname "default-k8s-diff-port-434445"
	I0116 03:44:09.307886  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetMachineName
	I0116 03:44:09.308073  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHHostname
	I0116 03:44:09.310375  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:09.310726  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ea:d5", ip: ""} in network mk-default-k8s-diff-port-434445: {Iface:virbr1 ExpiryTime:2024-01-16 04:44:01 +0000 UTC Type:0 Mac:52:54:00:78:ea:d5 Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:default-k8s-diff-port-434445 Clientid:01:52:54:00:78:ea:d5}
	I0116 03:44:09.310765  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined IP address 192.168.50.236 and MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:09.310920  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHPort
	I0116 03:44:09.311111  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHKeyPath
	I0116 03:44:09.311231  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHKeyPath
	I0116 03:44:09.311384  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHUsername
	I0116 03:44:09.311528  507889 main.go:141] libmachine: Using SSH client type: native
	I0116 03:44:09.311932  507889 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.50.236 22 <nil> <nil>}
	I0116 03:44:09.311949  507889 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-434445 && echo "default-k8s-diff-port-434445" | sudo tee /etc/hostname
	I0116 03:44:09.469340  507889 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-434445
	
	I0116 03:44:09.469384  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHHostname
	I0116 03:44:09.472788  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:09.473108  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ea:d5", ip: ""} in network mk-default-k8s-diff-port-434445: {Iface:virbr1 ExpiryTime:2024-01-16 04:44:01 +0000 UTC Type:0 Mac:52:54:00:78:ea:d5 Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:default-k8s-diff-port-434445 Clientid:01:52:54:00:78:ea:d5}
	I0116 03:44:09.473166  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined IP address 192.168.50.236 and MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:09.473353  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHPort
	I0116 03:44:09.473571  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHKeyPath
	I0116 03:44:09.473768  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHKeyPath
	I0116 03:44:09.473963  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHUsername
	I0116 03:44:09.474171  507889 main.go:141] libmachine: Using SSH client type: native
	I0116 03:44:09.474626  507889 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.50.236 22 <nil> <nil>}
	I0116 03:44:09.474657  507889 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-434445' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-434445/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-434445' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0116 03:44:09.622177  507889 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0116 03:44:09.622223  507889 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17965-468241/.minikube CaCertPath:/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17965-468241/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17965-468241/.minikube}
	I0116 03:44:09.622253  507889 buildroot.go:174] setting up certificates
	I0116 03:44:09.622267  507889 provision.go:83] configureAuth start
	I0116 03:44:09.622280  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetMachineName
	I0116 03:44:09.622649  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetIP
	I0116 03:44:09.625970  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:09.626394  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ea:d5", ip: ""} in network mk-default-k8s-diff-port-434445: {Iface:virbr1 ExpiryTime:2024-01-16 04:44:01 +0000 UTC Type:0 Mac:52:54:00:78:ea:d5 Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:default-k8s-diff-port-434445 Clientid:01:52:54:00:78:ea:d5}
	I0116 03:44:09.626429  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined IP address 192.168.50.236 and MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:09.626603  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHHostname
	I0116 03:44:09.629623  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:09.630022  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ea:d5", ip: ""} in network mk-default-k8s-diff-port-434445: {Iface:virbr1 ExpiryTime:2024-01-16 04:44:01 +0000 UTC Type:0 Mac:52:54:00:78:ea:d5 Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:default-k8s-diff-port-434445 Clientid:01:52:54:00:78:ea:d5}
	I0116 03:44:09.630052  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined IP address 192.168.50.236 and MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:09.630263  507889 provision.go:138] copyHostCerts
	I0116 03:44:09.630354  507889 exec_runner.go:144] found /home/jenkins/minikube-integration/17965-468241/.minikube/ca.pem, removing ...
	I0116 03:44:09.630370  507889 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17965-468241/.minikube/ca.pem
	I0116 03:44:09.630449  507889 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17965-468241/.minikube/ca.pem (1078 bytes)
	I0116 03:44:09.630603  507889 exec_runner.go:144] found /home/jenkins/minikube-integration/17965-468241/.minikube/cert.pem, removing ...
	I0116 03:44:09.630626  507889 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17965-468241/.minikube/cert.pem
	I0116 03:44:09.630661  507889 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17965-468241/.minikube/cert.pem (1123 bytes)
	I0116 03:44:09.630760  507889 exec_runner.go:144] found /home/jenkins/minikube-integration/17965-468241/.minikube/key.pem, removing ...
	I0116 03:44:09.630775  507889 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17965-468241/.minikube/key.pem
	I0116 03:44:09.630805  507889 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17965-468241/.minikube/key.pem (1679 bytes)
	I0116 03:44:09.630891  507889 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17965-468241/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-434445 san=[192.168.50.236 192.168.50.236 localhost 127.0.0.1 minikube default-k8s-diff-port-434445]
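(Editor's note: the provision step above generates a server certificate whose SANs cover the machine IP, localhost, and the profile name. The sketch below shows the same idea with crypto/x509; it is self-signed to stay self-contained, whereas the real flow signs with the CA key pair (ca.pem / ca-key.pem) referenced in the log.)

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Key for the server certificate.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	// Template carrying the SAN list from the provision log above.
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-434445"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(1, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "default-k8s-diff-port-434445"},
		IPAddresses:  []net.IP{net.ParseIP("192.168.50.236"), net.ParseIP("127.0.0.1")},
	}
	// Self-signed for brevity; minikube signs this with its own CA instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}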
	I0116 03:44:10.127058  507889 provision.go:172] copyRemoteCerts
	I0116 03:44:10.127138  507889 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0116 03:44:10.127175  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHHostname
	I0116 03:44:10.130572  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:10.131095  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ea:d5", ip: ""} in network mk-default-k8s-diff-port-434445: {Iface:virbr1 ExpiryTime:2024-01-16 04:44:01 +0000 UTC Type:0 Mac:52:54:00:78:ea:d5 Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:default-k8s-diff-port-434445 Clientid:01:52:54:00:78:ea:d5}
	I0116 03:44:10.131133  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined IP address 192.168.50.236 and MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:10.131313  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHPort
	I0116 03:44:10.131590  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHKeyPath
	I0116 03:44:10.131825  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHUsername
	I0116 03:44:10.132001  507889 sshutil.go:53] new ssh client: &{IP:192.168.50.236 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/default-k8s-diff-port-434445/id_rsa Username:docker}
	I0116 03:44:10.238263  507889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0116 03:44:10.269567  507889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0116 03:44:10.295065  507889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0116 03:44:10.323347  507889 provision.go:86] duration metric: configureAuth took 701.062063ms
	I0116 03:44:10.323391  507889 buildroot.go:189] setting minikube options for container-runtime
	I0116 03:44:10.323667  507889 config.go:182] Loaded profile config "default-k8s-diff-port-434445": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 03:44:10.323774  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHHostname
	I0116 03:44:10.326825  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:10.327222  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ea:d5", ip: ""} in network mk-default-k8s-diff-port-434445: {Iface:virbr1 ExpiryTime:2024-01-16 04:44:01 +0000 UTC Type:0 Mac:52:54:00:78:ea:d5 Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:default-k8s-diff-port-434445 Clientid:01:52:54:00:78:ea:d5}
	I0116 03:44:10.327266  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined IP address 192.168.50.236 and MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:10.327423  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHPort
	I0116 03:44:10.327682  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHKeyPath
	I0116 03:44:10.327883  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHKeyPath
	I0116 03:44:10.328077  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHUsername
	I0116 03:44:10.328269  507889 main.go:141] libmachine: Using SSH client type: native
	I0116 03:44:10.328743  507889 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.50.236 22 <nil> <nil>}
	I0116 03:44:10.328778  507889 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0116 03:44:10.700188  507889 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0116 03:44:10.700221  507889 machine.go:91] provisioned docker machine in 1.392790129s
	I0116 03:44:10.700232  507889 start.go:300] post-start starting for "default-k8s-diff-port-434445" (driver="kvm2")
	I0116 03:44:10.700244  507889 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0116 03:44:10.700261  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .DriverName
	I0116 03:44:10.700745  507889 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0116 03:44:10.700786  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHHostname
	I0116 03:44:10.704466  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:10.705001  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ea:d5", ip: ""} in network mk-default-k8s-diff-port-434445: {Iface:virbr1 ExpiryTime:2024-01-16 04:44:01 +0000 UTC Type:0 Mac:52:54:00:78:ea:d5 Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:default-k8s-diff-port-434445 Clientid:01:52:54:00:78:ea:d5}
	I0116 03:44:10.705045  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined IP address 192.168.50.236 and MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:10.705278  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHPort
	I0116 03:44:10.705503  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHKeyPath
	I0116 03:44:10.705735  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHUsername
	I0116 03:44:10.705912  507889 sshutil.go:53] new ssh client: &{IP:192.168.50.236 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/default-k8s-diff-port-434445/id_rsa Username:docker}
	I0116 03:44:10.807625  507889 ssh_runner.go:195] Run: cat /etc/os-release
	I0116 03:44:10.813392  507889 info.go:137] Remote host: Buildroot 2021.02.12
	I0116 03:44:10.813428  507889 filesync.go:126] Scanning /home/jenkins/minikube-integration/17965-468241/.minikube/addons for local assets ...
	I0116 03:44:10.813519  507889 filesync.go:126] Scanning /home/jenkins/minikube-integration/17965-468241/.minikube/files for local assets ...
	I0116 03:44:10.813596  507889 filesync.go:149] local asset: /home/jenkins/minikube-integration/17965-468241/.minikube/files/etc/ssl/certs/4754782.pem -> 4754782.pem in /etc/ssl/certs
	I0116 03:44:10.813687  507889 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0116 03:44:10.824028  507889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/files/etc/ssl/certs/4754782.pem --> /etc/ssl/certs/4754782.pem (1708 bytes)
	I0116 03:44:10.853453  507889 start.go:303] post-start completed in 153.201453ms
	I0116 03:44:10.853493  507889 fix.go:56] fixHost completed within 22.144172966s
	I0116 03:44:10.853543  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHHostname
	I0116 03:44:10.856529  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:10.856907  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ea:d5", ip: ""} in network mk-default-k8s-diff-port-434445: {Iface:virbr1 ExpiryTime:2024-01-16 04:44:01 +0000 UTC Type:0 Mac:52:54:00:78:ea:d5 Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:default-k8s-diff-port-434445 Clientid:01:52:54:00:78:ea:d5}
	I0116 03:44:10.856967  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined IP address 192.168.50.236 and MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:10.857185  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHPort
	I0116 03:44:10.857438  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHKeyPath
	I0116 03:44:10.857636  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHKeyPath
	I0116 03:44:10.857790  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHUsername
	I0116 03:44:10.857974  507889 main.go:141] libmachine: Using SSH client type: native
	I0116 03:44:10.858502  507889 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.50.236 22 <nil> <nil>}
	I0116 03:44:10.858528  507889 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0116 03:44:10.997398  507889 main.go:141] libmachine: SSH cmd err, output: <nil>: 1705376650.933903671
	
	I0116 03:44:10.997426  507889 fix.go:206] guest clock: 1705376650.933903671
	I0116 03:44:10.997436  507889 fix.go:219] Guest: 2024-01-16 03:44:10.933903671 +0000 UTC Remote: 2024-01-16 03:44:10.853498317 +0000 UTC m=+234.302480786 (delta=80.405354ms)
	I0116 03:44:10.997464  507889 fix.go:190] guest clock delta is within tolerance: 80.405354ms
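(Editor's note: the guest clock check above runs `date +%s.%N` in the VM and compares it with the host-side timestamp, accepting a small skew (about 80ms here). The sketch below redoes that arithmetic with the two values captured in this run; the 1s tolerance is illustrative, not the exact value fix.go uses.)

package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

func main() {
	// Guest clock as reported by `date +%s.%N` in this run, and the host-side
	// timestamp the log compares it against.
	guestOut := "1705376650.933903671"
	secs, err := strconv.ParseFloat(guestOut, 64)
	if err != nil {
		panic(err)
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	host, err := time.Parse(time.RFC3339Nano, "2024-01-16T03:44:10.853498317Z")
	if err != nil {
		panic(err)
	}
	delta := guest.Sub(host)
	// 1s is an illustrative tolerance, not the exact value minikube uses.
	if math.Abs(delta.Seconds()) < 1.0 {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
	}
}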
	I0116 03:44:10.997471  507889 start.go:83] releasing machines lock for "default-k8s-diff-port-434445", held for 22.288188395s
	I0116 03:44:10.997517  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .DriverName
	I0116 03:44:10.997857  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetIP
	I0116 03:44:11.001410  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:11.001814  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ea:d5", ip: ""} in network mk-default-k8s-diff-port-434445: {Iface:virbr1 ExpiryTime:2024-01-16 04:44:01 +0000 UTC Type:0 Mac:52:54:00:78:ea:d5 Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:default-k8s-diff-port-434445 Clientid:01:52:54:00:78:ea:d5}
	I0116 03:44:11.001864  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined IP address 192.168.50.236 and MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:11.002016  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .DriverName
	I0116 03:44:11.002649  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .DriverName
	I0116 03:44:11.002923  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .DriverName
	I0116 03:44:11.003015  507889 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0116 03:44:11.003068  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHHostname
	I0116 03:44:11.003258  507889 ssh_runner.go:195] Run: cat /version.json
	I0116 03:44:11.003294  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHHostname
	I0116 03:44:11.006383  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:11.006699  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:11.006800  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ea:d5", ip: ""} in network mk-default-k8s-diff-port-434445: {Iface:virbr1 ExpiryTime:2024-01-16 04:44:01 +0000 UTC Type:0 Mac:52:54:00:78:ea:d5 Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:default-k8s-diff-port-434445 Clientid:01:52:54:00:78:ea:d5}
	I0116 03:44:11.006850  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined IP address 192.168.50.236 and MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:11.007123  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHPort
	I0116 03:44:11.007230  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ea:d5", ip: ""} in network mk-default-k8s-diff-port-434445: {Iface:virbr1 ExpiryTime:2024-01-16 04:44:01 +0000 UTC Type:0 Mac:52:54:00:78:ea:d5 Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:default-k8s-diff-port-434445 Clientid:01:52:54:00:78:ea:d5}
	I0116 03:44:11.007330  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined IP address 192.168.50.236 and MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:11.007353  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHKeyPath
	I0116 03:44:11.007378  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHPort
	I0116 03:44:11.007585  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHUsername
	I0116 03:44:11.007597  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHKeyPath
	I0116 03:44:11.007737  507889 sshutil.go:53] new ssh client: &{IP:192.168.50.236 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/default-k8s-diff-port-434445/id_rsa Username:docker}
	I0116 03:44:11.007795  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHUsername
	I0116 03:44:11.007980  507889 sshutil.go:53] new ssh client: &{IP:192.168.50.236 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/default-k8s-diff-port-434445/id_rsa Username:docker}
	I0116 03:44:11.139882  507889 ssh_runner.go:195] Run: systemctl --version
	I0116 03:44:11.147082  507889 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0116 03:44:11.317582  507889 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0116 03:44:11.324567  507889 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0116 03:44:11.324656  507889 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0116 03:44:11.348193  507889 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0116 03:44:11.348225  507889 start.go:475] detecting cgroup driver to use...
	I0116 03:44:11.348319  507889 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0116 03:44:11.367049  507889 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0116 03:44:11.386632  507889 docker.go:217] disabling cri-docker service (if available) ...
	I0116 03:44:11.386713  507889 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0116 03:44:11.409551  507889 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0116 03:44:11.424599  507889 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0116 03:44:11.586480  507889 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0116 03:44:11.733770  507889 docker.go:233] disabling docker service ...
	I0116 03:44:11.733855  507889 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0116 03:44:11.751184  507889 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0116 03:44:11.766970  507889 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0116 03:44:11.903645  507889 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0116 03:44:12.017100  507889 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0116 03:44:12.031725  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0116 03:44:12.052091  507889 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0116 03:44:12.052179  507889 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 03:44:12.063115  507889 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0116 03:44:12.063219  507889 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 03:44:12.073109  507889 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 03:44:12.083438  507889 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
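(Editor's note: the three sed invocations above point CRI-O at the pause image, force the cgroupfs cgroup manager, and pin conmon to the "pod" cgroup. The sketch below applies equivalent rewrites in-process with regexp; the sample config is made up for illustration and is not the actual contents of /etc/crio/crio.conf.d/02-crio.conf.)

package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := `[crio.image]
pause_image = "registry.k8s.io/pause:3.6"
[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
`
	// pause_image -> registry.k8s.io/pause:3.9
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
	// cgroup_manager -> cgroupfs
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	// drop any existing conmon_cgroup line, then add one after cgroup_manager
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
	conf = regexp.MustCompile(`(?m)^(cgroup_manager = .*)$`).
		ReplaceAllString(conf, "$1\nconmon_cgroup = \"pod\"")
	fmt.Print(conf)
}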
	I0116 03:44:12.095783  507889 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0116 03:44:12.107816  507889 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0116 03:44:12.117997  507889 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0116 03:44:12.118077  507889 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0116 03:44:12.132997  507889 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0116 03:44:12.145200  507889 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0116 03:44:12.266786  507889 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0116 03:44:12.460779  507889 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0116 03:44:12.460892  507889 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0116 03:44:12.469200  507889 start.go:543] Will wait 60s for crictl version
	I0116 03:44:12.469305  507889 ssh_runner.go:195] Run: which crictl
	I0116 03:44:12.473761  507889 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0116 03:44:12.536262  507889 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0116 03:44:12.536382  507889 ssh_runner.go:195] Run: crio --version
	I0116 03:44:12.593212  507889 ssh_runner.go:195] Run: crio --version
	I0116 03:44:12.650197  507889 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0116 03:44:09.577389  507510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:44:10.077774  507510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:44:10.578076  507510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:44:10.613091  507510 api_server.go:72] duration metric: took 2.036140794s to wait for apiserver process to appear ...
	I0116 03:44:10.613124  507510 api_server.go:88] waiting for apiserver healthz status ...
	I0116 03:44:10.613148  507510 api_server.go:253] Checking apiserver healthz at https://192.168.61.167:8443/healthz ...
	I0116 03:44:11.706731  507339 node_ready.go:58] node "no-preload-666547" has status "Ready":"False"
	I0116 03:44:13.713926  507339 node_ready.go:49] node "no-preload-666547" has status "Ready":"True"
	I0116 03:44:13.713958  507339 node_ready.go:38] duration metric: took 6.512893933s waiting for node "no-preload-666547" to be "Ready" ...
	I0116 03:44:13.713972  507339 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 03:44:13.727930  507339 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-lr95b" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:14.740352  507339 pod_ready.go:92] pod "coredns-76f75df574-lr95b" in "kube-system" namespace has status "Ready":"True"
	I0116 03:44:14.740392  507339 pod_ready.go:81] duration metric: took 1.012371035s waiting for pod "coredns-76f75df574-lr95b" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:14.740408  507339 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-666547" in "kube-system" namespace to be "Ready" ...
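(Editor's note: the pod_ready loop above polls each system-critical pod until its Ready condition is True. A minimal client-go sketch of that wait is shown below; it assumes a kubeconfig at the default home path, and the pod name is simply the etcd pod from this log.)

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	for {
		pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "etcd-no-preload-666547", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		select {
		case <-ctx.Done():
			fmt.Println("timed out waiting for pod to be Ready")
			return
		case <-time.After(500 * time.Millisecond):
		}
	}
}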
	I0116 03:44:11.442223  507257 main.go:141] libmachine: (embed-certs-615980) Waiting to get IP...
	I0116 03:44:11.443346  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:11.443787  507257 main.go:141] libmachine: (embed-certs-615980) DBG | unable to find current IP address of domain embed-certs-615980 in network mk-embed-certs-615980
	I0116 03:44:11.443851  507257 main.go:141] libmachine: (embed-certs-615980) DBG | I0116 03:44:11.443761  508731 retry.go:31] will retry after 306.7144ms: waiting for machine to come up
	I0116 03:44:11.752574  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:11.753186  507257 main.go:141] libmachine: (embed-certs-615980) DBG | unable to find current IP address of domain embed-certs-615980 in network mk-embed-certs-615980
	I0116 03:44:11.753217  507257 main.go:141] libmachine: (embed-certs-615980) DBG | I0116 03:44:11.753126  508731 retry.go:31] will retry after 270.011585ms: waiting for machine to come up
	I0116 03:44:12.024942  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:12.025507  507257 main.go:141] libmachine: (embed-certs-615980) DBG | unable to find current IP address of domain embed-certs-615980 in network mk-embed-certs-615980
	I0116 03:44:12.025548  507257 main.go:141] libmachine: (embed-certs-615980) DBG | I0116 03:44:12.025433  508731 retry.go:31] will retry after 328.680313ms: waiting for machine to come up
	I0116 03:44:12.355989  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:12.356557  507257 main.go:141] libmachine: (embed-certs-615980) DBG | unable to find current IP address of domain embed-certs-615980 in network mk-embed-certs-615980
	I0116 03:44:12.356582  507257 main.go:141] libmachine: (embed-certs-615980) DBG | I0116 03:44:12.356493  508731 retry.go:31] will retry after 598.194786ms: waiting for machine to come up
	I0116 03:44:12.956170  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:12.956754  507257 main.go:141] libmachine: (embed-certs-615980) DBG | unable to find current IP address of domain embed-certs-615980 in network mk-embed-certs-615980
	I0116 03:44:12.956782  507257 main.go:141] libmachine: (embed-certs-615980) DBG | I0116 03:44:12.956673  508731 retry.go:31] will retry after 713.891978ms: waiting for machine to come up
	I0116 03:44:13.672728  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:13.673741  507257 main.go:141] libmachine: (embed-certs-615980) DBG | unable to find current IP address of domain embed-certs-615980 in network mk-embed-certs-615980
	I0116 03:44:13.673772  507257 main.go:141] libmachine: (embed-certs-615980) DBG | I0116 03:44:13.673636  508731 retry.go:31] will retry after 789.579297ms: waiting for machine to come up
	I0116 03:44:14.464913  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:14.465532  507257 main.go:141] libmachine: (embed-certs-615980) DBG | unable to find current IP address of domain embed-certs-615980 in network mk-embed-certs-615980
	I0116 03:44:14.465567  507257 main.go:141] libmachine: (embed-certs-615980) DBG | I0116 03:44:14.465446  508731 retry.go:31] will retry after 744.319122ms: waiting for machine to come up
	I0116 03:44:15.211748  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:15.212356  507257 main.go:141] libmachine: (embed-certs-615980) DBG | unable to find current IP address of domain embed-certs-615980 in network mk-embed-certs-615980
	I0116 03:44:15.212389  507257 main.go:141] libmachine: (embed-certs-615980) DBG | I0116 03:44:15.212282  508731 retry.go:31] will retry after 1.231175582s: waiting for machine to come up
	I0116 03:44:12.652092  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetIP
	I0116 03:44:12.655815  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:12.656308  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ea:d5", ip: ""} in network mk-default-k8s-diff-port-434445: {Iface:virbr1 ExpiryTime:2024-01-16 04:44:01 +0000 UTC Type:0 Mac:52:54:00:78:ea:d5 Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:default-k8s-diff-port-434445 Clientid:01:52:54:00:78:ea:d5}
	I0116 03:44:12.656383  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined IP address 192.168.50.236 and MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:12.656790  507889 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0116 03:44:12.661880  507889 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 03:44:12.677695  507889 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0116 03:44:12.677794  507889 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 03:44:12.731676  507889 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0116 03:44:12.731794  507889 ssh_runner.go:195] Run: which lz4
	I0116 03:44:12.736614  507889 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0116 03:44:12.741554  507889 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0116 03:44:12.741595  507889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0116 03:44:15.047223  507889 crio.go:444] Took 2.310653 seconds to copy over tarball
	I0116 03:44:15.047386  507889 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
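(Editor's note: because no preloaded images were found on the guest, the runner copies the preload tarball over and unpacks it under /var with lz4. The sketch below runs the same extraction command via os/exec; it assumes the lz4 tool and a copied /preloaded.tar.lz4 are present on the machine it runs on.)

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same extraction the ssh_runner performs above.
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	out, err := cmd.CombinedOutput()
	if err != nil {
		fmt.Printf("extract failed: %v\n%s", err, out)
		return
	}
	fmt.Println("preloaded images extracted under /var")
}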
	I0116 03:44:15.614559  507510 api_server.go:269] stopped: https://192.168.61.167:8443/healthz: Get "https://192.168.61.167:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0116 03:44:15.614617  507510 api_server.go:253] Checking apiserver healthz at https://192.168.61.167:8443/healthz ...
	I0116 03:44:16.992197  507510 api_server.go:279] https://192.168.61.167:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0116 03:44:16.992236  507510 api_server.go:103] status: https://192.168.61.167:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0116 03:44:16.992255  507510 api_server.go:253] Checking apiserver healthz at https://192.168.61.167:8443/healthz ...
	I0116 03:44:17.098327  507510 api_server.go:279] https://192.168.61.167:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0116 03:44:17.098365  507510 api_server.go:103] status: https://192.168.61.167:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0116 03:44:17.113518  507510 api_server.go:253] Checking apiserver healthz at https://192.168.61.167:8443/healthz ...
	I0116 03:44:17.133276  507510 api_server.go:279] https://192.168.61.167:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	W0116 03:44:17.133308  507510 api_server.go:103] status: https://192.168.61.167:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	I0116 03:44:17.613843  507510 api_server.go:253] Checking apiserver healthz at https://192.168.61.167:8443/healthz ...
	I0116 03:44:17.621074  507510 api_server.go:279] https://192.168.61.167:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0116 03:44:17.621131  507510 api_server.go:103] status: https://192.168.61.167:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0116 03:44:18.113648  507510 api_server.go:253] Checking apiserver healthz at https://192.168.61.167:8443/healthz ...
	I0116 03:44:18.936452  507510 api_server.go:279] https://192.168.61.167:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0116 03:44:18.936492  507510 api_server.go:103] status: https://192.168.61.167:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0116 03:44:18.936521  507510 api_server.go:253] Checking apiserver healthz at https://192.168.61.167:8443/healthz ...
	I0116 03:44:19.466220  507510 api_server.go:279] https://192.168.61.167:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0116 03:44:19.466259  507510 api_server.go:103] status: https://192.168.61.167:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0116 03:44:19.466278  507510 api_server.go:253] Checking apiserver healthz at https://192.168.61.167:8443/healthz ...
	I0116 03:44:16.750170  507339 pod_ready.go:102] pod "etcd-no-preload-666547" in "kube-system" namespace has status "Ready":"False"
	I0116 03:44:19.438168  507339 pod_ready.go:92] pod "etcd-no-preload-666547" in "kube-system" namespace has status "Ready":"True"
	I0116 03:44:19.438207  507339 pod_ready.go:81] duration metric: took 4.697789344s waiting for pod "etcd-no-preload-666547" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:19.438224  507339 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-666547" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:19.445845  507339 pod_ready.go:92] pod "kube-apiserver-no-preload-666547" in "kube-system" namespace has status "Ready":"True"
	I0116 03:44:19.445875  507339 pod_ready.go:81] duration metric: took 7.641191ms waiting for pod "kube-apiserver-no-preload-666547" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:19.445889  507339 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-666547" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:19.452468  507339 pod_ready.go:92] pod "kube-controller-manager-no-preload-666547" in "kube-system" namespace has status "Ready":"True"
	I0116 03:44:19.452491  507339 pod_ready.go:81] duration metric: took 6.593311ms waiting for pod "kube-controller-manager-no-preload-666547" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:19.452500  507339 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-dcmrn" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:19.459542  507339 pod_ready.go:92] pod "kube-proxy-dcmrn" in "kube-system" namespace has status "Ready":"True"
	I0116 03:44:19.459576  507339 pod_ready.go:81] duration metric: took 7.067817ms waiting for pod "kube-proxy-dcmrn" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:19.459591  507339 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-666547" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:19.966827  507339 pod_ready.go:92] pod "kube-scheduler-no-preload-666547" in "kube-system" namespace has status "Ready":"True"
	I0116 03:44:19.966867  507339 pod_ready.go:81] duration metric: took 507.26823ms waiting for pod "kube-scheduler-no-preload-666547" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:19.966878  507339 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:19.946145  507510 api_server.go:279] https://192.168.61.167:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0116 03:44:19.946209  507510 api_server.go:103] status: https://192.168.61.167:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0116 03:44:19.946230  507510 api_server.go:253] Checking apiserver healthz at https://192.168.61.167:8443/healthz ...
	I0116 03:44:20.259035  507510 api_server.go:279] https://192.168.61.167:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0116 03:44:20.259091  507510 api_server.go:103] status: https://192.168.61.167:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0116 03:44:20.259142  507510 api_server.go:253] Checking apiserver healthz at https://192.168.61.167:8443/healthz ...
	I0116 03:44:20.330196  507510 api_server.go:279] https://192.168.61.167:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0116 03:44:20.330237  507510 api_server.go:103] status: https://192.168.61.167:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0116 03:44:20.613624  507510 api_server.go:253] Checking apiserver healthz at https://192.168.61.167:8443/healthz ...
	I0116 03:44:20.621956  507510 api_server.go:279] https://192.168.61.167:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0116 03:44:20.622008  507510 api_server.go:103] status: https://192.168.61.167:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0116 03:44:21.113536  507510 api_server.go:253] Checking apiserver healthz at https://192.168.61.167:8443/healthz ...
	I0116 03:44:21.125326  507510 api_server.go:279] https://192.168.61.167:8443/healthz returned 200:
	ok
	I0116 03:44:21.137555  507510 api_server.go:141] control plane version: v1.16.0
	I0116 03:44:21.137602  507510 api_server.go:131] duration metric: took 10.524468396s to wait for apiserver health ...
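(Editor's note: the sequence above keeps hitting /healthz, working through 403s while RBAC bootstraps and 500s while post-start hooks finish, until the endpoint returns 200. The sketch below is a minimal version of that polling loop; it skips TLS verification to stay self-contained, whereas the real check trusts the cluster CA, and the endpoint is the apiserver address from this run.)

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.61.167:8443/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("healthz returned 200: %s\n", body)
				return
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for apiserver health")
}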
	I0116 03:44:21.137616  507510 cni.go:84] Creating CNI manager for ""
	I0116 03:44:21.137625  507510 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 03:44:21.139682  507510 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0116 03:44:16.445685  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:16.446216  507257 main.go:141] libmachine: (embed-certs-615980) DBG | unable to find current IP address of domain embed-certs-615980 in network mk-embed-certs-615980
	I0116 03:44:16.446246  507257 main.go:141] libmachine: (embed-certs-615980) DBG | I0116 03:44:16.446137  508731 retry.go:31] will retry after 1.400972s: waiting for machine to come up
	I0116 03:44:17.848447  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:17.848964  507257 main.go:141] libmachine: (embed-certs-615980) DBG | unable to find current IP address of domain embed-certs-615980 in network mk-embed-certs-615980
	I0116 03:44:17.848991  507257 main.go:141] libmachine: (embed-certs-615980) DBG | I0116 03:44:17.848916  508731 retry.go:31] will retry after 2.293115324s: waiting for machine to come up
	I0116 03:44:20.145242  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:20.145899  507257 main.go:141] libmachine: (embed-certs-615980) DBG | unable to find current IP address of domain embed-certs-615980 in network mk-embed-certs-615980
	I0116 03:44:20.145933  507257 main.go:141] libmachine: (embed-certs-615980) DBG | I0116 03:44:20.145842  508731 retry.go:31] will retry after 2.158183619s: waiting for machine to come up
	I0116 03:44:18.744370  507889 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.696918616s)
	I0116 03:44:18.744426  507889 crio.go:451] Took 3.697118 seconds to extract the tarball
	I0116 03:44:18.744440  507889 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0116 03:44:18.792685  507889 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 03:44:18.868262  507889 crio.go:496] all images are preloaded for cri-o runtime.
	I0116 03:44:18.868291  507889 cache_images.go:84] Images are preloaded, skipping loading
	I0116 03:44:18.868382  507889 ssh_runner.go:195] Run: crio config
	I0116 03:44:18.954026  507889 cni.go:84] Creating CNI manager for ""
	I0116 03:44:18.954060  507889 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 03:44:18.954085  507889 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0116 03:44:18.954138  507889 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.236 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-434445 NodeName:default-k8s-diff-port-434445 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.236"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.236 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0116 03:44:18.954362  507889 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.236
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-434445"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.236
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.236"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0116 03:44:18.954483  507889 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-434445 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.236
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-434445 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0116 03:44:18.954557  507889 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0116 03:44:18.966046  507889 binaries.go:44] Found k8s binaries, skipping transfer
	I0116 03:44:18.966143  507889 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0116 03:44:18.977441  507889 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (388 bytes)
	I0116 03:44:18.997304  507889 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0116 03:44:19.016597  507889 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2115 bytes)
	I0116 03:44:19.035635  507889 ssh_runner.go:195] Run: grep 192.168.50.236	control-plane.minikube.internal$ /etc/hosts
	I0116 03:44:19.039882  507889 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.236	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 03:44:19.053342  507889 certs.go:56] Setting up /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/default-k8s-diff-port-434445 for IP: 192.168.50.236
	I0116 03:44:19.053383  507889 certs.go:190] acquiring lock for shared ca certs: {Name:mkab5aeea3e0ed69f3592dc8f418e07c6075b130 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:44:19.053580  507889 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17965-468241/.minikube/ca.key
	I0116 03:44:19.053655  507889 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17965-468241/.minikube/proxy-client-ca.key
	I0116 03:44:19.053773  507889 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/default-k8s-diff-port-434445/client.key
	I0116 03:44:19.053920  507889 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/default-k8s-diff-port-434445/apiserver.key.4e4dee8d
	I0116 03:44:19.053994  507889 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/default-k8s-diff-port-434445/proxy-client.key
	I0116 03:44:19.054154  507889 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/475478.pem (1338 bytes)
	W0116 03:44:19.054198  507889 certs.go:433] ignoring /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/475478_empty.pem, impossibly tiny 0 bytes
	I0116 03:44:19.054215  507889 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca-key.pem (1679 bytes)
	I0116 03:44:19.054249  507889 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem (1078 bytes)
	I0116 03:44:19.054286  507889 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/cert.pem (1123 bytes)
	I0116 03:44:19.054318  507889 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/key.pem (1679 bytes)
	I0116 03:44:19.054373  507889 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17965-468241/.minikube/files/etc/ssl/certs/4754782.pem (1708 bytes)
	I0116 03:44:19.055259  507889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/default-k8s-diff-port-434445/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0116 03:44:19.086636  507889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/default-k8s-diff-port-434445/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0116 03:44:19.117759  507889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/default-k8s-diff-port-434445/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0116 03:44:19.144530  507889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/default-k8s-diff-port-434445/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0116 03:44:19.170423  507889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0116 03:44:19.198224  507889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0116 03:44:19.223514  507889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0116 03:44:19.250858  507889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0116 03:44:19.276922  507889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/files/etc/ssl/certs/4754782.pem --> /usr/share/ca-certificates/4754782.pem (1708 bytes)
	I0116 03:44:19.302621  507889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0116 03:44:19.330021  507889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/certs/475478.pem --> /usr/share/ca-certificates/475478.pem (1338 bytes)
	I0116 03:44:19.358108  507889 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0116 03:44:19.379126  507889 ssh_runner.go:195] Run: openssl version
	I0116 03:44:19.386675  507889 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0116 03:44:19.398759  507889 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0116 03:44:19.404201  507889 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 16 02:35 /usr/share/ca-certificates/minikubeCA.pem
	I0116 03:44:19.404283  507889 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0116 03:44:19.411067  507889 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0116 03:44:19.422608  507889 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/475478.pem && ln -fs /usr/share/ca-certificates/475478.pem /etc/ssl/certs/475478.pem"
	I0116 03:44:19.434422  507889 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/475478.pem
	I0116 03:44:19.440018  507889 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 16 02:44 /usr/share/ca-certificates/475478.pem
	I0116 03:44:19.440103  507889 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/475478.pem
	I0116 03:44:19.446469  507889 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/475478.pem /etc/ssl/certs/51391683.0"
	I0116 03:44:19.460130  507889 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4754782.pem && ln -fs /usr/share/ca-certificates/4754782.pem /etc/ssl/certs/4754782.pem"
	I0116 03:44:19.473886  507889 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4754782.pem
	I0116 03:44:19.478781  507889 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 16 02:44 /usr/share/ca-certificates/4754782.pem
	I0116 03:44:19.478858  507889 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4754782.pem
	I0116 03:44:19.484826  507889 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4754782.pem /etc/ssl/certs/3ec20f2e.0"
	I0116 03:44:19.495710  507889 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0116 03:44:19.500842  507889 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0116 03:44:19.507646  507889 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0116 03:44:19.515247  507889 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0116 03:44:19.523964  507889 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0116 03:44:19.532379  507889 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0116 03:44:19.540067  507889 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0116 03:44:19.548614  507889 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-434445 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-434445 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.50.236 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 03:44:19.548812  507889 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0116 03:44:19.548900  507889 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0116 03:44:19.595803  507889 cri.go:89] found id: ""
	I0116 03:44:19.595910  507889 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0116 03:44:19.610615  507889 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0116 03:44:19.610647  507889 kubeadm.go:636] restartCluster start
	I0116 03:44:19.610726  507889 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0116 03:44:19.624175  507889 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:19.625683  507889 kubeconfig.go:92] found "default-k8s-diff-port-434445" server: "https://192.168.50.236:8444"
	I0116 03:44:19.628685  507889 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0116 03:44:19.640309  507889 api_server.go:166] Checking apiserver status ...
	I0116 03:44:19.640390  507889 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:19.653938  507889 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:20.141193  507889 api_server.go:166] Checking apiserver status ...
	I0116 03:44:20.141285  507889 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:20.154331  507889 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:20.640562  507889 api_server.go:166] Checking apiserver status ...
	I0116 03:44:20.640691  507889 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:20.657774  507889 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:21.141268  507889 api_server.go:166] Checking apiserver status ...
	I0116 03:44:21.141371  507889 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:21.158792  507889 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:21.141315  507510 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0116 03:44:21.168450  507510 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0116 03:44:21.206907  507510 system_pods.go:43] waiting for kube-system pods to appear ...
	I0116 03:44:21.222998  507510 system_pods.go:59] 7 kube-system pods found
	I0116 03:44:21.223072  507510 system_pods.go:61] "coredns-5644d7b6d9-7q4wc" [003ba660-e3c5-4a98-be67-75e43dc32b37] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0116 03:44:21.223084  507510 system_pods.go:61] "etcd-old-k8s-version-696770" [b029f446-15b1-4720-af3a-b651b778fc0d] Running
	I0116 03:44:21.223094  507510 system_pods.go:61] "kube-apiserver-old-k8s-version-696770" [a9597e33-db8c-48e5-b119-d6d97d8d8e3f] Running
	I0116 03:44:21.223114  507510 system_pods.go:61] "kube-controller-manager-old-k8s-version-696770" [901fd518-04a1-4de0-baa2-08c7d57a587d] Running
	I0116 03:44:21.223123  507510 system_pods.go:61] "kube-proxy-9pfdj" [ac00ed93-abe8-4f53-8e63-fa63589fbf5c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0116 03:44:21.223134  507510 system_pods.go:61] "kube-scheduler-old-k8s-version-696770" [a8d74e76-6c22-4d82-b954-4025dff18279] Running
	I0116 03:44:21.223146  507510 system_pods.go:61] "storage-provisioner" [b04dacf9-8137-4f22-ae36-147d08fd9b60] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0116 03:44:21.223158  507510 system_pods.go:74] duration metric: took 16.220665ms to wait for pod list to return data ...
	I0116 03:44:21.223173  507510 node_conditions.go:102] verifying NodePressure condition ...
	I0116 03:44:21.228670  507510 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 03:44:21.228715  507510 node_conditions.go:123] node cpu capacity is 2
	I0116 03:44:21.228734  507510 node_conditions.go:105] duration metric: took 5.552228ms to run NodePressure ...
	I0116 03:44:21.228760  507510 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:44:21.576565  507510 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0116 03:44:21.581017  507510 retry.go:31] will retry after 323.975879ms: kubelet not initialised
	I0116 03:44:21.914790  507510 retry.go:31] will retry after 258.393503ms: kubelet not initialised
	I0116 03:44:22.180592  507510 retry.go:31] will retry after 582.791922ms: kubelet not initialised
	I0116 03:44:22.769880  507510 retry.go:31] will retry after 961.779974ms: kubelet not initialised
	I0116 03:44:23.739015  507510 retry.go:31] will retry after 686.353156ms: kubelet not initialised
	I0116 03:44:24.431951  507510 retry.go:31] will retry after 2.073440094s: kubelet not initialised
	I0116 03:44:21.976301  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:44:23.977710  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:44:22.305212  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:22.305701  507257 main.go:141] libmachine: (embed-certs-615980) DBG | unable to find current IP address of domain embed-certs-615980 in network mk-embed-certs-615980
	I0116 03:44:22.305732  507257 main.go:141] libmachine: (embed-certs-615980) DBG | I0116 03:44:22.305662  508731 retry.go:31] will retry after 3.080436267s: waiting for machine to come up
	I0116 03:44:25.389414  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:25.389850  507257 main.go:141] libmachine: (embed-certs-615980) DBG | unable to find current IP address of domain embed-certs-615980 in network mk-embed-certs-615980
	I0116 03:44:25.389875  507257 main.go:141] libmachine: (embed-certs-615980) DBG | I0116 03:44:25.389828  508731 retry.go:31] will retry after 2.730339967s: waiting for machine to come up
	I0116 03:44:21.640823  507889 api_server.go:166] Checking apiserver status ...
	I0116 03:44:21.641083  507889 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:21.656391  507889 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:22.141134  507889 api_server.go:166] Checking apiserver status ...
	I0116 03:44:22.141242  507889 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:22.157848  507889 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:22.641247  507889 api_server.go:166] Checking apiserver status ...
	I0116 03:44:22.641371  507889 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:22.654425  507889 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:23.140719  507889 api_server.go:166] Checking apiserver status ...
	I0116 03:44:23.140827  507889 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:23.153823  507889 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:23.641193  507889 api_server.go:166] Checking apiserver status ...
	I0116 03:44:23.641298  507889 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:23.654061  507889 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:24.141196  507889 api_server.go:166] Checking apiserver status ...
	I0116 03:44:24.141290  507889 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:24.161415  507889 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:24.640416  507889 api_server.go:166] Checking apiserver status ...
	I0116 03:44:24.640514  507889 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:24.670258  507889 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:25.140571  507889 api_server.go:166] Checking apiserver status ...
	I0116 03:44:25.140673  507889 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:25.157823  507889 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:25.641188  507889 api_server.go:166] Checking apiserver status ...
	I0116 03:44:25.641284  507889 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:25.655917  507889 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:26.141241  507889 api_server.go:166] Checking apiserver status ...
	I0116 03:44:26.141357  507889 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:26.157447  507889 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:26.511961  507510 retry.go:31] will retry after 4.006598367s: kubelet not initialised
	I0116 03:44:26.473653  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:44:28.474914  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:44:28.122340  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:28.122704  507257 main.go:141] libmachine: (embed-certs-615980) DBG | unable to find current IP address of domain embed-certs-615980 in network mk-embed-certs-615980
	I0116 03:44:28.122735  507257 main.go:141] libmachine: (embed-certs-615980) DBG | I0116 03:44:28.122676  508731 retry.go:31] will retry after 4.170800657s: waiting for machine to come up
	I0116 03:44:26.641408  507889 api_server.go:166] Checking apiserver status ...
	I0116 03:44:26.641510  507889 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:26.654505  507889 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:27.141033  507889 api_server.go:166] Checking apiserver status ...
	I0116 03:44:27.141129  507889 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:27.154208  507889 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:27.640701  507889 api_server.go:166] Checking apiserver status ...
	I0116 03:44:27.640785  507889 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:27.653964  507889 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:28.141330  507889 api_server.go:166] Checking apiserver status ...
	I0116 03:44:28.141406  507889 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:28.153419  507889 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:28.640986  507889 api_server.go:166] Checking apiserver status ...
	I0116 03:44:28.641076  507889 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:28.654357  507889 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:29.141250  507889 api_server.go:166] Checking apiserver status ...
	I0116 03:44:29.141335  507889 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:29.154899  507889 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:29.640619  507889 api_server.go:166] Checking apiserver status ...
	I0116 03:44:29.640717  507889 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:29.654653  507889 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:29.654692  507889 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0116 03:44:29.654701  507889 kubeadm.go:1135] stopping kube-system containers ...
	I0116 03:44:29.654713  507889 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0116 03:44:29.654769  507889 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0116 03:44:29.697617  507889 cri.go:89] found id: ""
	I0116 03:44:29.697719  507889 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0116 03:44:29.719069  507889 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0116 03:44:29.735791  507889 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0116 03:44:29.735872  507889 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0116 03:44:29.748788  507889 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0116 03:44:29.748823  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:44:29.874894  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:44:30.787232  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:44:31.009234  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:44:31.136220  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:44:31.215330  507889 api_server.go:52] waiting for apiserver process to appear ...
	I0116 03:44:31.215416  507889 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:44:30.526372  507510 retry.go:31] will retry after 4.363756335s: kubelet not initialised
	I0116 03:44:32.295936  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:32.296442  507257 main.go:141] libmachine: (embed-certs-615980) Found IP for machine: 192.168.72.159
	I0116 03:44:32.296483  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has current primary IP address 192.168.72.159 and MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:32.296499  507257 main.go:141] libmachine: (embed-certs-615980) Reserving static IP address...
	I0116 03:44:32.297078  507257 main.go:141] libmachine: (embed-certs-615980) DBG | found host DHCP lease matching {name: "embed-certs-615980", mac: "52:54:00:d4:a6:40", ip: "192.168.72.159"} in network mk-embed-certs-615980: {Iface:virbr4 ExpiryTime:2024-01-16 04:44:24 +0000 UTC Type:0 Mac:52:54:00:d4:a6:40 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-615980 Clientid:01:52:54:00:d4:a6:40}
	I0116 03:44:32.297121  507257 main.go:141] libmachine: (embed-certs-615980) Reserved static IP address: 192.168.72.159
	I0116 03:44:32.297140  507257 main.go:141] libmachine: (embed-certs-615980) DBG | skip adding static IP to network mk-embed-certs-615980 - found existing host DHCP lease matching {name: "embed-certs-615980", mac: "52:54:00:d4:a6:40", ip: "192.168.72.159"}
	I0116 03:44:32.297160  507257 main.go:141] libmachine: (embed-certs-615980) DBG | Getting to WaitForSSH function...
	I0116 03:44:32.297179  507257 main.go:141] libmachine: (embed-certs-615980) Waiting for SSH to be available...
	I0116 03:44:32.299440  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:32.299839  507257 main.go:141] libmachine: (embed-certs-615980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:a6:40", ip: ""} in network mk-embed-certs-615980: {Iface:virbr4 ExpiryTime:2024-01-16 04:44:24 +0000 UTC Type:0 Mac:52:54:00:d4:a6:40 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-615980 Clientid:01:52:54:00:d4:a6:40}
	I0116 03:44:32.299870  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined IP address 192.168.72.159 and MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:32.300064  507257 main.go:141] libmachine: (embed-certs-615980) DBG | Using SSH client type: external
	I0116 03:44:32.300098  507257 main.go:141] libmachine: (embed-certs-615980) DBG | Using SSH private key: /home/jenkins/minikube-integration/17965-468241/.minikube/machines/embed-certs-615980/id_rsa (-rw-------)
	I0116 03:44:32.300133  507257 main.go:141] libmachine: (embed-certs-615980) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.159 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17965-468241/.minikube/machines/embed-certs-615980/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0116 03:44:32.300153  507257 main.go:141] libmachine: (embed-certs-615980) DBG | About to run SSH command:
	I0116 03:44:32.300172  507257 main.go:141] libmachine: (embed-certs-615980) DBG | exit 0
	I0116 03:44:32.396718  507257 main.go:141] libmachine: (embed-certs-615980) DBG | SSH cmd err, output: <nil>: 
	I0116 03:44:32.397111  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetConfigRaw
	I0116 03:44:32.397901  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetIP
	I0116 03:44:32.400997  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:32.401502  507257 main.go:141] libmachine: (embed-certs-615980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:a6:40", ip: ""} in network mk-embed-certs-615980: {Iface:virbr4 ExpiryTime:2024-01-16 04:44:24 +0000 UTC Type:0 Mac:52:54:00:d4:a6:40 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-615980 Clientid:01:52:54:00:d4:a6:40}
	I0116 03:44:32.401540  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined IP address 192.168.72.159 and MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:32.402036  507257 profile.go:148] Saving config to /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/embed-certs-615980/config.json ...
	I0116 03:44:32.402259  507257 machine.go:88] provisioning docker machine ...
	I0116 03:44:32.402281  507257 main.go:141] libmachine: (embed-certs-615980) Calling .DriverName
	I0116 03:44:32.402539  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetMachineName
	I0116 03:44:32.402759  507257 buildroot.go:166] provisioning hostname "embed-certs-615980"
	I0116 03:44:32.402786  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetMachineName
	I0116 03:44:32.402966  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHHostname
	I0116 03:44:32.405935  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:32.406344  507257 main.go:141] libmachine: (embed-certs-615980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:a6:40", ip: ""} in network mk-embed-certs-615980: {Iface:virbr4 ExpiryTime:2024-01-16 04:44:24 +0000 UTC Type:0 Mac:52:54:00:d4:a6:40 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-615980 Clientid:01:52:54:00:d4:a6:40}
	I0116 03:44:32.406384  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined IP address 192.168.72.159 and MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:32.406585  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHPort
	I0116 03:44:32.406821  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHKeyPath
	I0116 03:44:32.407054  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHKeyPath
	I0116 03:44:32.407219  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHUsername
	I0116 03:44:32.407399  507257 main.go:141] libmachine: Using SSH client type: native
	I0116 03:44:32.407754  507257 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.72.159 22 <nil> <nil>}
	I0116 03:44:32.407768  507257 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-615980 && echo "embed-certs-615980" | sudo tee /etc/hostname
	I0116 03:44:32.561584  507257 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-615980
	
	I0116 03:44:32.561618  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHHostname
	I0116 03:44:32.564566  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:32.565004  507257 main.go:141] libmachine: (embed-certs-615980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:a6:40", ip: ""} in network mk-embed-certs-615980: {Iface:virbr4 ExpiryTime:2024-01-16 04:44:24 +0000 UTC Type:0 Mac:52:54:00:d4:a6:40 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-615980 Clientid:01:52:54:00:d4:a6:40}
	I0116 03:44:32.565033  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined IP address 192.168.72.159 and MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:32.565232  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHPort
	I0116 03:44:32.565481  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHKeyPath
	I0116 03:44:32.565672  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHKeyPath
	I0116 03:44:32.565843  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHUsername
	I0116 03:44:32.566045  507257 main.go:141] libmachine: Using SSH client type: native
	I0116 03:44:32.566521  507257 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.72.159 22 <nil> <nil>}
	I0116 03:44:32.566549  507257 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-615980' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-615980/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-615980' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0116 03:44:32.718945  507257 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0116 03:44:32.719005  507257 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17965-468241/.minikube CaCertPath:/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17965-468241/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17965-468241/.minikube}
	I0116 03:44:32.719037  507257 buildroot.go:174] setting up certificates
	I0116 03:44:32.719051  507257 provision.go:83] configureAuth start
	I0116 03:44:32.719081  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetMachineName
	I0116 03:44:32.719397  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetIP
	I0116 03:44:32.722474  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:32.722938  507257 main.go:141] libmachine: (embed-certs-615980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:a6:40", ip: ""} in network mk-embed-certs-615980: {Iface:virbr4 ExpiryTime:2024-01-16 04:44:24 +0000 UTC Type:0 Mac:52:54:00:d4:a6:40 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-615980 Clientid:01:52:54:00:d4:a6:40}
	I0116 03:44:32.722972  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined IP address 192.168.72.159 and MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:32.723136  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHHostname
	I0116 03:44:32.725821  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:32.726246  507257 main.go:141] libmachine: (embed-certs-615980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:a6:40", ip: ""} in network mk-embed-certs-615980: {Iface:virbr4 ExpiryTime:2024-01-16 04:44:24 +0000 UTC Type:0 Mac:52:54:00:d4:a6:40 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-615980 Clientid:01:52:54:00:d4:a6:40}
	I0116 03:44:32.726277  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined IP address 192.168.72.159 and MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:32.726448  507257 provision.go:138] copyHostCerts
	I0116 03:44:32.726535  507257 exec_runner.go:144] found /home/jenkins/minikube-integration/17965-468241/.minikube/ca.pem, removing ...
	I0116 03:44:32.726622  507257 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17965-468241/.minikube/ca.pem
	I0116 03:44:32.726769  507257 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17965-468241/.minikube/ca.pem (1078 bytes)
	I0116 03:44:32.726971  507257 exec_runner.go:144] found /home/jenkins/minikube-integration/17965-468241/.minikube/cert.pem, removing ...
	I0116 03:44:32.726983  507257 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17965-468241/.minikube/cert.pem
	I0116 03:44:32.727015  507257 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17965-468241/.minikube/cert.pem (1123 bytes)
	I0116 03:44:32.727099  507257 exec_runner.go:144] found /home/jenkins/minikube-integration/17965-468241/.minikube/key.pem, removing ...
	I0116 03:44:32.727116  507257 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17965-468241/.minikube/key.pem
	I0116 03:44:32.727144  507257 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17965-468241/.minikube/key.pem (1679 bytes)
	I0116 03:44:32.727212  507257 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17965-468241/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca-key.pem org=jenkins.embed-certs-615980 san=[192.168.72.159 192.168.72.159 localhost 127.0.0.1 minikube embed-certs-615980]
	I0116 03:44:32.921694  507257 provision.go:172] copyRemoteCerts
	I0116 03:44:32.921764  507257 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0116 03:44:32.921798  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHHostname
	I0116 03:44:32.924951  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:32.925329  507257 main.go:141] libmachine: (embed-certs-615980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:a6:40", ip: ""} in network mk-embed-certs-615980: {Iface:virbr4 ExpiryTime:2024-01-16 04:44:24 +0000 UTC Type:0 Mac:52:54:00:d4:a6:40 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-615980 Clientid:01:52:54:00:d4:a6:40}
	I0116 03:44:32.925362  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined IP address 192.168.72.159 and MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:32.925534  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHPort
	I0116 03:44:32.925855  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHKeyPath
	I0116 03:44:32.926135  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHUsername
	I0116 03:44:32.926390  507257 sshutil.go:53] new ssh client: &{IP:192.168.72.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/embed-certs-615980/id_rsa Username:docker}
	I0116 03:44:33.025856  507257 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0116 03:44:33.055403  507257 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0116 03:44:33.087908  507257 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0116 03:44:33.116847  507257 provision.go:86] duration metric: configureAuth took 397.777297ms
	I0116 03:44:33.116886  507257 buildroot.go:189] setting minikube options for container-runtime
	I0116 03:44:33.117136  507257 config.go:182] Loaded profile config "embed-certs-615980": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 03:44:33.117267  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHHostname
	I0116 03:44:33.120452  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:33.120915  507257 main.go:141] libmachine: (embed-certs-615980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:a6:40", ip: ""} in network mk-embed-certs-615980: {Iface:virbr4 ExpiryTime:2024-01-16 04:44:24 +0000 UTC Type:0 Mac:52:54:00:d4:a6:40 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-615980 Clientid:01:52:54:00:d4:a6:40}
	I0116 03:44:33.120949  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined IP address 192.168.72.159 and MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:33.121189  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHPort
	I0116 03:44:33.121442  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHKeyPath
	I0116 03:44:33.121636  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHKeyPath
	I0116 03:44:33.121778  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHUsername
	I0116 03:44:33.121966  507257 main.go:141] libmachine: Using SSH client type: native
	I0116 03:44:33.122333  507257 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.72.159 22 <nil> <nil>}
	I0116 03:44:33.122359  507257 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0116 03:44:33.486009  507257 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0116 03:44:33.486147  507257 machine.go:91] provisioned docker machine in 1.083869863s
	I0116 03:44:33.486202  507257 start.go:300] post-start starting for "embed-certs-615980" (driver="kvm2")
	I0116 03:44:33.486239  507257 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0116 03:44:33.486282  507257 main.go:141] libmachine: (embed-certs-615980) Calling .DriverName
	I0116 03:44:33.486719  507257 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0116 03:44:33.486755  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHHostname
	I0116 03:44:33.490226  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:33.490676  507257 main.go:141] libmachine: (embed-certs-615980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:a6:40", ip: ""} in network mk-embed-certs-615980: {Iface:virbr4 ExpiryTime:2024-01-16 04:44:24 +0000 UTC Type:0 Mac:52:54:00:d4:a6:40 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-615980 Clientid:01:52:54:00:d4:a6:40}
	I0116 03:44:33.490743  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined IP address 192.168.72.159 and MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:33.490863  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHPort
	I0116 03:44:33.491117  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHKeyPath
	I0116 03:44:33.491299  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHUsername
	I0116 03:44:33.491478  507257 sshutil.go:53] new ssh client: &{IP:192.168.72.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/embed-certs-615980/id_rsa Username:docker}
	I0116 03:44:33.590039  507257 ssh_runner.go:195] Run: cat /etc/os-release
	I0116 03:44:33.596095  507257 info.go:137] Remote host: Buildroot 2021.02.12
	I0116 03:44:33.596124  507257 filesync.go:126] Scanning /home/jenkins/minikube-integration/17965-468241/.minikube/addons for local assets ...
	I0116 03:44:33.596206  507257 filesync.go:126] Scanning /home/jenkins/minikube-integration/17965-468241/.minikube/files for local assets ...
	I0116 03:44:33.596295  507257 filesync.go:149] local asset: /home/jenkins/minikube-integration/17965-468241/.minikube/files/etc/ssl/certs/4754782.pem -> 4754782.pem in /etc/ssl/certs
	I0116 03:44:33.596437  507257 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0116 03:44:33.609260  507257 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/files/etc/ssl/certs/4754782.pem --> /etc/ssl/certs/4754782.pem (1708 bytes)
	I0116 03:44:33.642578  507257 start.go:303] post-start completed in 156.336318ms
	I0116 03:44:33.642651  507257 fix.go:56] fixHost completed within 22.644969219s
	I0116 03:44:33.642703  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHHostname
	I0116 03:44:33.645616  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:33.645988  507257 main.go:141] libmachine: (embed-certs-615980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:a6:40", ip: ""} in network mk-embed-certs-615980: {Iface:virbr4 ExpiryTime:2024-01-16 04:44:24 +0000 UTC Type:0 Mac:52:54:00:d4:a6:40 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-615980 Clientid:01:52:54:00:d4:a6:40}
	I0116 03:44:33.646017  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined IP address 192.168.72.159 and MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:33.646277  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHPort
	I0116 03:44:33.646514  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHKeyPath
	I0116 03:44:33.646720  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHKeyPath
	I0116 03:44:33.646910  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHUsername
	I0116 03:44:33.647179  507257 main.go:141] libmachine: Using SSH client type: native
	I0116 03:44:33.647655  507257 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.72.159 22 <nil> <nil>}
	I0116 03:44:33.647682  507257 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0116 03:44:33.781805  507257 main.go:141] libmachine: SSH cmd err, output: <nil>: 1705376673.706960834
	
	I0116 03:44:33.781839  507257 fix.go:206] guest clock: 1705376673.706960834
	I0116 03:44:33.781850  507257 fix.go:219] Guest: 2024-01-16 03:44:33.706960834 +0000 UTC Remote: 2024-01-16 03:44:33.642657737 +0000 UTC m=+367.429386706 (delta=64.303097ms)
	I0116 03:44:33.781879  507257 fix.go:190] guest clock delta is within tolerance: 64.303097ms
	I0116 03:44:33.781890  507257 start.go:83] releasing machines lock for "embed-certs-615980", held for 22.784266536s
	I0116 03:44:33.781917  507257 main.go:141] libmachine: (embed-certs-615980) Calling .DriverName
	I0116 03:44:33.782225  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetIP
	I0116 03:44:33.785113  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:33.785495  507257 main.go:141] libmachine: (embed-certs-615980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:a6:40", ip: ""} in network mk-embed-certs-615980: {Iface:virbr4 ExpiryTime:2024-01-16 04:44:24 +0000 UTC Type:0 Mac:52:54:00:d4:a6:40 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-615980 Clientid:01:52:54:00:d4:a6:40}
	I0116 03:44:33.785530  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined IP address 192.168.72.159 and MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:33.785718  507257 main.go:141] libmachine: (embed-certs-615980) Calling .DriverName
	I0116 03:44:33.786427  507257 main.go:141] libmachine: (embed-certs-615980) Calling .DriverName
	I0116 03:44:33.786655  507257 main.go:141] libmachine: (embed-certs-615980) Calling .DriverName
	I0116 03:44:33.786751  507257 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0116 03:44:33.786799  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHHostname
	I0116 03:44:33.786938  507257 ssh_runner.go:195] Run: cat /version.json
	I0116 03:44:33.786967  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHHostname
	I0116 03:44:33.790084  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:33.790288  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:33.790454  507257 main.go:141] libmachine: (embed-certs-615980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:a6:40", ip: ""} in network mk-embed-certs-615980: {Iface:virbr4 ExpiryTime:2024-01-16 04:44:24 +0000 UTC Type:0 Mac:52:54:00:d4:a6:40 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-615980 Clientid:01:52:54:00:d4:a6:40}
	I0116 03:44:33.790485  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined IP address 192.168.72.159 and MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:33.790655  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHPort
	I0116 03:44:33.790787  507257 main.go:141] libmachine: (embed-certs-615980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:a6:40", ip: ""} in network mk-embed-certs-615980: {Iface:virbr4 ExpiryTime:2024-01-16 04:44:24 +0000 UTC Type:0 Mac:52:54:00:d4:a6:40 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-615980 Clientid:01:52:54:00:d4:a6:40}
	I0116 03:44:33.790831  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined IP address 192.168.72.159 and MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:33.790899  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHKeyPath
	I0116 03:44:33.791007  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHPort
	I0116 03:44:33.791091  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHUsername
	I0116 03:44:33.791193  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHKeyPath
	I0116 03:44:33.791269  507257 sshutil.go:53] new ssh client: &{IP:192.168.72.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/embed-certs-615980/id_rsa Username:docker}
	I0116 03:44:33.791363  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHUsername
	I0116 03:44:33.791515  507257 sshutil.go:53] new ssh client: &{IP:192.168.72.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/embed-certs-615980/id_rsa Username:docker}
	I0116 03:44:33.907036  507257 ssh_runner.go:195] Run: systemctl --version
	I0116 03:44:33.913776  507257 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0116 03:44:34.062888  507257 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0116 03:44:34.070435  507257 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0116 03:44:34.070539  507257 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0116 03:44:34.091957  507257 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0116 03:44:34.091993  507257 start.go:475] detecting cgroup driver to use...
	I0116 03:44:34.092099  507257 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0116 03:44:34.108007  507257 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0116 03:44:34.123223  507257 docker.go:217] disabling cri-docker service (if available) ...
	I0116 03:44:34.123314  507257 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0116 03:44:34.141242  507257 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0116 03:44:34.157053  507257 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0116 03:44:34.274186  507257 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0116 03:44:34.427694  507257 docker.go:233] disabling docker service ...
	I0116 03:44:34.427785  507257 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0116 03:44:34.442789  507257 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0116 03:44:34.459761  507257 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0116 03:44:34.592453  507257 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0116 03:44:34.715991  507257 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0116 03:44:34.732175  507257 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0116 03:44:34.751885  507257 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0116 03:44:34.751989  507257 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 03:44:34.763769  507257 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0116 03:44:34.763853  507257 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 03:44:34.774444  507257 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 03:44:34.784975  507257 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 03:44:34.797634  507257 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0116 03:44:34.810962  507257 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0116 03:44:34.822224  507257 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0116 03:44:34.822314  507257 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0116 03:44:34.840500  507257 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0116 03:44:34.852285  507257 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0116 03:44:34.970828  507257 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0116 03:44:35.163097  507257 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0116 03:44:35.163242  507257 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0116 03:44:35.169041  507257 start.go:543] Will wait 60s for crictl version
	I0116 03:44:35.169150  507257 ssh_runner.go:195] Run: which crictl
	I0116 03:44:35.173367  507257 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0116 03:44:35.224951  507257 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0116 03:44:35.225043  507257 ssh_runner.go:195] Run: crio --version
	I0116 03:44:35.275230  507257 ssh_runner.go:195] Run: crio --version
	I0116 03:44:35.329852  507257 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0116 03:44:30.981714  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:44:33.476735  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:44:35.480715  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:44:35.331327  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetIP
	I0116 03:44:35.334148  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:35.334618  507257 main.go:141] libmachine: (embed-certs-615980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:a6:40", ip: ""} in network mk-embed-certs-615980: {Iface:virbr4 ExpiryTime:2024-01-16 04:44:24 +0000 UTC Type:0 Mac:52:54:00:d4:a6:40 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-615980 Clientid:01:52:54:00:d4:a6:40}
	I0116 03:44:35.334674  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined IP address 192.168.72.159 and MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:35.335166  507257 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0116 03:44:35.341389  507257 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 03:44:35.358757  507257 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0116 03:44:35.358866  507257 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 03:44:35.407869  507257 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0116 03:44:35.407983  507257 ssh_runner.go:195] Run: which lz4
	I0116 03:44:35.412533  507257 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0116 03:44:35.417266  507257 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0116 03:44:35.417303  507257 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0116 03:44:31.715897  507889 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:44:32.215978  507889 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:44:32.716439  507889 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:44:33.215609  507889 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:44:33.715785  507889 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:44:33.738611  507889 api_server.go:72] duration metric: took 2.523281585s to wait for apiserver process to appear ...
	I0116 03:44:33.738642  507889 api_server.go:88] waiting for apiserver healthz status ...
	I0116 03:44:33.738663  507889 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8444/healthz ...
	I0116 03:44:37.601011  507889 api_server.go:279] https://192.168.50.236:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0116 03:44:37.601052  507889 api_server.go:103] status: https://192.168.50.236:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0116 03:44:37.601072  507889 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8444/healthz ...
	I0116 03:44:37.678390  507889 api_server.go:279] https://192.168.50.236:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0116 03:44:37.678428  507889 api_server.go:103] status: https://192.168.50.236:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0116 03:44:37.739725  507889 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8444/healthz ...
	I0116 03:44:37.767384  507889 api_server.go:279] https://192.168.50.236:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0116 03:44:37.767425  507889 api_server.go:103] status: https://192.168.50.236:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0116 03:44:38.238992  507889 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8444/healthz ...
	I0116 03:44:38.253946  507889 api_server.go:279] https://192.168.50.236:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0116 03:44:38.253991  507889 api_server.go:103] status: https://192.168.50.236:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0116 03:44:38.738786  507889 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8444/healthz ...
	I0116 03:44:38.749091  507889 api_server.go:279] https://192.168.50.236:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0116 03:44:38.749135  507889 api_server.go:103] status: https://192.168.50.236:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0116 03:44:39.239814  507889 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8444/healthz ...
	I0116 03:44:39.245859  507889 api_server.go:279] https://192.168.50.236:8444/healthz returned 200:
	ok
	I0116 03:44:39.259198  507889 api_server.go:141] control plane version: v1.28.4
	I0116 03:44:39.259250  507889 api_server.go:131] duration metric: took 5.520598732s to wait for apiserver health ...
	I0116 03:44:39.259265  507889 cni.go:84] Creating CNI manager for ""
	I0116 03:44:39.259277  507889 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 03:44:39.261389  507889 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0116 03:44:34.897727  507510 retry.go:31] will retry after 6.879493351s: kubelet not initialised
	I0116 03:44:37.975671  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:44:39.979781  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:44:37.524763  507257 crio.go:444] Took 2.112278 seconds to copy over tarball
	I0116 03:44:37.524843  507257 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0116 03:44:40.706515  507257 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.181629969s)
	I0116 03:44:40.706559  507257 crio.go:451] Took 3.181765 seconds to extract the tarball
	I0116 03:44:40.706574  507257 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0116 03:44:40.751207  507257 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 03:44:40.905548  507257 crio.go:496] all images are preloaded for cri-o runtime.
	I0116 03:44:40.905578  507257 cache_images.go:84] Images are preloaded, skipping loading
	I0116 03:44:40.905659  507257 ssh_runner.go:195] Run: crio config
	I0116 03:44:40.965159  507257 cni.go:84] Creating CNI manager for ""
	I0116 03:44:40.965194  507257 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 03:44:40.965220  507257 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0116 03:44:40.965263  507257 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.159 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-615980 NodeName:embed-certs-615980 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.159"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.159 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0116 03:44:40.965474  507257 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.159
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-615980"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.159
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.159"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0116 03:44:40.965578  507257 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-615980 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.159
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-615980 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0116 03:44:40.965634  507257 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0116 03:44:40.976015  507257 binaries.go:44] Found k8s binaries, skipping transfer
	I0116 03:44:40.976153  507257 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0116 03:44:40.986610  507257 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0116 03:44:41.005297  507257 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0116 03:44:41.026383  507257 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I0116 03:44:41.046554  507257 ssh_runner.go:195] Run: grep 192.168.72.159	control-plane.minikube.internal$ /etc/hosts
	I0116 03:44:41.050940  507257 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.159	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 03:44:41.064516  507257 certs.go:56] Setting up /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/embed-certs-615980 for IP: 192.168.72.159
	I0116 03:44:41.064568  507257 certs.go:190] acquiring lock for shared ca certs: {Name:mkab5aeea3e0ed69f3592dc8f418e07c6075b130 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:44:41.064749  507257 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17965-468241/.minikube/ca.key
	I0116 03:44:41.064813  507257 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17965-468241/.minikube/proxy-client-ca.key
	I0116 03:44:41.064917  507257 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/embed-certs-615980/client.key
	I0116 03:44:41.064989  507257 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/embed-certs-615980/apiserver.key.fc98a751
	I0116 03:44:41.065044  507257 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/embed-certs-615980/proxy-client.key
	I0116 03:44:41.065202  507257 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/475478.pem (1338 bytes)
	W0116 03:44:41.065241  507257 certs.go:433] ignoring /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/475478_empty.pem, impossibly tiny 0 bytes
	I0116 03:44:41.065257  507257 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca-key.pem (1679 bytes)
	I0116 03:44:41.065294  507257 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem (1078 bytes)
	I0116 03:44:41.065331  507257 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/cert.pem (1123 bytes)
	I0116 03:44:41.065374  507257 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/key.pem (1679 bytes)
	I0116 03:44:41.065432  507257 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17965-468241/.minikube/files/etc/ssl/certs/4754782.pem (1708 bytes)
	I0116 03:44:41.066147  507257 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/embed-certs-615980/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0116 03:44:41.092714  507257 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/embed-certs-615980/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0116 03:44:41.119109  507257 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/embed-certs-615980/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0116 03:44:41.147059  507257 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/embed-certs-615980/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0116 03:44:41.176357  507257 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0116 03:44:41.202082  507257 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0116 03:44:41.228263  507257 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0116 03:44:41.252892  507257 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0116 03:44:39.263119  507889 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0116 03:44:39.290175  507889 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0116 03:44:39.319009  507889 system_pods.go:43] waiting for kube-system pods to appear ...
	I0116 03:44:39.341195  507889 system_pods.go:59] 9 kube-system pods found
	I0116 03:44:39.341251  507889 system_pods.go:61] "coredns-5dd5756b68-f8shl" [18bddcd6-4305-4856-b590-e16c362768e3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0116 03:44:39.341264  507889 system_pods.go:61] "coredns-5dd5756b68-pmx8n" [9d3c9941-8938-4d4a-b6d8-9b542ca6f1ca] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0116 03:44:39.341280  507889 system_pods.go:61] "etcd-default-k8s-diff-port-434445" [a2327159-d034-4f62-a8f5-14062a41c75d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0116 03:44:39.341293  507889 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-434445" [75c5fc5e-86b1-42cf-bb5c-8a1f8b4e13db] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0116 03:44:39.341310  507889 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-434445" [2568dd4f-124d-4c4f-8651-3b89b2b54983] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0116 03:44:39.341323  507889 system_pods.go:61] "kube-proxy-dcbqg" [eba1f9bf-6aa7-40cd-b57c-745c2d0cc414] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0116 03:44:39.341335  507889 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-434445" [479952a1-8c3d-49b4-bd22-fef988e6b83d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0116 03:44:39.341353  507889 system_pods.go:61] "metrics-server-57f55c9bc5-894n2" [46e4892a-d026-4a9d-88bc-128e92848808] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:44:39.341369  507889 system_pods.go:61] "storage-provisioner" [16fd4585-3d75-40c3-a28d-4134375f4e3d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0116 03:44:39.341391  507889 system_pods.go:74] duration metric: took 22.354405ms to wait for pod list to return data ...
	I0116 03:44:39.341403  507889 node_conditions.go:102] verifying NodePressure condition ...
	I0116 03:44:39.349904  507889 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 03:44:39.349954  507889 node_conditions.go:123] node cpu capacity is 2
	I0116 03:44:39.349972  507889 node_conditions.go:105] duration metric: took 8.557095ms to run NodePressure ...
	I0116 03:44:39.350000  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:44:39.798882  507889 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0116 03:44:39.816480  507889 kubeadm.go:787] kubelet initialised
	I0116 03:44:39.816514  507889 kubeadm.go:788] duration metric: took 17.598017ms waiting for restarted kubelet to initialise ...
	I0116 03:44:39.816527  507889 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 03:44:39.834946  507889 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-f8shl" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:39.854785  507889 pod_ready.go:97] node "default-k8s-diff-port-434445" hosting pod "coredns-5dd5756b68-f8shl" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-434445" has status "Ready":"False"
	I0116 03:44:39.854832  507889 pod_ready.go:81] duration metric: took 19.846427ms waiting for pod "coredns-5dd5756b68-f8shl" in "kube-system" namespace to be "Ready" ...
	E0116 03:44:39.854846  507889 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-434445" hosting pod "coredns-5dd5756b68-f8shl" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-434445" has status "Ready":"False"
	I0116 03:44:39.854864  507889 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-pmx8n" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:39.888659  507889 pod_ready.go:97] node "default-k8s-diff-port-434445" hosting pod "coredns-5dd5756b68-pmx8n" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-434445" has status "Ready":"False"
	I0116 03:44:39.888703  507889 pod_ready.go:81] duration metric: took 33.827201ms waiting for pod "coredns-5dd5756b68-pmx8n" in "kube-system" namespace to be "Ready" ...
	E0116 03:44:39.888718  507889 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-434445" hosting pod "coredns-5dd5756b68-pmx8n" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-434445" has status "Ready":"False"
	I0116 03:44:39.888728  507889 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-434445" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:39.897638  507889 pod_ready.go:97] node "default-k8s-diff-port-434445" hosting pod "etcd-default-k8s-diff-port-434445" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-434445" has status "Ready":"False"
	I0116 03:44:39.897674  507889 pod_ready.go:81] duration metric: took 8.927103ms waiting for pod "etcd-default-k8s-diff-port-434445" in "kube-system" namespace to be "Ready" ...
	E0116 03:44:39.897693  507889 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-434445" hosting pod "etcd-default-k8s-diff-port-434445" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-434445" has status "Ready":"False"
	I0116 03:44:39.897701  507889 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-434445" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:39.919418  507889 pod_ready.go:97] node "default-k8s-diff-port-434445" hosting pod "kube-apiserver-default-k8s-diff-port-434445" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-434445" has status "Ready":"False"
	I0116 03:44:39.919465  507889 pod_ready.go:81] duration metric: took 21.753159ms waiting for pod "kube-apiserver-default-k8s-diff-port-434445" in "kube-system" namespace to be "Ready" ...
	E0116 03:44:39.919495  507889 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-434445" hosting pod "kube-apiserver-default-k8s-diff-port-434445" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-434445" has status "Ready":"False"
	I0116 03:44:39.919505  507889 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-434445" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:40.203370  507889 pod_ready.go:97] node "default-k8s-diff-port-434445" hosting pod "kube-controller-manager-default-k8s-diff-port-434445" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-434445" has status "Ready":"False"
	I0116 03:44:40.203411  507889 pod_ready.go:81] duration metric: took 283.893646ms waiting for pod "kube-controller-manager-default-k8s-diff-port-434445" in "kube-system" namespace to be "Ready" ...
	E0116 03:44:40.203428  507889 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-434445" hosting pod "kube-controller-manager-default-k8s-diff-port-434445" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-434445" has status "Ready":"False"
	I0116 03:44:40.203440  507889 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-dcbqg" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:41.417889  507889 pod_ready.go:97] node "default-k8s-diff-port-434445" hosting pod "kube-proxy-dcbqg" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-434445" has status "Ready":"False"
	I0116 03:44:41.418011  507889 pod_ready.go:81] duration metric: took 1.214559235s waiting for pod "kube-proxy-dcbqg" in "kube-system" namespace to be "Ready" ...
	E0116 03:44:41.418033  507889 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-434445" hosting pod "kube-proxy-dcbqg" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-434445" has status "Ready":"False"
	I0116 03:44:41.418043  507889 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-434445" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:41.425177  507889 pod_ready.go:97] node "default-k8s-diff-port-434445" hosting pod "kube-scheduler-default-k8s-diff-port-434445" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-434445" has status "Ready":"False"
	I0116 03:44:41.425208  507889 pod_ready.go:81] duration metric: took 7.15251ms waiting for pod "kube-scheduler-default-k8s-diff-port-434445" in "kube-system" namespace to be "Ready" ...
	E0116 03:44:41.425220  507889 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-434445" hosting pod "kube-scheduler-default-k8s-diff-port-434445" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-434445" has status "Ready":"False"
	I0116 03:44:41.425226  507889 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:41.431059  507889 pod_ready.go:97] node "default-k8s-diff-port-434445" hosting pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-434445" has status "Ready":"False"
	I0116 03:44:41.431103  507889 pod_ready.go:81] duration metric: took 5.869165ms waiting for pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace to be "Ready" ...
	E0116 03:44:41.431115  507889 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-434445" hosting pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-434445" has status "Ready":"False"
	I0116 03:44:41.431122  507889 pod_ready.go:38] duration metric: took 1.614582832s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 03:44:41.431139  507889 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0116 03:44:41.445099  507889 ops.go:34] apiserver oom_adj: -16
	I0116 03:44:41.445129  507889 kubeadm.go:640] restartCluster took 21.83447374s
	I0116 03:44:41.445141  507889 kubeadm.go:406] StartCluster complete in 21.896543184s
	I0116 03:44:41.445168  507889 settings.go:142] acquiring lock: {Name:mk3ded99c06d285b174e76622bb0741c8dd2d8fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:44:41.445265  507889 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17965-468241/kubeconfig
	I0116 03:44:41.447590  507889 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17965-468241/kubeconfig: {Name:mkac3c63bbcba9b4fa1bea3480797757d703fb9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:44:41.544520  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0116 03:44:41.544743  507889 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0116 03:44:41.544842  507889 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-434445"
	I0116 03:44:41.544858  507889 config.go:182] Loaded profile config "default-k8s-diff-port-434445": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 03:44:41.544875  507889 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-434445"
	I0116 03:44:41.544891  507889 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-434445"
	W0116 03:44:41.544899  507889 addons.go:243] addon metrics-server should already be in state true
	I0116 03:44:41.544865  507889 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-434445"
	W0116 03:44:41.544915  507889 addons.go:243] addon storage-provisioner should already be in state true
	I0116 03:44:41.544971  507889 host.go:66] Checking if "default-k8s-diff-port-434445" exists ...
	I0116 03:44:41.544973  507889 host.go:66] Checking if "default-k8s-diff-port-434445" exists ...
	I0116 03:44:41.544862  507889 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-434445"
	I0116 03:44:41.545107  507889 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-434445"
	I0116 03:44:41.545473  507889 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:44:41.545479  507889 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:44:41.545505  507889 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:44:41.545673  507889 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:44:41.562983  507889 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44387
	I0116 03:44:41.562984  507889 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35715
	I0116 03:44:41.563677  507889 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:44:41.563684  507889 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:44:41.564352  507889 main.go:141] libmachine: Using API Version  1
	I0116 03:44:41.564382  507889 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:44:41.564540  507889 main.go:141] libmachine: Using API Version  1
	I0116 03:44:41.564569  507889 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:44:41.564753  507889 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:44:41.564937  507889 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:44:41.565113  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetState
	I0116 03:44:41.565350  507889 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:44:41.565418  507889 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:44:41.569050  507889 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-434445"
	W0116 03:44:41.569091  507889 addons.go:243] addon default-storageclass should already be in state true
	I0116 03:44:41.569125  507889 host.go:66] Checking if "default-k8s-diff-port-434445" exists ...
	I0116 03:44:41.569554  507889 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:44:41.569613  507889 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:44:41.584107  507889 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33349
	I0116 03:44:41.584756  507889 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:44:41.585422  507889 main.go:141] libmachine: Using API Version  1
	I0116 03:44:41.585457  507889 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:44:41.585634  507889 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42645
	I0116 03:44:41.585856  507889 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:44:41.586123  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetState
	I0116 03:44:41.586162  507889 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:44:41.586636  507889 main.go:141] libmachine: Using API Version  1
	I0116 03:44:41.586663  507889 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:44:41.587080  507889 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:44:41.587688  507889 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:44:41.587743  507889 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:44:41.588214  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .DriverName
	I0116 03:44:41.606456  507889 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40289
	I0116 03:44:41.644090  507889 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:44:41.819945  507889 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 03:44:41.929214  507889 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:44:41.929680  507889 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:44:42.246642  507889 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0116 03:44:42.246665  507889 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0116 03:44:42.246696  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHHostname
	I0116 03:44:42.247294  507889 main.go:141] libmachine: Using API Version  1
	I0116 03:44:42.247332  507889 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:44:42.247740  507889 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:44:42.247987  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetState
	I0116 03:44:42.250254  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .DriverName
	I0116 03:44:42.250570  507889 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0116 03:44:42.250588  507889 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0116 03:44:42.250609  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHHostname
	I0116 03:44:42.251130  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:42.251863  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ea:d5", ip: ""} in network mk-default-k8s-diff-port-434445: {Iface:virbr1 ExpiryTime:2024-01-16 04:44:01 +0000 UTC Type:0 Mac:52:54:00:78:ea:d5 Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:default-k8s-diff-port-434445 Clientid:01:52:54:00:78:ea:d5}
	I0116 03:44:42.251896  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined IP address 192.168.50.236 and MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:42.252245  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHPort
	I0116 03:44:42.252473  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHKeyPath
	I0116 03:44:42.252680  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHUsername
	I0116 03:44:42.252842  507889 sshutil.go:53] new ssh client: &{IP:192.168.50.236 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/default-k8s-diff-port-434445/id_rsa Username:docker}
	I0116 03:44:42.254224  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:42.254837  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ea:d5", ip: ""} in network mk-default-k8s-diff-port-434445: {Iface:virbr1 ExpiryTime:2024-01-16 04:44:01 +0000 UTC Type:0 Mac:52:54:00:78:ea:d5 Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:default-k8s-diff-port-434445 Clientid:01:52:54:00:78:ea:d5}
	I0116 03:44:42.254872  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined IP address 192.168.50.236 and MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:42.255050  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHPort
	I0116 03:44:42.255240  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHKeyPath
	I0116 03:44:42.255422  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHUsername
	I0116 03:44:42.255585  507889 sshutil.go:53] new ssh client: &{IP:192.168.50.236 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/default-k8s-diff-port-434445/id_rsa Username:docker}
	I0116 03:44:42.264367  507889 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36555
	I0116 03:44:42.264832  507889 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:44:42.265322  507889 main.go:141] libmachine: Using API Version  1
	I0116 03:44:42.265352  507889 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:44:42.265700  507889 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:44:42.266266  507889 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:44:42.266306  507889 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:44:42.281852  507889 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32929
	I0116 03:44:42.282351  507889 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:44:42.282914  507889 main.go:141] libmachine: Using API Version  1
	I0116 03:44:42.282944  507889 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:44:42.283363  507889 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:44:42.283599  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetState
	I0116 03:44:42.285584  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .DriverName
	I0116 03:44:42.395709  507889 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0116 03:44:42.398672  507889 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0116 03:44:42.493544  507889 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0116 03:44:42.531626  507889 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0116 03:44:42.531683  507889 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0116 03:44:42.531717  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHHostname
	I0116 03:44:42.535980  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:42.536575  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ea:d5", ip: ""} in network mk-default-k8s-diff-port-434445: {Iface:virbr1 ExpiryTime:2024-01-16 04:44:01 +0000 UTC Type:0 Mac:52:54:00:78:ea:d5 Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:default-k8s-diff-port-434445 Clientid:01:52:54:00:78:ea:d5}
	I0116 03:44:42.536604  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined IP address 192.168.50.236 and MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:42.537018  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHPort
	I0116 03:44:42.537286  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHKeyPath
	I0116 03:44:42.537510  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHUsername
	I0116 03:44:42.537850  507889 sshutil.go:53] new ssh client: &{IP:192.168.50.236 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/default-k8s-diff-port-434445/id_rsa Username:docker}
	I0116 03:44:42.545910  507889 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.001352094s)
	I0116 03:44:42.545983  507889 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0116 03:44:42.713693  507889 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0116 03:44:42.713718  507889 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0116 03:44:42.752674  507889 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0116 03:44:42.752717  507889 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0116 03:44:42.790178  507889 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0116 03:44:42.790214  507889 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0116 03:44:42.825256  507889 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0116 03:44:43.010741  507889 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-434445" context rescaled to 1 replicas
	I0116 03:44:43.010801  507889 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.236 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0116 03:44:43.014031  507889 out.go:177] * Verifying Kubernetes components...
	I0116 03:44:43.016143  507889 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 03:44:44.415462  507889 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.921726194s)
	I0116 03:44:44.415532  507889 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.921908068s)
	I0116 03:44:44.415547  507889 main.go:141] libmachine: Making call to close driver server
	I0116 03:44:44.415631  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .Close
	I0116 03:44:44.415579  507889 main.go:141] libmachine: Making call to close driver server
	I0116 03:44:44.415854  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .Close
	I0116 03:44:44.416266  507889 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:44:44.416376  507889 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:44:44.416393  507889 main.go:141] libmachine: Making call to close driver server
	I0116 03:44:44.416424  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .Close
	I0116 03:44:44.416310  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | Closing plugin on server side
	I0116 03:44:44.416310  507889 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:44:44.416595  507889 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:44:44.416658  507889 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:44:44.416671  507889 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:44:44.416977  507889 main.go:141] libmachine: Making call to close driver server
	I0116 03:44:44.417014  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .Close
	I0116 03:44:44.416332  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | Closing plugin on server side
	I0116 03:44:44.417305  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | Closing plugin on server side
	I0116 03:44:44.417358  507889 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:44:44.417375  507889 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:44:44.450870  507889 main.go:141] libmachine: Making call to close driver server
	I0116 03:44:44.450908  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .Close
	I0116 03:44:44.451327  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | Closing plugin on server side
	I0116 03:44:44.451367  507889 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:44:44.451378  507889 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:44:44.496654  507889 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.671338305s)
	I0116 03:44:44.496732  507889 main.go:141] libmachine: Making call to close driver server
	I0116 03:44:44.496744  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .Close
	I0116 03:44:44.496678  507889 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.480503621s)
	I0116 03:44:44.496845  507889 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-434445" to be "Ready" ...
	I0116 03:44:44.497092  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | Closing plugin on server side
	I0116 03:44:44.497088  507889 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:44:44.497166  507889 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:44:44.497188  507889 main.go:141] libmachine: Making call to close driver server
	I0116 03:44:44.497198  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .Close
	I0116 03:44:44.497445  507889 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:44:44.497489  507889 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:44:44.497499  507889 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-434445"
	I0116 03:44:44.497502  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | Closing plugin on server side
	I0116 03:44:44.500234  507889 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0116 03:44:42.355473  507510 retry.go:31] will retry after 6.423018357s: kubelet not initialised
	I0116 03:44:42.543045  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:44:44.974520  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:44:41.280410  507257 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/files/etc/ssl/certs/4754782.pem --> /usr/share/ca-certificates/4754782.pem (1708 bytes)
	I0116 03:44:41.488388  507257 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0116 03:44:41.515741  507257 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/certs/475478.pem --> /usr/share/ca-certificates/475478.pem (1338 bytes)
	I0116 03:44:41.541744  507257 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0116 03:44:41.564056  507257 ssh_runner.go:195] Run: openssl version
	I0116 03:44:41.571197  507257 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0116 03:44:41.586430  507257 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0116 03:44:41.592334  507257 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 16 02:35 /usr/share/ca-certificates/minikubeCA.pem
	I0116 03:44:41.592405  507257 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0116 03:44:41.599013  507257 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0116 03:44:41.612793  507257 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/475478.pem && ln -fs /usr/share/ca-certificates/475478.pem /etc/ssl/certs/475478.pem"
	I0116 03:44:41.624554  507257 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/475478.pem
	I0116 03:44:41.629558  507257 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 16 02:44 /usr/share/ca-certificates/475478.pem
	I0116 03:44:41.629643  507257 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/475478.pem
	I0116 03:44:41.635518  507257 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/475478.pem /etc/ssl/certs/51391683.0"
	I0116 03:44:41.649567  507257 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4754782.pem && ln -fs /usr/share/ca-certificates/4754782.pem /etc/ssl/certs/4754782.pem"
	I0116 03:44:41.662276  507257 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4754782.pem
	I0116 03:44:41.667618  507257 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 16 02:44 /usr/share/ca-certificates/4754782.pem
	I0116 03:44:41.667699  507257 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4754782.pem
	I0116 03:44:41.678158  507257 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4754782.pem /etc/ssl/certs/3ec20f2e.0"
	I0116 03:44:41.692147  507257 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0116 03:44:41.698226  507257 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0116 03:44:41.706563  507257 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0116 03:44:41.713387  507257 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0116 03:44:41.721243  507257 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0116 03:44:41.728346  507257 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0116 03:44:41.735446  507257 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
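[Editor's note: the preceding lines install CA certificates by hashing them with `openssl x509 -hash` and symlinking them into /etc/ssl/certs/<hash>.0, then verify each control-plane certificate with `openssl x509 -checkend 86400` (fail if it expires within 24 hours). A minimal sketch of that expiry check in Go, assuming a hypothetical helper name; it mirrors the -checkend semantics but is not minikube's actual code.]

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// certExpiresWithin reports whether the PEM certificate at path expires within
// the given window, mirroring `openssl x509 -checkend <seconds>`.
func certExpiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	expiring, err := certExpiresWithin("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", expiring)
}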
	I0116 03:44:41.743670  507257 kubeadm.go:404] StartCluster: {Name:embed-certs-615980 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-615980 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.159 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 03:44:41.743786  507257 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0116 03:44:41.743860  507257 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0116 03:44:41.799605  507257 cri.go:89] found id: ""
	I0116 03:44:41.799700  507257 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0116 03:44:41.812356  507257 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0116 03:44:41.812388  507257 kubeadm.go:636] restartCluster start
	I0116 03:44:41.812457  507257 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0116 03:44:41.823906  507257 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:41.825131  507257 kubeconfig.go:92] found "embed-certs-615980" server: "https://192.168.72.159:8443"
	I0116 03:44:41.827484  507257 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0116 03:44:41.838289  507257 api_server.go:166] Checking apiserver status ...
	I0116 03:44:41.838386  507257 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:41.852927  507257 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:42.338430  507257 api_server.go:166] Checking apiserver status ...
	I0116 03:44:42.338548  507257 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:42.353029  507257 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:42.838419  507257 api_server.go:166] Checking apiserver status ...
	I0116 03:44:42.838526  507257 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:42.854254  507257 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:43.338802  507257 api_server.go:166] Checking apiserver status ...
	I0116 03:44:43.338934  507257 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:43.356427  507257 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:43.839009  507257 api_server.go:166] Checking apiserver status ...
	I0116 03:44:43.839103  507257 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:43.853265  507257 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:44.338711  507257 api_server.go:166] Checking apiserver status ...
	I0116 03:44:44.338803  507257 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:44.353364  507257 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:44.838956  507257 api_server.go:166] Checking apiserver status ...
	I0116 03:44:44.839070  507257 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:44.851711  507257 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:45.339282  507257 api_server.go:166] Checking apiserver status ...
	I0116 03:44:45.339397  507257 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:45.354275  507257 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:45.838803  507257 api_server.go:166] Checking apiserver status ...
	I0116 03:44:45.838899  507257 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:45.853557  507257 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
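[Editor's note: the repeated "Checking apiserver status ..." entries above are a retry loop — `sudo pgrep -xnf kube-apiserver.*minikube.*` is rerun roughly every 500ms and exits non-zero until an apiserver process appears. A minimal sketch of that polling pattern, assuming a hypothetical helper and timeout; the interval and pattern are illustrative, not minikube's exact values.]

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForAPIServerPID polls pgrep until a kube-apiserver process shows up or
// the deadline passes, mirroring the retry loop in the log above.
func waitForAPIServerPID(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err == nil {
			return strings.TrimSpace(string(out)), nil // PID found
		}
		time.Sleep(500 * time.Millisecond) // pgrep exits non-zero while nothing matches
	}
	return "", fmt.Errorf("kube-apiserver did not appear within %s", timeout)
}

func main() {
	pid, err := waitForAPIServerPID(30 * time.Second)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("kube-apiserver pid:", pid)
}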
	I0116 03:44:44.501958  507889 addons.go:505] enable addons completed in 2.957229306s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0116 03:44:46.502807  507889 node_ready.go:58] node "default-k8s-diff-port-434445" has status "Ready":"False"
	I0116 03:44:48.786485  507510 retry.go:31] will retry after 18.441149821s: kubelet not initialised
	I0116 03:44:46.975660  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:44:48.981964  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:44:46.339198  507257 api_server.go:166] Checking apiserver status ...
	I0116 03:44:46.339328  507257 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:46.356092  507257 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:46.839356  507257 api_server.go:166] Checking apiserver status ...
	I0116 03:44:46.839461  507257 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:46.857070  507257 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:47.338405  507257 api_server.go:166] Checking apiserver status ...
	I0116 03:44:47.338546  507257 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:47.354976  507257 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:47.839369  507257 api_server.go:166] Checking apiserver status ...
	I0116 03:44:47.839468  507257 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:47.854465  507257 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:48.339102  507257 api_server.go:166] Checking apiserver status ...
	I0116 03:44:48.339217  507257 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:48.352361  507257 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:48.838853  507257 api_server.go:166] Checking apiserver status ...
	I0116 03:44:48.838968  507257 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:48.853271  507257 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:49.338643  507257 api_server.go:166] Checking apiserver status ...
	I0116 03:44:49.338751  507257 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:49.353674  507257 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:49.839214  507257 api_server.go:166] Checking apiserver status ...
	I0116 03:44:49.839309  507257 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:49.852699  507257 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:50.339060  507257 api_server.go:166] Checking apiserver status ...
	I0116 03:44:50.339186  507257 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:50.353143  507257 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:50.838646  507257 api_server.go:166] Checking apiserver status ...
	I0116 03:44:50.838782  507257 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:50.852767  507257 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:48.005726  507889 node_ready.go:49] node "default-k8s-diff-port-434445" has status "Ready":"True"
	I0116 03:44:48.005760  507889 node_ready.go:38] duration metric: took 3.508890685s waiting for node "default-k8s-diff-port-434445" to be "Ready" ...
	I0116 03:44:48.005775  507889 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 03:44:48.015385  507889 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-f8shl" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:48.027358  507889 pod_ready.go:92] pod "coredns-5dd5756b68-f8shl" in "kube-system" namespace has status "Ready":"True"
	I0116 03:44:48.027383  507889 pod_ready.go:81] duration metric: took 11.966322ms waiting for pod "coredns-5dd5756b68-f8shl" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:48.027397  507889 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-pmx8n" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:48.034156  507889 pod_ready.go:92] pod "coredns-5dd5756b68-pmx8n" in "kube-system" namespace has status "Ready":"True"
	I0116 03:44:48.034179  507889 pod_ready.go:81] duration metric: took 6.775784ms waiting for pod "coredns-5dd5756b68-pmx8n" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:48.034188  507889 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-434445" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:48.039933  507889 pod_ready.go:92] pod "etcd-default-k8s-diff-port-434445" in "kube-system" namespace has status "Ready":"True"
	I0116 03:44:48.039954  507889 pod_ready.go:81] duration metric: took 5.758946ms waiting for pod "etcd-default-k8s-diff-port-434445" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:48.039964  507889 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-434445" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:48.045351  507889 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-434445" in "kube-system" namespace has status "Ready":"True"
	I0116 03:44:48.045376  507889 pod_ready.go:81] duration metric: took 5.405684ms waiting for pod "kube-apiserver-default-k8s-diff-port-434445" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:48.045386  507889 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-434445" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:48.413479  507889 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-434445" in "kube-system" namespace has status "Ready":"True"
	I0116 03:44:48.413508  507889 pod_ready.go:81] duration metric: took 368.114361ms waiting for pod "kube-controller-manager-default-k8s-diff-port-434445" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:48.413522  507889 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-dcbqg" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:48.808095  507889 pod_ready.go:92] pod "kube-proxy-dcbqg" in "kube-system" namespace has status "Ready":"True"
	I0116 03:44:48.808132  507889 pod_ready.go:81] duration metric: took 394.600854ms waiting for pod "kube-proxy-dcbqg" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:48.808147  507889 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-434445" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:50.817248  507889 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-434445" in "kube-system" namespace has status "Ready":"False"
	I0116 03:44:51.474904  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:44:53.475529  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:44:55.475807  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:44:51.339105  507257 api_server.go:166] Checking apiserver status ...
	I0116 03:44:51.339225  507257 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:51.352821  507257 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:51.838856  507257 api_server.go:166] Checking apiserver status ...
	I0116 03:44:51.838985  507257 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:51.852211  507257 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:51.852258  507257 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0116 03:44:51.852271  507257 kubeadm.go:1135] stopping kube-system containers ...
	I0116 03:44:51.852289  507257 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0116 03:44:51.852360  507257 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0116 03:44:51.897049  507257 cri.go:89] found id: ""
	I0116 03:44:51.897139  507257 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0116 03:44:51.915124  507257 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0116 03:44:51.926221  507257 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0116 03:44:51.926311  507257 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0116 03:44:51.938314  507257 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0116 03:44:51.938358  507257 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:44:52.077173  507257 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:44:52.733999  507257 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:44:52.971172  507257 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:44:53.063705  507257 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:44:53.200256  507257 api_server.go:52] waiting for apiserver process to appear ...
	I0116 03:44:53.200364  507257 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:44:53.701337  507257 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:44:54.201266  507257 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:44:54.700485  507257 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:44:55.200720  507257 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:44:55.701348  507257 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:44:55.725792  507257 api_server.go:72] duration metric: took 2.52553608s to wait for apiserver process to appear ...
	I0116 03:44:55.725826  507257 api_server.go:88] waiting for apiserver healthz status ...
	I0116 03:44:55.725851  507257 api_server.go:253] Checking apiserver healthz at https://192.168.72.159:8443/healthz ...
	I0116 03:44:52.317689  507889 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-434445" in "kube-system" namespace has status "Ready":"True"
	I0116 03:44:52.317718  507889 pod_ready.go:81] duration metric: took 3.509561404s waiting for pod "kube-scheduler-default-k8s-diff-port-434445" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:52.317731  507889 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:54.326412  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:44:56.327634  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:44:57.974017  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:44:59.977499  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:44:59.850423  507257 api_server.go:279] https://192.168.72.159:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0116 03:44:59.850456  507257 api_server.go:103] status: https://192.168.72.159:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0116 03:44:59.850471  507257 api_server.go:253] Checking apiserver healthz at https://192.168.72.159:8443/healthz ...
	I0116 03:44:59.998251  507257 api_server.go:279] https://192.168.72.159:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0116 03:44:59.998310  507257 api_server.go:103] status: https://192.168.72.159:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0116 03:45:00.226594  507257 api_server.go:253] Checking apiserver healthz at https://192.168.72.159:8443/healthz ...
	I0116 03:45:00.233826  507257 api_server.go:279] https://192.168.72.159:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0116 03:45:00.233876  507257 api_server.go:103] status: https://192.168.72.159:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0116 03:45:00.726919  507257 api_server.go:253] Checking apiserver healthz at https://192.168.72.159:8443/healthz ...
	I0116 03:45:00.732711  507257 api_server.go:279] https://192.168.72.159:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0116 03:45:00.732748  507257 api_server.go:103] status: https://192.168.72.159:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0116 03:45:01.226693  507257 api_server.go:253] Checking apiserver healthz at https://192.168.72.159:8443/healthz ...
	I0116 03:45:01.232420  507257 api_server.go:279] https://192.168.72.159:8443/healthz returned 200:
	ok
	I0116 03:45:01.242029  507257 api_server.go:141] control plane version: v1.28.4
	I0116 03:45:01.242078  507257 api_server.go:131] duration metric: took 5.516243097s to wait for apiserver health ...
	I0116 03:45:01.242092  507257 cni.go:84] Creating CNI manager for ""
	I0116 03:45:01.242101  507257 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 03:45:01.244395  507257 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0116 03:45:01.246155  507257 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
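[Editor's note: the healthz sequence above shows the apiserver coming up — anonymous requests to /healthz first return 403, then 500 with individual post-start hooks (RBAC bootstrap, priority classes, bootstrap-controller) still failing, and finally 200 "ok". A minimal sketch of that polling loop, assuming the endpoint URL from the log; TLS verification is skipped only because the apiserver serves a cluster-internal certificate, and the retry interval is illustrative.]

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// pollHealthz hits the apiserver /healthz endpoint until it returns 200 "ok"
// or the timeout expires. 403/500 responses like the ones in the log are
// expected while post-start hooks are still completing.
func pollHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("healthz: %s\n", string(body))
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver not healthy within %s", timeout)
}

func main() {
	if err := pollHealthz("https://192.168.72.159:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}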
	I0116 03:44:58.827760  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:01.327190  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:02.475858  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:04.974991  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:01.270205  507257 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0116 03:45:01.350402  507257 system_pods.go:43] waiting for kube-system pods to appear ...
	I0116 03:45:01.384475  507257 system_pods.go:59] 8 kube-system pods found
	I0116 03:45:01.384536  507257 system_pods.go:61] "coredns-5dd5756b68-ddjkl" [fe342d2a-7d12-4b37-be29-c0d77b920964] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0116 03:45:01.384549  507257 system_pods.go:61] "etcd-embed-certs-615980" [7b7af2e1-b3bb-4c47-862b-838167453939] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0116 03:45:01.384562  507257 system_pods.go:61] "kube-apiserver-embed-certs-615980" [bb883c31-8391-467f-9b4a-affb05a56d49] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0116 03:45:01.384571  507257 system_pods.go:61] "kube-controller-manager-embed-certs-615980" [74f7c5e3-818c-4e15-b693-d4f81308bf9d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0116 03:45:01.384584  507257 system_pods.go:61] "kube-proxy-6jpr7" [e62c9202-8b4f-4fe7-8aa4-b931afd4b028] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0116 03:45:01.384602  507257 system_pods.go:61] "kube-scheduler-embed-certs-615980" [f03d5c9c-af6a-437b-92bb-7c5a46259bbd] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0116 03:45:01.384618  507257 system_pods.go:61] "metrics-server-57f55c9bc5-48gnw" [1fcb32b6-f985-428d-8f02-1198d704d8c7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:45:01.384632  507257 system_pods.go:61] "storage-provisioner" [6264adaa-89e8-4f0d-9394-d7325338a2f5] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0116 03:45:01.384642  507257 system_pods.go:74] duration metric: took 34.114711ms to wait for pod list to return data ...
	I0116 03:45:01.384656  507257 node_conditions.go:102] verifying NodePressure condition ...
	I0116 03:45:01.392555  507257 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 03:45:01.392597  507257 node_conditions.go:123] node cpu capacity is 2
	I0116 03:45:01.392614  507257 node_conditions.go:105] duration metric: took 7.946538ms to run NodePressure ...
	I0116 03:45:01.392644  507257 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:45:01.788178  507257 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0116 03:45:01.795913  507257 kubeadm.go:787] kubelet initialised
	I0116 03:45:01.795945  507257 kubeadm.go:788] duration metric: took 7.737644ms waiting for restarted kubelet to initialise ...
	I0116 03:45:01.795955  507257 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 03:45:01.806433  507257 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-ddjkl" in "kube-system" namespace to be "Ready" ...
	I0116 03:45:03.815645  507257 pod_ready.go:102] pod "coredns-5dd5756b68-ddjkl" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:05.821193  507257 pod_ready.go:92] pod "coredns-5dd5756b68-ddjkl" in "kube-system" namespace has status "Ready":"True"
	I0116 03:45:05.821231  507257 pod_ready.go:81] duration metric: took 4.014760393s waiting for pod "coredns-5dd5756b68-ddjkl" in "kube-system" namespace to be "Ready" ...
	I0116 03:45:05.821245  507257 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-615980" in "kube-system" namespace to be "Ready" ...
	I0116 03:45:03.825695  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:05.826742  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:07.234109  507510 kubeadm.go:787] kubelet initialised
	I0116 03:45:07.234137  507510 kubeadm.go:788] duration metric: took 45.657540747s waiting for restarted kubelet to initialise ...
	I0116 03:45:07.234145  507510 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 03:45:07.239858  507510 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-7q4wc" in "kube-system" namespace to be "Ready" ...
	I0116 03:45:07.247210  507510 pod_ready.go:92] pod "coredns-5644d7b6d9-7q4wc" in "kube-system" namespace has status "Ready":"True"
	I0116 03:45:07.247237  507510 pod_ready.go:81] duration metric: took 7.336988ms waiting for pod "coredns-5644d7b6d9-7q4wc" in "kube-system" namespace to be "Ready" ...
	I0116 03:45:07.247249  507510 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-q2gxq" in "kube-system" namespace to be "Ready" ...
	I0116 03:45:07.252865  507510 pod_ready.go:92] pod "coredns-5644d7b6d9-q2gxq" in "kube-system" namespace has status "Ready":"True"
	I0116 03:45:07.252900  507510 pod_ready.go:81] duration metric: took 5.642204ms waiting for pod "coredns-5644d7b6d9-q2gxq" in "kube-system" namespace to be "Ready" ...
	I0116 03:45:07.252925  507510 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-696770" in "kube-system" namespace to be "Ready" ...
	I0116 03:45:07.259169  507510 pod_ready.go:92] pod "etcd-old-k8s-version-696770" in "kube-system" namespace has status "Ready":"True"
	I0116 03:45:07.259193  507510 pod_ready.go:81] duration metric: took 6.260142ms waiting for pod "etcd-old-k8s-version-696770" in "kube-system" namespace to be "Ready" ...
	I0116 03:45:07.259202  507510 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-696770" in "kube-system" namespace to be "Ready" ...
	I0116 03:45:07.264591  507510 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-696770" in "kube-system" namespace has status "Ready":"True"
	I0116 03:45:07.264622  507510 pod_ready.go:81] duration metric: took 5.411866ms waiting for pod "kube-apiserver-old-k8s-version-696770" in "kube-system" namespace to be "Ready" ...
	I0116 03:45:07.264635  507510 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-696770" in "kube-system" namespace to be "Ready" ...
	I0116 03:45:07.632057  507510 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-696770" in "kube-system" namespace has status "Ready":"True"
	I0116 03:45:07.632093  507510 pod_ready.go:81] duration metric: took 367.447202ms waiting for pod "kube-controller-manager-old-k8s-version-696770" in "kube-system" namespace to be "Ready" ...
	I0116 03:45:07.632110  507510 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-9pfdj" in "kube-system" namespace to be "Ready" ...
	I0116 03:45:08.033002  507510 pod_ready.go:92] pod "kube-proxy-9pfdj" in "kube-system" namespace has status "Ready":"True"
	I0116 03:45:08.033028  507510 pod_ready.go:81] duration metric: took 400.910907ms waiting for pod "kube-proxy-9pfdj" in "kube-system" namespace to be "Ready" ...
	I0116 03:45:08.033038  507510 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-696770" in "kube-system" namespace to be "Ready" ...
	I0116 03:45:08.433134  507510 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-696770" in "kube-system" namespace has status "Ready":"True"
	I0116 03:45:08.433165  507510 pod_ready.go:81] duration metric: took 400.1203ms waiting for pod "kube-scheduler-old-k8s-version-696770" in "kube-system" namespace to be "Ready" ...
	I0116 03:45:08.433180  507510 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace to be "Ready" ...
	I0116 03:45:07.485372  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:09.979593  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:07.830703  507257 pod_ready.go:102] pod "etcd-embed-certs-615980" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:10.328466  507257 pod_ready.go:102] pod "etcd-embed-certs-615980" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:08.325925  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:10.825155  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:10.442598  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:12.941713  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:12.478975  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:14.480154  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:12.329199  507257 pod_ready.go:102] pod "etcd-embed-certs-615980" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:13.830177  507257 pod_ready.go:92] pod "etcd-embed-certs-615980" in "kube-system" namespace has status "Ready":"True"
	I0116 03:45:13.830207  507257 pod_ready.go:81] duration metric: took 8.008954008s waiting for pod "etcd-embed-certs-615980" in "kube-system" namespace to be "Ready" ...
	I0116 03:45:13.830217  507257 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-615980" in "kube-system" namespace to be "Ready" ...
	I0116 03:45:13.837420  507257 pod_ready.go:92] pod "kube-apiserver-embed-certs-615980" in "kube-system" namespace has status "Ready":"True"
	I0116 03:45:13.837448  507257 pod_ready.go:81] duration metric: took 7.22323ms waiting for pod "kube-apiserver-embed-certs-615980" in "kube-system" namespace to be "Ready" ...
	I0116 03:45:13.837461  507257 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-615980" in "kube-system" namespace to be "Ready" ...
	I0116 03:45:13.845996  507257 pod_ready.go:92] pod "kube-controller-manager-embed-certs-615980" in "kube-system" namespace has status "Ready":"True"
	I0116 03:45:13.846029  507257 pod_ready.go:81] duration metric: took 8.558317ms waiting for pod "kube-controller-manager-embed-certs-615980" in "kube-system" namespace to be "Ready" ...
	I0116 03:45:13.846040  507257 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-6jpr7" in "kube-system" namespace to be "Ready" ...
	I0116 03:45:13.852645  507257 pod_ready.go:92] pod "kube-proxy-6jpr7" in "kube-system" namespace has status "Ready":"True"
	I0116 03:45:13.852674  507257 pod_ready.go:81] duration metric: took 6.627181ms waiting for pod "kube-proxy-6jpr7" in "kube-system" namespace to be "Ready" ...
	I0116 03:45:13.852683  507257 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-615980" in "kube-system" namespace to be "Ready" ...
	I0116 03:45:13.858818  507257 pod_ready.go:92] pod "kube-scheduler-embed-certs-615980" in "kube-system" namespace has status "Ready":"True"
	I0116 03:45:13.858844  507257 pod_ready.go:81] duration metric: took 6.154319ms waiting for pod "kube-scheduler-embed-certs-615980" in "kube-system" namespace to be "Ready" ...
	I0116 03:45:13.858853  507257 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace to be "Ready" ...
	I0116 03:45:15.867133  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:12.826463  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:14.826507  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:14.942079  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:17.442566  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:16.976095  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:19.477899  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:17.868381  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:20.367064  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:17.326184  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:19.328194  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:19.942113  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:21.942853  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:24.441140  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:21.975337  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:24.474400  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:22.368008  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:24.866716  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:21.825428  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:23.825828  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:25.829356  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:26.441756  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:28.443869  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:26.475939  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:28.476308  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:26.866760  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:29.367575  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:28.326756  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:30.825813  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:30.942631  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:33.440480  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:30.975870  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:33.475828  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:31.866401  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:33.867719  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:33.325388  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:35.325485  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:35.939804  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:37.940883  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:35.974504  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:37.975857  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:39.977413  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:36.367513  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:38.865702  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:40.866834  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:37.325804  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:39.326635  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:40.440287  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:42.440838  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:44.441037  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:42.475940  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:44.981122  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:42.867673  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:45.368285  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:41.825982  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:43.826700  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:45.828002  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:46.443286  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:48.941625  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:47.474621  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:49.475149  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:47.867135  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:49.867865  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:48.326035  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:50.327538  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:50.943718  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:53.443986  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:51.977212  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:54.477161  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:52.368444  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:54.375089  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:52.826163  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:55.327160  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:55.940561  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:57.942988  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:56.975470  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:58.975829  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:56.867648  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:59.367479  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:57.826140  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:59.826286  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:00.440963  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:02.941202  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:00.979308  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:03.474099  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:05.478535  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:01.868806  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:04.368227  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:01.826702  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:04.325060  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:06.326882  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:05.441837  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:07.444944  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:07.975344  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:09.975486  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:06.868137  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:09.367752  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:08.329967  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:10.826182  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:09.940745  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:11.942989  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:14.441331  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:11.977171  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:13.977835  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:11.866817  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:13.867951  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:13.327232  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:15.826862  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:16.442525  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:18.442754  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:16.475367  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:18.475903  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:16.367830  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:18.368100  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:20.866302  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:18.326376  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:20.827236  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:20.940998  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:22.941332  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:20.980371  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:23.476451  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:22.868945  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:25.366857  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:23.326576  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:25.826000  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:25.442029  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:27.941061  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:25.974860  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:27.975178  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:29.978092  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:27.370097  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:29.869827  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:28.326735  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:30.826672  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:30.442579  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:32.941784  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:32.475984  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:34.973934  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:31.870772  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:34.367380  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:32.827910  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:34.828185  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:35.440418  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:37.441206  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:39.441254  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:36.974076  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:38.975169  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:36.867231  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:39.366005  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:37.327553  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:39.826218  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:41.941046  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:43.941530  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:40.976023  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:43.478194  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:41.367293  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:43.867097  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:45.867843  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:42.325426  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:44.325723  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:46.326155  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:46.441175  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:48.940677  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:45.974937  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:47.975141  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:50.474687  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:47.868006  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:49.868890  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:48.326634  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:50.326914  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:50.941220  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:53.440868  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:52.475138  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:54.475546  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:52.365917  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:54.366514  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:52.826279  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:55.324177  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:55.441130  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:57.943093  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:56.976380  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:59.478090  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:56.368894  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:58.868051  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:57.326296  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:59.326416  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:01.327894  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:00.440504  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:02.441176  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:04.442171  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:01.975498  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:03.978490  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:01.369736  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:03.871663  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:03.825943  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:05.828215  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:06.443721  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:08.940212  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:06.475354  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:08.975707  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:06.366468  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:08.366998  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:10.368019  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:08.326243  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:10.824873  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:10.942042  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:13.440495  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:11.475551  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:13.475904  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:12.867030  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:14.872409  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:12.826040  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:15.325658  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:15.941844  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:18.440574  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:15.975125  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:17.977326  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:20.474897  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:17.367390  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:19.369090  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:17.325860  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:19.829310  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:20.940407  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:22.941824  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:22.475218  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:24.477773  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:21.866953  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:23.867055  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:22.326660  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:24.327689  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:25.441214  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:27.442253  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:26.975120  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:29.477805  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:26.367295  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:28.867376  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:26.826666  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:29.327606  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:29.940650  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:31.941021  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:34.443144  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:31.978544  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:34.475301  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:31.367770  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:33.867084  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:35.870968  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:31.826565  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:34.326677  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:36.941363  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:38.942121  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:36.974797  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:38.975027  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:38.368025  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:40.866714  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:36.828347  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:39.327130  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:41.441555  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:43.442806  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:40.977172  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:43.476163  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:43.367966  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:45.867460  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:41.826087  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:43.826389  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:46.326497  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:45.941267  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:48.443875  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:45.974452  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:47.977610  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:50.475536  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:48.367053  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:50.368023  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:48.824924  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:50.825835  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:50.941125  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:52.941644  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:52.975726  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:55.476453  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:52.866871  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:55.367951  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:52.826166  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:54.826434  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:55.442084  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:57.442829  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:57.974382  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:59.974448  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:57.867742  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:00.366490  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:57.325608  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:59.825525  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:59.939515  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:01.941648  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:03.942290  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:01.975159  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:03.977002  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:02.366764  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:04.366818  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:01.831740  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:04.326341  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:06.440494  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:08.940336  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:06.475364  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:08.482783  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:06.367160  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:08.867294  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:06.825331  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:08.826594  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:11.324828  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:10.942696  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:13.441805  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:10.974798  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:12.975009  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:14.976154  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:11.366189  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:13.369852  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:15.867536  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:13.327353  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:15.825738  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:15.941304  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:17.942206  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:17.474204  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:19.475630  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:19.974269  507339 pod_ready.go:81] duration metric: took 4m0.007375913s waiting for pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace to be "Ready" ...
	E0116 03:48:19.974299  507339 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0116 03:48:19.974310  507339 pod_ready.go:38] duration metric: took 4m6.26032663s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 03:48:19.974365  507339 api_server.go:52] waiting for apiserver process to appear ...
	I0116 03:48:19.974431  507339 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0116 03:48:19.974529  507339 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0116 03:48:20.042853  507339 cri.go:89] found id: "de79f87bc28449077936386f603cc5df933f63455bc96cf6ff7e7c6e4bd32bc4"
	I0116 03:48:20.042886  507339 cri.go:89] found id: ""
	I0116 03:48:20.042896  507339 logs.go:284] 1 containers: [de79f87bc28449077936386f603cc5df933f63455bc96cf6ff7e7c6e4bd32bc4]
	I0116 03:48:20.042961  507339 ssh_runner.go:195] Run: which crictl
	I0116 03:48:20.049795  507339 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0116 03:48:20.049884  507339 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0116 03:48:20.092507  507339 cri.go:89] found id: "01aaf51cd40b9137f2395404bb15b7372927f81dc8a4e884600dc8cba9bbeb8e"
	I0116 03:48:20.092541  507339 cri.go:89] found id: ""
	I0116 03:48:20.092551  507339 logs.go:284] 1 containers: [01aaf51cd40b9137f2395404bb15b7372927f81dc8a4e884600dc8cba9bbeb8e]
	I0116 03:48:20.092619  507339 ssh_runner.go:195] Run: which crictl
	I0116 03:48:20.097091  507339 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0116 03:48:20.097176  507339 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0116 03:48:20.139182  507339 cri.go:89] found id: "c13ef036a1014a6bbc5c179a0889fb6ff615988713589da58240b7c637adf687"
	I0116 03:48:20.139218  507339 cri.go:89] found id: ""
	I0116 03:48:20.139229  507339 logs.go:284] 1 containers: [c13ef036a1014a6bbc5c179a0889fb6ff615988713589da58240b7c637adf687]
	I0116 03:48:20.139297  507339 ssh_runner.go:195] Run: which crictl
	I0116 03:48:20.145129  507339 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0116 03:48:20.145210  507339 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0116 03:48:20.191055  507339 cri.go:89] found id: "33381edd7dded1b5ed158738429b4a7f1f03a6171f2af0832bb5a9864c950725"
	I0116 03:48:20.191090  507339 cri.go:89] found id: ""
	I0116 03:48:20.191098  507339 logs.go:284] 1 containers: [33381edd7dded1b5ed158738429b4a7f1f03a6171f2af0832bb5a9864c950725]
	I0116 03:48:20.191161  507339 ssh_runner.go:195] Run: which crictl
	I0116 03:48:20.195688  507339 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0116 03:48:20.195765  507339 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0116 03:48:20.242718  507339 cri.go:89] found id: "eba2964f029ac61d9cfb59dc548ae05c832f9b23713f51924291fbfc5de985ed"
	I0116 03:48:20.242746  507339 cri.go:89] found id: ""
	I0116 03:48:20.242754  507339 logs.go:284] 1 containers: [eba2964f029ac61d9cfb59dc548ae05c832f9b23713f51924291fbfc5de985ed]
	I0116 03:48:20.242819  507339 ssh_runner.go:195] Run: which crictl
	I0116 03:48:20.247312  507339 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0116 03:48:20.247399  507339 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0116 03:48:20.287981  507339 cri.go:89] found id: "802d4c55aa04370ff079f09dfe4670f5dac1142ca6db880afa17be75f1d20a76"
	I0116 03:48:20.288009  507339 cri.go:89] found id: ""
	I0116 03:48:20.288018  507339 logs.go:284] 1 containers: [802d4c55aa04370ff079f09dfe4670f5dac1142ca6db880afa17be75f1d20a76]
	I0116 03:48:20.288097  507339 ssh_runner.go:195] Run: which crictl
	I0116 03:48:20.292370  507339 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0116 03:48:20.292449  507339 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0116 03:48:20.335778  507339 cri.go:89] found id: ""
	I0116 03:48:20.335816  507339 logs.go:284] 0 containers: []
	W0116 03:48:20.335828  507339 logs.go:286] No container was found matching "kindnet"
	I0116 03:48:20.335838  507339 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0116 03:48:20.335906  507339 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0116 03:48:20.381698  507339 cri.go:89] found id: "b7164c1b7732cf6d54642cb9ffc8d9f16696adc0e428c3fa16cf914420d73e1d"
	I0116 03:48:20.381722  507339 cri.go:89] found id: "59754e94eb3cf3fecfedb9077fbad2ecc2618c351fa9bfa3faed0d780fdd9ecb"
	I0116 03:48:20.381727  507339 cri.go:89] found id: ""
	I0116 03:48:20.381734  507339 logs.go:284] 2 containers: [b7164c1b7732cf6d54642cb9ffc8d9f16696adc0e428c3fa16cf914420d73e1d 59754e94eb3cf3fecfedb9077fbad2ecc2618c351fa9bfa3faed0d780fdd9ecb]
	I0116 03:48:20.381790  507339 ssh_runner.go:195] Run: which crictl
	I0116 03:48:20.386880  507339 ssh_runner.go:195] Run: which crictl
	I0116 03:48:20.391292  507339 logs.go:123] Gathering logs for describe nodes ...
	I0116 03:48:20.391324  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0116 03:48:20.528154  507339 logs.go:123] Gathering logs for storage-provisioner [b7164c1b7732cf6d54642cb9ffc8d9f16696adc0e428c3fa16cf914420d73e1d] ...
	I0116 03:48:20.528197  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b7164c1b7732cf6d54642cb9ffc8d9f16696adc0e428c3fa16cf914420d73e1d"
	I0116 03:48:20.586645  507339 logs.go:123] Gathering logs for CRI-O ...
	I0116 03:48:20.586680  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0116 03:48:18.367415  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:20.867678  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:18.325849  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:20.326141  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:20.442138  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:22.442180  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:21.096109  507339 logs.go:123] Gathering logs for kubelet ...
	I0116 03:48:21.096155  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0116 03:48:21.154531  507339 logs.go:123] Gathering logs for coredns [c13ef036a1014a6bbc5c179a0889fb6ff615988713589da58240b7c637adf687] ...
	I0116 03:48:21.154577  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c13ef036a1014a6bbc5c179a0889fb6ff615988713589da58240b7c637adf687"
	I0116 03:48:21.203708  507339 logs.go:123] Gathering logs for dmesg ...
	I0116 03:48:21.203760  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0116 03:48:21.219320  507339 logs.go:123] Gathering logs for etcd [01aaf51cd40b9137f2395404bb15b7372927f81dc8a4e884600dc8cba9bbeb8e] ...
	I0116 03:48:21.219362  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 01aaf51cd40b9137f2395404bb15b7372927f81dc8a4e884600dc8cba9bbeb8e"
	I0116 03:48:21.271759  507339 logs.go:123] Gathering logs for kube-proxy [eba2964f029ac61d9cfb59dc548ae05c832f9b23713f51924291fbfc5de985ed] ...
	I0116 03:48:21.271812  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eba2964f029ac61d9cfb59dc548ae05c832f9b23713f51924291fbfc5de985ed"
	I0116 03:48:21.316786  507339 logs.go:123] Gathering logs for kube-controller-manager [802d4c55aa04370ff079f09dfe4670f5dac1142ca6db880afa17be75f1d20a76] ...
	I0116 03:48:21.316825  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 802d4c55aa04370ff079f09dfe4670f5dac1142ca6db880afa17be75f1d20a76"
	I0116 03:48:21.383743  507339 logs.go:123] Gathering logs for storage-provisioner [59754e94eb3cf3fecfedb9077fbad2ecc2618c351fa9bfa3faed0d780fdd9ecb] ...
	I0116 03:48:21.383783  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 59754e94eb3cf3fecfedb9077fbad2ecc2618c351fa9bfa3faed0d780fdd9ecb"
	I0116 03:48:21.422893  507339 logs.go:123] Gathering logs for kube-apiserver [de79f87bc28449077936386f603cc5df933f63455bc96cf6ff7e7c6e4bd32bc4] ...
	I0116 03:48:21.422926  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de79f87bc28449077936386f603cc5df933f63455bc96cf6ff7e7c6e4bd32bc4"
	I0116 03:48:21.473295  507339 logs.go:123] Gathering logs for kube-scheduler [33381edd7dded1b5ed158738429b4a7f1f03a6171f2af0832bb5a9864c950725] ...
	I0116 03:48:21.473332  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33381edd7dded1b5ed158738429b4a7f1f03a6171f2af0832bb5a9864c950725"
	I0116 03:48:21.527066  507339 logs.go:123] Gathering logs for container status ...
	I0116 03:48:21.527110  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
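	(Editor's note: the repeated "Gathering logs for ..." lines above are minikube's diagnostic sweep — it resolves each control-plane container ID with crictl, tails that container's logs, and pulls kubelet and CRI-O output from journald. A minimal shell sketch of the same sweep run by hand inside the VM, reusing the kube-apiserver container ID already shown in this log and assuming it is still current:

	sudo crictl ps -a --quiet --name=kube-apiserver        # resolve the container ID, as logs.go does above
	sudo crictl logs --tail 400 de79f87bc28449077936386f603cc5df933f63455bc96cf6ff7e7c6e4bd32bc4
	sudo journalctl -u kubelet -n 400                       # kubelet logs come from journald
	sudo journalctl -u crio -n 400                          # as do CRI-O's
	)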
	I0116 03:48:24.085743  507339 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:48:24.105359  507339 api_server.go:72] duration metric: took 4m17.107229414s to wait for apiserver process to appear ...
	I0116 03:48:24.105395  507339 api_server.go:88] waiting for apiserver healthz status ...
	I0116 03:48:24.105450  507339 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0116 03:48:24.105567  507339 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0116 03:48:24.154626  507339 cri.go:89] found id: "de79f87bc28449077936386f603cc5df933f63455bc96cf6ff7e7c6e4bd32bc4"
	I0116 03:48:24.154659  507339 cri.go:89] found id: ""
	I0116 03:48:24.154668  507339 logs.go:284] 1 containers: [de79f87bc28449077936386f603cc5df933f63455bc96cf6ff7e7c6e4bd32bc4]
	I0116 03:48:24.154720  507339 ssh_runner.go:195] Run: which crictl
	I0116 03:48:24.159657  507339 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0116 03:48:24.159735  507339 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0116 03:48:24.202635  507339 cri.go:89] found id: "01aaf51cd40b9137f2395404bb15b7372927f81dc8a4e884600dc8cba9bbeb8e"
	I0116 03:48:24.202663  507339 cri.go:89] found id: ""
	I0116 03:48:24.202671  507339 logs.go:284] 1 containers: [01aaf51cd40b9137f2395404bb15b7372927f81dc8a4e884600dc8cba9bbeb8e]
	I0116 03:48:24.202725  507339 ssh_runner.go:195] Run: which crictl
	I0116 03:48:24.207503  507339 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0116 03:48:24.207578  507339 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0116 03:48:24.253893  507339 cri.go:89] found id: "c13ef036a1014a6bbc5c179a0889fb6ff615988713589da58240b7c637adf687"
	I0116 03:48:24.253934  507339 cri.go:89] found id: ""
	I0116 03:48:24.253945  507339 logs.go:284] 1 containers: [c13ef036a1014a6bbc5c179a0889fb6ff615988713589da58240b7c637adf687]
	I0116 03:48:24.254016  507339 ssh_runner.go:195] Run: which crictl
	I0116 03:48:24.258649  507339 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0116 03:48:24.258733  507339 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0116 03:48:24.306636  507339 cri.go:89] found id: "33381edd7dded1b5ed158738429b4a7f1f03a6171f2af0832bb5a9864c950725"
	I0116 03:48:24.306662  507339 cri.go:89] found id: ""
	I0116 03:48:24.306670  507339 logs.go:284] 1 containers: [33381edd7dded1b5ed158738429b4a7f1f03a6171f2af0832bb5a9864c950725]
	I0116 03:48:24.306721  507339 ssh_runner.go:195] Run: which crictl
	I0116 03:48:24.311270  507339 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0116 03:48:24.311357  507339 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0116 03:48:24.354635  507339 cri.go:89] found id: "eba2964f029ac61d9cfb59dc548ae05c832f9b23713f51924291fbfc5de985ed"
	I0116 03:48:24.354671  507339 cri.go:89] found id: ""
	I0116 03:48:24.354683  507339 logs.go:284] 1 containers: [eba2964f029ac61d9cfb59dc548ae05c832f9b23713f51924291fbfc5de985ed]
	I0116 03:48:24.354756  507339 ssh_runner.go:195] Run: which crictl
	I0116 03:48:24.359806  507339 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0116 03:48:24.359889  507339 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0116 03:48:24.418188  507339 cri.go:89] found id: "802d4c55aa04370ff079f09dfe4670f5dac1142ca6db880afa17be75f1d20a76"
	I0116 03:48:24.418239  507339 cri.go:89] found id: ""
	I0116 03:48:24.418251  507339 logs.go:284] 1 containers: [802d4c55aa04370ff079f09dfe4670f5dac1142ca6db880afa17be75f1d20a76]
	I0116 03:48:24.418330  507339 ssh_runner.go:195] Run: which crictl
	I0116 03:48:24.422943  507339 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0116 03:48:24.423030  507339 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0116 03:48:24.467349  507339 cri.go:89] found id: ""
	I0116 03:48:24.467383  507339 logs.go:284] 0 containers: []
	W0116 03:48:24.467394  507339 logs.go:286] No container was found matching "kindnet"
	I0116 03:48:24.467403  507339 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0116 03:48:24.467466  507339 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0116 03:48:24.517490  507339 cri.go:89] found id: "b7164c1b7732cf6d54642cb9ffc8d9f16696adc0e428c3fa16cf914420d73e1d"
	I0116 03:48:24.517525  507339 cri.go:89] found id: "59754e94eb3cf3fecfedb9077fbad2ecc2618c351fa9bfa3faed0d780fdd9ecb"
	I0116 03:48:24.517539  507339 cri.go:89] found id: ""
	I0116 03:48:24.517548  507339 logs.go:284] 2 containers: [b7164c1b7732cf6d54642cb9ffc8d9f16696adc0e428c3fa16cf914420d73e1d 59754e94eb3cf3fecfedb9077fbad2ecc2618c351fa9bfa3faed0d780fdd9ecb]
	I0116 03:48:24.517619  507339 ssh_runner.go:195] Run: which crictl
	I0116 03:48:24.521952  507339 ssh_runner.go:195] Run: which crictl
	I0116 03:48:24.526246  507339 logs.go:123] Gathering logs for kube-apiserver [de79f87bc28449077936386f603cc5df933f63455bc96cf6ff7e7c6e4bd32bc4] ...
	I0116 03:48:24.526277  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de79f87bc28449077936386f603cc5df933f63455bc96cf6ff7e7c6e4bd32bc4"
	I0116 03:48:24.583067  507339 logs.go:123] Gathering logs for kube-proxy [eba2964f029ac61d9cfb59dc548ae05c832f9b23713f51924291fbfc5de985ed] ...
	I0116 03:48:24.583108  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eba2964f029ac61d9cfb59dc548ae05c832f9b23713f51924291fbfc5de985ed"
	I0116 03:48:24.631278  507339 logs.go:123] Gathering logs for CRI-O ...
	I0116 03:48:24.631312  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0116 03:48:25.099279  507339 logs.go:123] Gathering logs for describe nodes ...
	I0116 03:48:25.099330  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0116 03:48:25.241388  507339 logs.go:123] Gathering logs for etcd [01aaf51cd40b9137f2395404bb15b7372927f81dc8a4e884600dc8cba9bbeb8e] ...
	I0116 03:48:25.241433  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 01aaf51cd40b9137f2395404bb15b7372927f81dc8a4e884600dc8cba9bbeb8e"
	I0116 03:48:25.298748  507339 logs.go:123] Gathering logs for storage-provisioner [59754e94eb3cf3fecfedb9077fbad2ecc2618c351fa9bfa3faed0d780fdd9ecb] ...
	I0116 03:48:25.298787  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 59754e94eb3cf3fecfedb9077fbad2ecc2618c351fa9bfa3faed0d780fdd9ecb"
	I0116 03:48:25.338169  507339 logs.go:123] Gathering logs for kubelet ...
	I0116 03:48:25.338204  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0116 03:48:25.396275  507339 logs.go:123] Gathering logs for coredns [c13ef036a1014a6bbc5c179a0889fb6ff615988713589da58240b7c637adf687] ...
	I0116 03:48:25.396320  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c13ef036a1014a6bbc5c179a0889fb6ff615988713589da58240b7c637adf687"
	I0116 03:48:25.448028  507339 logs.go:123] Gathering logs for storage-provisioner [b7164c1b7732cf6d54642cb9ffc8d9f16696adc0e428c3fa16cf914420d73e1d] ...
	I0116 03:48:25.448087  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b7164c1b7732cf6d54642cb9ffc8d9f16696adc0e428c3fa16cf914420d73e1d"
	I0116 03:48:25.492640  507339 logs.go:123] Gathering logs for container status ...
	I0116 03:48:25.492673  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0116 03:48:25.541478  507339 logs.go:123] Gathering logs for dmesg ...
	I0116 03:48:25.541572  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0116 03:48:25.557537  507339 logs.go:123] Gathering logs for kube-scheduler [33381edd7dded1b5ed158738429b4a7f1f03a6171f2af0832bb5a9864c950725] ...
	I0116 03:48:25.557569  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33381edd7dded1b5ed158738429b4a7f1f03a6171f2af0832bb5a9864c950725"
	I0116 03:48:25.599921  507339 logs.go:123] Gathering logs for kube-controller-manager [802d4c55aa04370ff079f09dfe4670f5dac1142ca6db880afa17be75f1d20a76] ...
	I0116 03:48:25.599956  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 802d4c55aa04370ff079f09dfe4670f5dac1142ca6db880afa17be75f1d20a76"
	I0116 03:48:23.368308  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:25.368495  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:22.825098  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:24.827094  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:24.942708  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:27.441008  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:29.452010  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:28.158281  507339 api_server.go:253] Checking apiserver healthz at https://192.168.39.103:8443/healthz ...
	I0116 03:48:28.165500  507339 api_server.go:279] https://192.168.39.103:8443/healthz returned 200:
	ok
	I0116 03:48:28.166907  507339 api_server.go:141] control plane version: v1.29.0-rc.2
	I0116 03:48:28.166933  507339 api_server.go:131] duration metric: took 4.061531357s to wait for apiserver health ...
	I0116 03:48:28.166943  507339 system_pods.go:43] waiting for kube-system pods to appear ...
	I0116 03:48:28.166996  507339 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0116 03:48:28.167056  507339 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0116 03:48:28.209247  507339 cri.go:89] found id: "de79f87bc28449077936386f603cc5df933f63455bc96cf6ff7e7c6e4bd32bc4"
	I0116 03:48:28.209282  507339 cri.go:89] found id: ""
	I0116 03:48:28.209295  507339 logs.go:284] 1 containers: [de79f87bc28449077936386f603cc5df933f63455bc96cf6ff7e7c6e4bd32bc4]
	I0116 03:48:28.209361  507339 ssh_runner.go:195] Run: which crictl
	I0116 03:48:28.214044  507339 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0116 03:48:28.214126  507339 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0116 03:48:28.263791  507339 cri.go:89] found id: "01aaf51cd40b9137f2395404bb15b7372927f81dc8a4e884600dc8cba9bbeb8e"
	I0116 03:48:28.263817  507339 cri.go:89] found id: ""
	I0116 03:48:28.263825  507339 logs.go:284] 1 containers: [01aaf51cd40b9137f2395404bb15b7372927f81dc8a4e884600dc8cba9bbeb8e]
	I0116 03:48:28.263889  507339 ssh_runner.go:195] Run: which crictl
	I0116 03:48:28.268803  507339 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0116 03:48:28.268893  507339 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0116 03:48:28.311035  507339 cri.go:89] found id: "c13ef036a1014a6bbc5c179a0889fb6ff615988713589da58240b7c637adf687"
	I0116 03:48:28.311062  507339 cri.go:89] found id: ""
	I0116 03:48:28.311070  507339 logs.go:284] 1 containers: [c13ef036a1014a6bbc5c179a0889fb6ff615988713589da58240b7c637adf687]
	I0116 03:48:28.311132  507339 ssh_runner.go:195] Run: which crictl
	I0116 03:48:28.315791  507339 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0116 03:48:28.315871  507339 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0116 03:48:28.366917  507339 cri.go:89] found id: "33381edd7dded1b5ed158738429b4a7f1f03a6171f2af0832bb5a9864c950725"
	I0116 03:48:28.366947  507339 cri.go:89] found id: ""
	I0116 03:48:28.366957  507339 logs.go:284] 1 containers: [33381edd7dded1b5ed158738429b4a7f1f03a6171f2af0832bb5a9864c950725]
	I0116 03:48:28.367028  507339 ssh_runner.go:195] Run: which crictl
	I0116 03:48:28.372648  507339 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0116 03:48:28.372723  507339 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0116 03:48:28.415530  507339 cri.go:89] found id: "eba2964f029ac61d9cfb59dc548ae05c832f9b23713f51924291fbfc5de985ed"
	I0116 03:48:28.415566  507339 cri.go:89] found id: ""
	I0116 03:48:28.415577  507339 logs.go:284] 1 containers: [eba2964f029ac61d9cfb59dc548ae05c832f9b23713f51924291fbfc5de985ed]
	I0116 03:48:28.415669  507339 ssh_runner.go:195] Run: which crictl
	I0116 03:48:28.420784  507339 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0116 03:48:28.420865  507339 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0116 03:48:28.474238  507339 cri.go:89] found id: "802d4c55aa04370ff079f09dfe4670f5dac1142ca6db880afa17be75f1d20a76"
	I0116 03:48:28.474262  507339 cri.go:89] found id: ""
	I0116 03:48:28.474270  507339 logs.go:284] 1 containers: [802d4c55aa04370ff079f09dfe4670f5dac1142ca6db880afa17be75f1d20a76]
	I0116 03:48:28.474335  507339 ssh_runner.go:195] Run: which crictl
	I0116 03:48:28.479547  507339 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0116 03:48:28.479637  507339 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0116 03:48:28.526403  507339 cri.go:89] found id: ""
	I0116 03:48:28.526436  507339 logs.go:284] 0 containers: []
	W0116 03:48:28.526455  507339 logs.go:286] No container was found matching "kindnet"
	I0116 03:48:28.526466  507339 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0116 03:48:28.526535  507339 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0116 03:48:28.572958  507339 cri.go:89] found id: "b7164c1b7732cf6d54642cb9ffc8d9f16696adc0e428c3fa16cf914420d73e1d"
	I0116 03:48:28.572988  507339 cri.go:89] found id: "59754e94eb3cf3fecfedb9077fbad2ecc2618c351fa9bfa3faed0d780fdd9ecb"
	I0116 03:48:28.572994  507339 cri.go:89] found id: ""
	I0116 03:48:28.573002  507339 logs.go:284] 2 containers: [b7164c1b7732cf6d54642cb9ffc8d9f16696adc0e428c3fa16cf914420d73e1d 59754e94eb3cf3fecfedb9077fbad2ecc2618c351fa9bfa3faed0d780fdd9ecb]
	I0116 03:48:28.573064  507339 ssh_runner.go:195] Run: which crictl
	I0116 03:48:28.579388  507339 ssh_runner.go:195] Run: which crictl
	I0116 03:48:28.585318  507339 logs.go:123] Gathering logs for kube-apiserver [de79f87bc28449077936386f603cc5df933f63455bc96cf6ff7e7c6e4bd32bc4] ...
	I0116 03:48:28.585355  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de79f87bc28449077936386f603cc5df933f63455bc96cf6ff7e7c6e4bd32bc4"
	I0116 03:48:28.640376  507339 logs.go:123] Gathering logs for kube-controller-manager [802d4c55aa04370ff079f09dfe4670f5dac1142ca6db880afa17be75f1d20a76] ...
	I0116 03:48:28.640419  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 802d4c55aa04370ff079f09dfe4670f5dac1142ca6db880afa17be75f1d20a76"
	I0116 03:48:28.701292  507339 logs.go:123] Gathering logs for storage-provisioner [59754e94eb3cf3fecfedb9077fbad2ecc2618c351fa9bfa3faed0d780fdd9ecb] ...
	I0116 03:48:28.701332  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 59754e94eb3cf3fecfedb9077fbad2ecc2618c351fa9bfa3faed0d780fdd9ecb"
	I0116 03:48:28.744571  507339 logs.go:123] Gathering logs for container status ...
	I0116 03:48:28.744605  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0116 03:48:28.794905  507339 logs.go:123] Gathering logs for kubelet ...
	I0116 03:48:28.794942  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0116 03:48:28.847687  507339 logs.go:123] Gathering logs for dmesg ...
	I0116 03:48:28.847736  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0116 03:48:28.861641  507339 logs.go:123] Gathering logs for describe nodes ...
	I0116 03:48:28.861690  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0116 03:48:29.036673  507339 logs.go:123] Gathering logs for kube-proxy [eba2964f029ac61d9cfb59dc548ae05c832f9b23713f51924291fbfc5de985ed] ...
	I0116 03:48:29.036709  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eba2964f029ac61d9cfb59dc548ae05c832f9b23713f51924291fbfc5de985ed"
	I0116 03:48:29.084792  507339 logs.go:123] Gathering logs for CRI-O ...
	I0116 03:48:29.084823  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0116 03:48:29.449656  507339 logs.go:123] Gathering logs for etcd [01aaf51cd40b9137f2395404bb15b7372927f81dc8a4e884600dc8cba9bbeb8e] ...
	I0116 03:48:29.449707  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 01aaf51cd40b9137f2395404bb15b7372927f81dc8a4e884600dc8cba9bbeb8e"
	I0116 03:48:29.502412  507339 logs.go:123] Gathering logs for storage-provisioner [b7164c1b7732cf6d54642cb9ffc8d9f16696adc0e428c3fa16cf914420d73e1d] ...
	I0116 03:48:29.502460  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b7164c1b7732cf6d54642cb9ffc8d9f16696adc0e428c3fa16cf914420d73e1d"
	I0116 03:48:29.546471  507339 logs.go:123] Gathering logs for coredns [c13ef036a1014a6bbc5c179a0889fb6ff615988713589da58240b7c637adf687] ...
	I0116 03:48:29.546520  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c13ef036a1014a6bbc5c179a0889fb6ff615988713589da58240b7c637adf687"
	I0116 03:48:29.594282  507339 logs.go:123] Gathering logs for kube-scheduler [33381edd7dded1b5ed158738429b4a7f1f03a6171f2af0832bb5a9864c950725] ...
	I0116 03:48:29.594329  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33381edd7dded1b5ed158738429b4a7f1f03a6171f2af0832bb5a9864c950725"
	I0116 03:48:27.867485  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:29.868504  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:27.324987  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:29.325330  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:31.329373  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:32.146165  507339 system_pods.go:59] 8 kube-system pods found
	I0116 03:48:32.146209  507339 system_pods.go:61] "coredns-76f75df574-lr95b" [15dc0b11-f7ec-4729-bbfa-79b9649fbad6] Running
	I0116 03:48:32.146218  507339 system_pods.go:61] "etcd-no-preload-666547" [f98dcc57-5869-4a35-b443-2e6c9beddd88] Running
	I0116 03:48:32.146225  507339 system_pods.go:61] "kube-apiserver-no-preload-666547" [3aceae9d-224a-4e8c-a02d-ad4199d8d558] Running
	I0116 03:48:32.146232  507339 system_pods.go:61] "kube-controller-manager-no-preload-666547" [af39c89c-3b41-4d6f-b075-16de04a3ecc0] Running
	I0116 03:48:32.146238  507339 system_pods.go:61] "kube-proxy-dcmrn" [1e91c96f-cbc5-424d-a09e-06e34bf7a2e2] Running
	I0116 03:48:32.146244  507339 system_pods.go:61] "kube-scheduler-no-preload-666547" [0f2aa713-aebc-4858-a348-32169021235e] Running
	I0116 03:48:32.146253  507339 system_pods.go:61] "metrics-server-57f55c9bc5-78vfj" [dbd2d3b2-ec0f-4253-8549-7c4299522c37] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:48:32.146261  507339 system_pods.go:61] "storage-provisioner" [f4e1ba45-217d-41d5-b583-2f60044879bc] Running
	I0116 03:48:32.146272  507339 system_pods.go:74] duration metric: took 3.979321091s to wait for pod list to return data ...
	I0116 03:48:32.146286  507339 default_sa.go:34] waiting for default service account to be created ...
	I0116 03:48:32.149674  507339 default_sa.go:45] found service account: "default"
	I0116 03:48:32.149702  507339 default_sa.go:55] duration metric: took 3.408362ms for default service account to be created ...
	I0116 03:48:32.149710  507339 system_pods.go:116] waiting for k8s-apps to be running ...
	I0116 03:48:32.160459  507339 system_pods.go:86] 8 kube-system pods found
	I0116 03:48:32.160495  507339 system_pods.go:89] "coredns-76f75df574-lr95b" [15dc0b11-f7ec-4729-bbfa-79b9649fbad6] Running
	I0116 03:48:32.160503  507339 system_pods.go:89] "etcd-no-preload-666547" [f98dcc57-5869-4a35-b443-2e6c9beddd88] Running
	I0116 03:48:32.160510  507339 system_pods.go:89] "kube-apiserver-no-preload-666547" [3aceae9d-224a-4e8c-a02d-ad4199d8d558] Running
	I0116 03:48:32.160518  507339 system_pods.go:89] "kube-controller-manager-no-preload-666547" [af39c89c-3b41-4d6f-b075-16de04a3ecc0] Running
	I0116 03:48:32.160524  507339 system_pods.go:89] "kube-proxy-dcmrn" [1e91c96f-cbc5-424d-a09e-06e34bf7a2e2] Running
	I0116 03:48:32.160529  507339 system_pods.go:89] "kube-scheduler-no-preload-666547" [0f2aa713-aebc-4858-a348-32169021235e] Running
	I0116 03:48:32.160540  507339 system_pods.go:89] "metrics-server-57f55c9bc5-78vfj" [dbd2d3b2-ec0f-4253-8549-7c4299522c37] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:48:32.160548  507339 system_pods.go:89] "storage-provisioner" [f4e1ba45-217d-41d5-b583-2f60044879bc] Running
	I0116 03:48:32.160560  507339 system_pods.go:126] duration metric: took 10.843124ms to wait for k8s-apps to be running ...
	I0116 03:48:32.160569  507339 system_svc.go:44] waiting for kubelet service to be running ....
	I0116 03:48:32.160629  507339 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 03:48:32.179349  507339 system_svc.go:56] duration metric: took 18.767357ms WaitForService to wait for kubelet.
	I0116 03:48:32.179391  507339 kubeadm.go:581] duration metric: took 4m25.181271548s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0116 03:48:32.179426  507339 node_conditions.go:102] verifying NodePressure condition ...
	I0116 03:48:32.185135  507339 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 03:48:32.185165  507339 node_conditions.go:123] node cpu capacity is 2
	I0116 03:48:32.185198  507339 node_conditions.go:105] duration metric: took 5.766084ms to run NodePressure ...
	I0116 03:48:32.185219  507339 start.go:228] waiting for startup goroutines ...
	I0116 03:48:32.185228  507339 start.go:233] waiting for cluster config update ...
	I0116 03:48:32.185269  507339 start.go:242] writing updated cluster config ...
	I0116 03:48:32.185860  507339 ssh_runner.go:195] Run: rm -f paused
	I0116 03:48:32.243812  507339 start.go:600] kubectl: 1.29.0, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0116 03:48:32.246056  507339 out.go:177] * Done! kubectl is now configured to use "no-preload-666547" cluster and "default" namespace by default
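	(Editor's note: at this point the no-preload-666547 cluster is up — the healthz probe above returned 200, all eight kube-system pods were listed, and only metrics-server-57f55c9bc5-78vfj remains Pending. A minimal sketch of re-checking the same signals by hand, assuming the profile is still running and the kubeconfig context matches the profile name, as the "Done!" line states:

	kubectl --context no-preload-666547 get --raw /healthz                 # same endpoint polled at 03:48:28
	kubectl --context no-preload-666547 -n kube-system get pods -o wide    # the pod list summarized above
	minikube -p no-preload-666547 ssh -- sudo crictl ps --name=kube-apiserver
	)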
	I0116 03:48:31.940664  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:33.941163  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:31.868778  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:34.367292  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:33.825761  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:35.829060  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:36.440459  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:38.440778  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:36.367672  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:38.867024  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:40.867193  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:38.325077  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:40.326947  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:40.440990  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:42.942197  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:43.365931  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:45.367057  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:42.826200  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:44.827292  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:45.441601  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:47.443035  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:47.367959  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:49.867083  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:47.326224  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:49.326339  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:49.940592  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:51.942424  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:54.440478  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:51.868254  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:54.368867  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:51.825317  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:52.325756  507889 pod_ready.go:81] duration metric: took 4m0.008011182s waiting for pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace to be "Ready" ...
	E0116 03:48:52.325782  507889 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0116 03:48:52.325790  507889 pod_ready.go:38] duration metric: took 4m4.320002841s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
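	(Editor's note: the two lines above are where this run gives up on metrics-server — pod metrics-server-57f55c9bc5-894n2 never reported Ready within the 4-minute extra-wait budget, so the wait ends with "context deadline exceeded" and verification continues without it. A minimal sketch of inspecting such a stuck pod by hand, assuming the kubeconfig context matches the default-k8s-diff-port-434445 profile named in the pod list further below:

	kubectl --context default-k8s-diff-port-434445 -n kube-system describe pod metrics-server-57f55c9bc5-894n2
	kubectl --context default-k8s-diff-port-434445 -n kube-system wait --for=condition=Ready pod/metrics-server-57f55c9bc5-894n2 --timeout=4m
	)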
	I0116 03:48:52.325804  507889 api_server.go:52] waiting for apiserver process to appear ...
	I0116 03:48:52.325855  507889 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0116 03:48:52.325905  507889 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0116 03:48:52.394600  507889 cri.go:89] found id: "f9861ff0fbab72660648bfcb49b82810bada99f73a55cb0aa359ba114825b8f3"
	I0116 03:48:52.394624  507889 cri.go:89] found id: ""
	I0116 03:48:52.394632  507889 logs.go:284] 1 containers: [f9861ff0fbab72660648bfcb49b82810bada99f73a55cb0aa359ba114825b8f3]
	I0116 03:48:52.394716  507889 ssh_runner.go:195] Run: which crictl
	I0116 03:48:52.400137  507889 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0116 03:48:52.400232  507889 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0116 03:48:52.444453  507889 cri.go:89] found id: "e2758ac4468b10765b35629ffc54228655bc18f79a654cc03e91929ebdc3cf90"
	I0116 03:48:52.444485  507889 cri.go:89] found id: ""
	I0116 03:48:52.444495  507889 logs.go:284] 1 containers: [e2758ac4468b10765b35629ffc54228655bc18f79a654cc03e91929ebdc3cf90]
	I0116 03:48:52.444557  507889 ssh_runner.go:195] Run: which crictl
	I0116 03:48:52.449850  507889 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0116 03:48:52.450002  507889 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0116 03:48:52.499160  507889 cri.go:89] found id: "a07ae23e6e9e3c589264d0be7095896549a4b71fa2d0b06805a3b1b3bea16f6a"
	I0116 03:48:52.499204  507889 cri.go:89] found id: ""
	I0116 03:48:52.499216  507889 logs.go:284] 1 containers: [a07ae23e6e9e3c589264d0be7095896549a4b71fa2d0b06805a3b1b3bea16f6a]
	I0116 03:48:52.499286  507889 ssh_runner.go:195] Run: which crictl
	I0116 03:48:52.504257  507889 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0116 03:48:52.504357  507889 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0116 03:48:52.563747  507889 cri.go:89] found id: "e60387e0e2800d67c99eb3566f31ad675c0db6c22379fc28ee9a447af5f5023c"
	I0116 03:48:52.563782  507889 cri.go:89] found id: ""
	I0116 03:48:52.563790  507889 logs.go:284] 1 containers: [e60387e0e2800d67c99eb3566f31ad675c0db6c22379fc28ee9a447af5f5023c]
	I0116 03:48:52.563860  507889 ssh_runner.go:195] Run: which crictl
	I0116 03:48:52.568676  507889 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0116 03:48:52.568771  507889 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0116 03:48:52.617090  507889 cri.go:89] found id: "44f71a7069827d97c7326cbd36638ec36ff39cbbe0a1e4e61e7c010a38d2e123"
	I0116 03:48:52.617136  507889 cri.go:89] found id: ""
	I0116 03:48:52.617149  507889 logs.go:284] 1 containers: [44f71a7069827d97c7326cbd36638ec36ff39cbbe0a1e4e61e7c010a38d2e123]
	I0116 03:48:52.617222  507889 ssh_runner.go:195] Run: which crictl
	I0116 03:48:52.622121  507889 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0116 03:48:52.622224  507889 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0116 03:48:52.685004  507889 cri.go:89] found id: "1438a3832328a03b94c3d408a2e332c0fb0adab915a9d4f26994e84c0509fac9"
	I0116 03:48:52.685033  507889 cri.go:89] found id: ""
	I0116 03:48:52.685043  507889 logs.go:284] 1 containers: [1438a3832328a03b94c3d408a2e332c0fb0adab915a9d4f26994e84c0509fac9]
	I0116 03:48:52.685113  507889 ssh_runner.go:195] Run: which crictl
	I0116 03:48:52.689837  507889 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0116 03:48:52.689913  507889 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0116 03:48:52.730008  507889 cri.go:89] found id: ""
	I0116 03:48:52.730034  507889 logs.go:284] 0 containers: []
	W0116 03:48:52.730044  507889 logs.go:286] No container was found matching "kindnet"
	I0116 03:48:52.730051  507889 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0116 03:48:52.730120  507889 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0116 03:48:52.780523  507889 cri.go:89] found id: "33ba3a03d878abc86a194defd715bb61a0066e49063e3823002bd3ec421da138"
	I0116 03:48:52.780554  507889 cri.go:89] found id: "a4b27881ef90cea325e8f306fac88a76052eec15a693c291288664b8f4ebcc2a"
	I0116 03:48:52.780562  507889 cri.go:89] found id: ""
	I0116 03:48:52.780571  507889 logs.go:284] 2 containers: [33ba3a03d878abc86a194defd715bb61a0066e49063e3823002bd3ec421da138 a4b27881ef90cea325e8f306fac88a76052eec15a693c291288664b8f4ebcc2a]
	I0116 03:48:52.780641  507889 ssh_runner.go:195] Run: which crictl
	I0116 03:48:52.787305  507889 ssh_runner.go:195] Run: which crictl
	I0116 03:48:52.791352  507889 logs.go:123] Gathering logs for kubelet ...
	I0116 03:48:52.791383  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0116 03:48:52.859099  507889 logs.go:123] Gathering logs for etcd [e2758ac4468b10765b35629ffc54228655bc18f79a654cc03e91929ebdc3cf90] ...
	I0116 03:48:52.859152  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e2758ac4468b10765b35629ffc54228655bc18f79a654cc03e91929ebdc3cf90"
	I0116 03:48:52.912806  507889 logs.go:123] Gathering logs for coredns [a07ae23e6e9e3c589264d0be7095896549a4b71fa2d0b06805a3b1b3bea16f6a] ...
	I0116 03:48:52.912852  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a07ae23e6e9e3c589264d0be7095896549a4b71fa2d0b06805a3b1b3bea16f6a"
	I0116 03:48:52.960880  507889 logs.go:123] Gathering logs for kube-controller-manager [1438a3832328a03b94c3d408a2e332c0fb0adab915a9d4f26994e84c0509fac9] ...
	I0116 03:48:52.960919  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1438a3832328a03b94c3d408a2e332c0fb0adab915a9d4f26994e84c0509fac9"
	I0116 03:48:53.023064  507889 logs.go:123] Gathering logs for CRI-O ...
	I0116 03:48:53.023110  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0116 03:48:53.524890  507889 logs.go:123] Gathering logs for kube-apiserver [f9861ff0fbab72660648bfcb49b82810bada99f73a55cb0aa359ba114825b8f3] ...
	I0116 03:48:53.524934  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f9861ff0fbab72660648bfcb49b82810bada99f73a55cb0aa359ba114825b8f3"
	I0116 03:48:53.587550  507889 logs.go:123] Gathering logs for kube-scheduler [e60387e0e2800d67c99eb3566f31ad675c0db6c22379fc28ee9a447af5f5023c] ...
	I0116 03:48:53.587594  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e60387e0e2800d67c99eb3566f31ad675c0db6c22379fc28ee9a447af5f5023c"
	I0116 03:48:53.627986  507889 logs.go:123] Gathering logs for kube-proxy [44f71a7069827d97c7326cbd36638ec36ff39cbbe0a1e4e61e7c010a38d2e123] ...
	I0116 03:48:53.628029  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44f71a7069827d97c7326cbd36638ec36ff39cbbe0a1e4e61e7c010a38d2e123"
	I0116 03:48:53.671704  507889 logs.go:123] Gathering logs for dmesg ...
	I0116 03:48:53.671739  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0116 03:48:53.686333  507889 logs.go:123] Gathering logs for describe nodes ...
	I0116 03:48:53.686370  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0116 03:48:53.855391  507889 logs.go:123] Gathering logs for storage-provisioner [33ba3a03d878abc86a194defd715bb61a0066e49063e3823002bd3ec421da138] ...
	I0116 03:48:53.855435  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33ba3a03d878abc86a194defd715bb61a0066e49063e3823002bd3ec421da138"
	I0116 03:48:53.906028  507889 logs.go:123] Gathering logs for storage-provisioner [a4b27881ef90cea325e8f306fac88a76052eec15a693c291288664b8f4ebcc2a] ...
	I0116 03:48:53.906064  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a4b27881ef90cea325e8f306fac88a76052eec15a693c291288664b8f4ebcc2a"
	I0116 03:48:53.945386  507889 logs.go:123] Gathering logs for container status ...
	I0116 03:48:53.945419  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0116 03:48:56.498685  507889 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:48:56.516768  507889 api_server.go:72] duration metric: took 4m13.505914609s to wait for apiserver process to appear ...
	I0116 03:48:56.516797  507889 api_server.go:88] waiting for apiserver healthz status ...
	I0116 03:48:56.516836  507889 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0116 03:48:56.516907  507889 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0116 03:48:56.563236  507889 cri.go:89] found id: "f9861ff0fbab72660648bfcb49b82810bada99f73a55cb0aa359ba114825b8f3"
	I0116 03:48:56.563272  507889 cri.go:89] found id: ""
	I0116 03:48:56.563283  507889 logs.go:284] 1 containers: [f9861ff0fbab72660648bfcb49b82810bada99f73a55cb0aa359ba114825b8f3]
	I0116 03:48:56.563356  507889 ssh_runner.go:195] Run: which crictl
	I0116 03:48:56.568012  507889 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0116 03:48:56.568188  507889 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0116 03:48:56.443226  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:58.940353  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:56.868597  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:59.366906  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:56.613095  507889 cri.go:89] found id: "e2758ac4468b10765b35629ffc54228655bc18f79a654cc03e91929ebdc3cf90"
	I0116 03:48:56.613120  507889 cri.go:89] found id: ""
	I0116 03:48:56.613129  507889 logs.go:284] 1 containers: [e2758ac4468b10765b35629ffc54228655bc18f79a654cc03e91929ebdc3cf90]
	I0116 03:48:56.613190  507889 ssh_runner.go:195] Run: which crictl
	I0116 03:48:56.618736  507889 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0116 03:48:56.618827  507889 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0116 03:48:56.672773  507889 cri.go:89] found id: "a07ae23e6e9e3c589264d0be7095896549a4b71fa2d0b06805a3b1b3bea16f6a"
	I0116 03:48:56.672796  507889 cri.go:89] found id: ""
	I0116 03:48:56.672805  507889 logs.go:284] 1 containers: [a07ae23e6e9e3c589264d0be7095896549a4b71fa2d0b06805a3b1b3bea16f6a]
	I0116 03:48:56.672855  507889 ssh_runner.go:195] Run: which crictl
	I0116 03:48:56.679218  507889 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0116 03:48:56.679293  507889 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0116 03:48:56.724517  507889 cri.go:89] found id: "e60387e0e2800d67c99eb3566f31ad675c0db6c22379fc28ee9a447af5f5023c"
	I0116 03:48:56.724547  507889 cri.go:89] found id: ""
	I0116 03:48:56.724555  507889 logs.go:284] 1 containers: [e60387e0e2800d67c99eb3566f31ad675c0db6c22379fc28ee9a447af5f5023c]
	I0116 03:48:56.724622  507889 ssh_runner.go:195] Run: which crictl
	I0116 03:48:56.730061  507889 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0116 03:48:56.730146  507889 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0116 03:48:56.775380  507889 cri.go:89] found id: "44f71a7069827d97c7326cbd36638ec36ff39cbbe0a1e4e61e7c010a38d2e123"
	I0116 03:48:56.775413  507889 cri.go:89] found id: ""
	I0116 03:48:56.775423  507889 logs.go:284] 1 containers: [44f71a7069827d97c7326cbd36638ec36ff39cbbe0a1e4e61e7c010a38d2e123]
	I0116 03:48:56.775494  507889 ssh_runner.go:195] Run: which crictl
	I0116 03:48:56.781085  507889 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0116 03:48:56.781183  507889 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0116 03:48:56.830030  507889 cri.go:89] found id: "1438a3832328a03b94c3d408a2e332c0fb0adab915a9d4f26994e84c0509fac9"
	I0116 03:48:56.830067  507889 cri.go:89] found id: ""
	I0116 03:48:56.830076  507889 logs.go:284] 1 containers: [1438a3832328a03b94c3d408a2e332c0fb0adab915a9d4f26994e84c0509fac9]
	I0116 03:48:56.830163  507889 ssh_runner.go:195] Run: which crictl
	I0116 03:48:56.834956  507889 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0116 03:48:56.835035  507889 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0116 03:48:56.882972  507889 cri.go:89] found id: ""
	I0116 03:48:56.883001  507889 logs.go:284] 0 containers: []
	W0116 03:48:56.883013  507889 logs.go:286] No container was found matching "kindnet"
	I0116 03:48:56.883022  507889 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0116 03:48:56.883095  507889 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0116 03:48:56.925520  507889 cri.go:89] found id: "33ba3a03d878abc86a194defd715bb61a0066e49063e3823002bd3ec421da138"
	I0116 03:48:56.925553  507889 cri.go:89] found id: "a4b27881ef90cea325e8f306fac88a76052eec15a693c291288664b8f4ebcc2a"
	I0116 03:48:56.925560  507889 cri.go:89] found id: ""
	I0116 03:48:56.925574  507889 logs.go:284] 2 containers: [33ba3a03d878abc86a194defd715bb61a0066e49063e3823002bd3ec421da138 a4b27881ef90cea325e8f306fac88a76052eec15a693c291288664b8f4ebcc2a]
	I0116 03:48:56.925656  507889 ssh_runner.go:195] Run: which crictl
	I0116 03:48:56.931331  507889 ssh_runner.go:195] Run: which crictl
	I0116 03:48:56.936492  507889 logs.go:123] Gathering logs for storage-provisioner [a4b27881ef90cea325e8f306fac88a76052eec15a693c291288664b8f4ebcc2a] ...
	I0116 03:48:56.936527  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a4b27881ef90cea325e8f306fac88a76052eec15a693c291288664b8f4ebcc2a"
	I0116 03:48:56.981819  507889 logs.go:123] Gathering logs for kubelet ...
	I0116 03:48:56.981851  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0116 03:48:57.045678  507889 logs.go:123] Gathering logs for dmesg ...
	I0116 03:48:57.045723  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0116 03:48:57.060832  507889 logs.go:123] Gathering logs for etcd [e2758ac4468b10765b35629ffc54228655bc18f79a654cc03e91929ebdc3cf90] ...
	I0116 03:48:57.060872  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e2758ac4468b10765b35629ffc54228655bc18f79a654cc03e91929ebdc3cf90"
	I0116 03:48:57.123644  507889 logs.go:123] Gathering logs for kube-scheduler [e60387e0e2800d67c99eb3566f31ad675c0db6c22379fc28ee9a447af5f5023c] ...
	I0116 03:48:57.123695  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e60387e0e2800d67c99eb3566f31ad675c0db6c22379fc28ee9a447af5f5023c"
	I0116 03:48:57.170173  507889 logs.go:123] Gathering logs for kube-proxy [44f71a7069827d97c7326cbd36638ec36ff39cbbe0a1e4e61e7c010a38d2e123] ...
	I0116 03:48:57.170216  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44f71a7069827d97c7326cbd36638ec36ff39cbbe0a1e4e61e7c010a38d2e123"
	I0116 03:48:57.215434  507889 logs.go:123] Gathering logs for describe nodes ...
	I0116 03:48:57.215470  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0116 03:48:57.370036  507889 logs.go:123] Gathering logs for kube-controller-manager [1438a3832328a03b94c3d408a2e332c0fb0adab915a9d4f26994e84c0509fac9] ...
	I0116 03:48:57.370081  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1438a3832328a03b94c3d408a2e332c0fb0adab915a9d4f26994e84c0509fac9"
	I0116 03:48:57.432988  507889 logs.go:123] Gathering logs for container status ...
	I0116 03:48:57.433048  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0116 03:48:57.485239  507889 logs.go:123] Gathering logs for kube-apiserver [f9861ff0fbab72660648bfcb49b82810bada99f73a55cb0aa359ba114825b8f3] ...
	I0116 03:48:57.485284  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f9861ff0fbab72660648bfcb49b82810bada99f73a55cb0aa359ba114825b8f3"
	I0116 03:48:57.547192  507889 logs.go:123] Gathering logs for coredns [a07ae23e6e9e3c589264d0be7095896549a4b71fa2d0b06805a3b1b3bea16f6a] ...
	I0116 03:48:57.547237  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a07ae23e6e9e3c589264d0be7095896549a4b71fa2d0b06805a3b1b3bea16f6a"
	I0116 03:48:57.598025  507889 logs.go:123] Gathering logs for storage-provisioner [33ba3a03d878abc86a194defd715bb61a0066e49063e3823002bd3ec421da138] ...
	I0116 03:48:57.598085  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33ba3a03d878abc86a194defd715bb61a0066e49063e3823002bd3ec421da138"
	I0116 03:48:57.644234  507889 logs.go:123] Gathering logs for CRI-O ...
	I0116 03:48:57.644271  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0116 03:49:00.562219  507889 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8444/healthz ...
	I0116 03:49:00.568196  507889 api_server.go:279] https://192.168.50.236:8444/healthz returned 200:
	ok
	I0116 03:49:00.571612  507889 api_server.go:141] control plane version: v1.28.4
	I0116 03:49:00.571655  507889 api_server.go:131] duration metric: took 4.0548511s to wait for apiserver health ...
	I0116 03:49:00.571668  507889 system_pods.go:43] waiting for kube-system pods to appear ...
	I0116 03:49:00.571701  507889 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0116 03:49:00.571774  507889 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0116 03:49:00.623308  507889 cri.go:89] found id: "f9861ff0fbab72660648bfcb49b82810bada99f73a55cb0aa359ba114825b8f3"
	I0116 03:49:00.623344  507889 cri.go:89] found id: ""
	I0116 03:49:00.623355  507889 logs.go:284] 1 containers: [f9861ff0fbab72660648bfcb49b82810bada99f73a55cb0aa359ba114825b8f3]
	I0116 03:49:00.623418  507889 ssh_runner.go:195] Run: which crictl
	I0116 03:49:00.630287  507889 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0116 03:49:00.630381  507889 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0116 03:49:00.673225  507889 cri.go:89] found id: "e2758ac4468b10765b35629ffc54228655bc18f79a654cc03e91929ebdc3cf90"
	I0116 03:49:00.673265  507889 cri.go:89] found id: ""
	I0116 03:49:00.673276  507889 logs.go:284] 1 containers: [e2758ac4468b10765b35629ffc54228655bc18f79a654cc03e91929ebdc3cf90]
	I0116 03:49:00.673334  507889 ssh_runner.go:195] Run: which crictl
	I0116 03:49:00.678677  507889 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0116 03:49:00.678768  507889 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0116 03:49:00.723055  507889 cri.go:89] found id: "a07ae23e6e9e3c589264d0be7095896549a4b71fa2d0b06805a3b1b3bea16f6a"
	I0116 03:49:00.723081  507889 cri.go:89] found id: ""
	I0116 03:49:00.723089  507889 logs.go:284] 1 containers: [a07ae23e6e9e3c589264d0be7095896549a4b71fa2d0b06805a3b1b3bea16f6a]
	I0116 03:49:00.723148  507889 ssh_runner.go:195] Run: which crictl
	I0116 03:49:00.727931  507889 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0116 03:49:00.728053  507889 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0116 03:49:00.777602  507889 cri.go:89] found id: "e60387e0e2800d67c99eb3566f31ad675c0db6c22379fc28ee9a447af5f5023c"
	I0116 03:49:00.777639  507889 cri.go:89] found id: ""
	I0116 03:49:00.777651  507889 logs.go:284] 1 containers: [e60387e0e2800d67c99eb3566f31ad675c0db6c22379fc28ee9a447af5f5023c]
	I0116 03:49:00.777723  507889 ssh_runner.go:195] Run: which crictl
	I0116 03:49:00.787121  507889 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0116 03:49:00.787206  507889 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0116 03:49:00.835268  507889 cri.go:89] found id: "44f71a7069827d97c7326cbd36638ec36ff39cbbe0a1e4e61e7c010a38d2e123"
	I0116 03:49:00.835300  507889 cri.go:89] found id: ""
	I0116 03:49:00.835310  507889 logs.go:284] 1 containers: [44f71a7069827d97c7326cbd36638ec36ff39cbbe0a1e4e61e7c010a38d2e123]
	I0116 03:49:00.835378  507889 ssh_runner.go:195] Run: which crictl
	I0116 03:49:00.842204  507889 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0116 03:49:00.842299  507889 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0116 03:49:00.889511  507889 cri.go:89] found id: "1438a3832328a03b94c3d408a2e332c0fb0adab915a9d4f26994e84c0509fac9"
	I0116 03:49:00.889541  507889 cri.go:89] found id: ""
	I0116 03:49:00.889551  507889 logs.go:284] 1 containers: [1438a3832328a03b94c3d408a2e332c0fb0adab915a9d4f26994e84c0509fac9]
	I0116 03:49:00.889620  507889 ssh_runner.go:195] Run: which crictl
	I0116 03:49:00.894964  507889 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0116 03:49:00.895059  507889 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0116 03:49:00.937187  507889 cri.go:89] found id: ""
	I0116 03:49:00.937221  507889 logs.go:284] 0 containers: []
	W0116 03:49:00.937237  507889 logs.go:286] No container was found matching "kindnet"
	I0116 03:49:00.937246  507889 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0116 03:49:00.937313  507889 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0116 03:49:00.977711  507889 cri.go:89] found id: "33ba3a03d878abc86a194defd715bb61a0066e49063e3823002bd3ec421da138"
	I0116 03:49:00.977740  507889 cri.go:89] found id: "a4b27881ef90cea325e8f306fac88a76052eec15a693c291288664b8f4ebcc2a"
	I0116 03:49:00.977748  507889 cri.go:89] found id: ""
	I0116 03:49:00.977756  507889 logs.go:284] 2 containers: [33ba3a03d878abc86a194defd715bb61a0066e49063e3823002bd3ec421da138 a4b27881ef90cea325e8f306fac88a76052eec15a693c291288664b8f4ebcc2a]
	I0116 03:49:00.977834  507889 ssh_runner.go:195] Run: which crictl
	I0116 03:49:00.982886  507889 ssh_runner.go:195] Run: which crictl
	I0116 03:49:00.988008  507889 logs.go:123] Gathering logs for describe nodes ...
	I0116 03:49:00.988061  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0116 03:49:01.115755  507889 logs.go:123] Gathering logs for dmesg ...
	I0116 03:49:01.115791  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0116 03:49:01.131706  507889 logs.go:123] Gathering logs for etcd [e2758ac4468b10765b35629ffc54228655bc18f79a654cc03e91929ebdc3cf90] ...
	I0116 03:49:01.131748  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e2758ac4468b10765b35629ffc54228655bc18f79a654cc03e91929ebdc3cf90"
	I0116 03:49:01.186279  507889 logs.go:123] Gathering logs for kube-scheduler [e60387e0e2800d67c99eb3566f31ad675c0db6c22379fc28ee9a447af5f5023c] ...
	I0116 03:49:01.186324  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e60387e0e2800d67c99eb3566f31ad675c0db6c22379fc28ee9a447af5f5023c"
	I0116 03:49:01.231057  507889 logs.go:123] Gathering logs for kube-controller-manager [1438a3832328a03b94c3d408a2e332c0fb0adab915a9d4f26994e84c0509fac9] ...
	I0116 03:49:01.231100  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1438a3832328a03b94c3d408a2e332c0fb0adab915a9d4f26994e84c0509fac9"
	I0116 03:49:01.307541  507889 logs.go:123] Gathering logs for storage-provisioner [33ba3a03d878abc86a194defd715bb61a0066e49063e3823002bd3ec421da138] ...
	I0116 03:49:01.307586  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33ba3a03d878abc86a194defd715bb61a0066e49063e3823002bd3ec421da138"
	I0116 03:49:01.356517  507889 logs.go:123] Gathering logs for storage-provisioner [a4b27881ef90cea325e8f306fac88a76052eec15a693c291288664b8f4ebcc2a] ...
	I0116 03:49:01.356563  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a4b27881ef90cea325e8f306fac88a76052eec15a693c291288664b8f4ebcc2a"
	I0116 03:49:01.409790  507889 logs.go:123] Gathering logs for kube-apiserver [f9861ff0fbab72660648bfcb49b82810bada99f73a55cb0aa359ba114825b8f3] ...
	I0116 03:49:01.409846  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f9861ff0fbab72660648bfcb49b82810bada99f73a55cb0aa359ba114825b8f3"
	I0116 03:49:01.462029  507889 logs.go:123] Gathering logs for CRI-O ...
	I0116 03:49:01.462077  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0116 03:49:00.942100  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:49:02.942316  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:49:01.838933  507889 logs.go:123] Gathering logs for coredns [a07ae23e6e9e3c589264d0be7095896549a4b71fa2d0b06805a3b1b3bea16f6a] ...
	I0116 03:49:01.838999  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a07ae23e6e9e3c589264d0be7095896549a4b71fa2d0b06805a3b1b3bea16f6a"
	I0116 03:49:01.884022  507889 logs.go:123] Gathering logs for kube-proxy [44f71a7069827d97c7326cbd36638ec36ff39cbbe0a1e4e61e7c010a38d2e123] ...
	I0116 03:49:01.884075  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44f71a7069827d97c7326cbd36638ec36ff39cbbe0a1e4e61e7c010a38d2e123"
	I0116 03:49:01.930032  507889 logs.go:123] Gathering logs for container status ...
	I0116 03:49:01.930090  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0116 03:49:01.998827  507889 logs.go:123] Gathering logs for kubelet ...
	I0116 03:49:01.998863  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0116 03:49:04.573529  507889 system_pods.go:59] 8 kube-system pods found
	I0116 03:49:04.573571  507889 system_pods.go:61] "coredns-5dd5756b68-pmx8n" [9d3c9941-8938-4d4a-b6d8-9b542ca6f1ca] Running
	I0116 03:49:04.573579  507889 system_pods.go:61] "etcd-default-k8s-diff-port-434445" [a2327159-d034-4f62-a8f5-14062a41c75d] Running
	I0116 03:49:04.573587  507889 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-434445" [75c5fc5e-86b1-42cf-bb5c-8a1f8b4e13db] Running
	I0116 03:49:04.573594  507889 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-434445" [2568dd4f-124d-4c4f-8651-3b89b2b54983] Running
	I0116 03:49:04.573600  507889 system_pods.go:61] "kube-proxy-dcbqg" [eba1f9bf-6aa7-40cd-b57c-745c2d0cc414] Running
	I0116 03:49:04.573607  507889 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-434445" [479952a1-8c3d-49b4-bd22-fef988e6b83d] Running
	I0116 03:49:04.573617  507889 system_pods.go:61] "metrics-server-57f55c9bc5-894n2" [46e4892a-d026-4a9d-88bc-128e92848808] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:49:04.573626  507889 system_pods.go:61] "storage-provisioner" [16fd4585-3d75-40c3-a28d-4134375f4e3d] Running
	I0116 03:49:04.573638  507889 system_pods.go:74] duration metric: took 4.001961367s to wait for pod list to return data ...
	I0116 03:49:04.573657  507889 default_sa.go:34] waiting for default service account to be created ...
	I0116 03:49:04.577012  507889 default_sa.go:45] found service account: "default"
	I0116 03:49:04.577041  507889 default_sa.go:55] duration metric: took 3.376395ms for default service account to be created ...
	I0116 03:49:04.577051  507889 system_pods.go:116] waiting for k8s-apps to be running ...
	I0116 03:49:04.583833  507889 system_pods.go:86] 8 kube-system pods found
	I0116 03:49:04.583880  507889 system_pods.go:89] "coredns-5dd5756b68-pmx8n" [9d3c9941-8938-4d4a-b6d8-9b542ca6f1ca] Running
	I0116 03:49:04.583890  507889 system_pods.go:89] "etcd-default-k8s-diff-port-434445" [a2327159-d034-4f62-a8f5-14062a41c75d] Running
	I0116 03:49:04.583898  507889 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-434445" [75c5fc5e-86b1-42cf-bb5c-8a1f8b4e13db] Running
	I0116 03:49:04.583905  507889 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-434445" [2568dd4f-124d-4c4f-8651-3b89b2b54983] Running
	I0116 03:49:04.583911  507889 system_pods.go:89] "kube-proxy-dcbqg" [eba1f9bf-6aa7-40cd-b57c-745c2d0cc414] Running
	I0116 03:49:04.583918  507889 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-434445" [479952a1-8c3d-49b4-bd22-fef988e6b83d] Running
	I0116 03:49:04.583928  507889 system_pods.go:89] "metrics-server-57f55c9bc5-894n2" [46e4892a-d026-4a9d-88bc-128e92848808] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:49:04.583936  507889 system_pods.go:89] "storage-provisioner" [16fd4585-3d75-40c3-a28d-4134375f4e3d] Running
	I0116 03:49:04.583950  507889 system_pods.go:126] duration metric: took 6.89136ms to wait for k8s-apps to be running ...
	I0116 03:49:04.583964  507889 system_svc.go:44] waiting for kubelet service to be running ....
	I0116 03:49:04.584016  507889 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 03:49:04.600209  507889 system_svc.go:56] duration metric: took 16.229333ms WaitForService to wait for kubelet.
	I0116 03:49:04.600252  507889 kubeadm.go:581] duration metric: took 4m21.589410808s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0116 03:49:04.600285  507889 node_conditions.go:102] verifying NodePressure condition ...
	I0116 03:49:04.603774  507889 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 03:49:04.603803  507889 node_conditions.go:123] node cpu capacity is 2
	I0116 03:49:04.603815  507889 node_conditions.go:105] duration metric: took 3.52526ms to run NodePressure ...
	I0116 03:49:04.603829  507889 start.go:228] waiting for startup goroutines ...
	I0116 03:49:04.603836  507889 start.go:233] waiting for cluster config update ...
	I0116 03:49:04.603849  507889 start.go:242] writing updated cluster config ...
	I0116 03:49:04.604185  507889 ssh_runner.go:195] Run: rm -f paused
	I0116 03:49:04.658922  507889 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I0116 03:49:04.661265  507889 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-434445" cluster and "default" namespace by default
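Before printing "Done!", the run lists the eight kube-system pods (metrics-server still Pending), checks that the default service account exists, and reads node capacity (17784752Ki ephemeral storage, 2 CPUs). A rough manual equivalent of those checks, using the context name from the line above, would be:

    kubectl --context default-k8s-diff-port-434445 -n kube-system get pods
    kubectl --context default-k8s-diff-port-434445 get serviceaccount default
    kubectl --context default-k8s-diff-port-434445 describe nodes | grep -A 4 Capacity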
	I0116 03:49:01.367935  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:49:03.867391  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:49:05.867519  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:49:05.440602  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:49:07.441041  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:49:08.434235  507510 pod_ready.go:81] duration metric: took 4m0.001038173s waiting for pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace to be "Ready" ...
	E0116 03:49:08.434278  507510 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace to be "Ready" (will not retry!)
	I0116 03:49:08.434304  507510 pod_ready.go:38] duration metric: took 4m1.20014772s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 03:49:08.434338  507510 kubeadm.go:640] restartCluster took 5m11.767236835s
	W0116 03:49:08.434423  507510 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0116 03:49:08.434463  507510 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0116 03:49:07.868307  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:49:10.367347  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:49:15.339252  507510 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (6.904753674s)
	I0116 03:49:15.339341  507510 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 03:49:15.355684  507510 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0116 03:49:15.371377  507510 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0116 03:49:15.393609  507510 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0116 03:49:15.393674  507510 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0116 03:49:15.478382  507510 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0116 03:49:15.478464  507510 kubeadm.go:322] [preflight] Running pre-flight checks
	I0116 03:49:15.663487  507510 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0116 03:49:15.663663  507510 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0116 03:49:15.663803  507510 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0116 03:49:15.940677  507510 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0116 03:49:15.940857  507510 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0116 03:49:15.949553  507510 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0116 03:49:16.075111  507510 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0116 03:49:12.867512  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:49:13.859320  507257 pod_ready.go:81] duration metric: took 4m0.000451049s waiting for pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace to be "Ready" ...
	E0116 03:49:13.859353  507257 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace to be "Ready" (will not retry!)
	I0116 03:49:13.859375  507257 pod_ready.go:38] duration metric: took 4m12.063407854s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 03:49:13.859418  507257 kubeadm.go:640] restartCluster took 4m32.047022773s
	W0116 03:49:13.859484  507257 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0116 03:49:13.859513  507257 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0116 03:49:16.077099  507510 out.go:204]   - Generating certificates and keys ...
	I0116 03:49:16.077224  507510 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0116 03:49:16.077305  507510 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0116 03:49:16.077410  507510 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0116 03:49:16.077504  507510 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0116 03:49:16.077617  507510 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0116 03:49:16.077745  507510 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0116 03:49:16.078085  507510 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0116 03:49:16.078639  507510 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0116 03:49:16.079112  507510 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0116 03:49:16.079719  507510 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0116 03:49:16.079935  507510 kubeadm.go:322] [certs] Using the existing "sa" key
	I0116 03:49:16.080015  507510 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0116 03:49:16.246902  507510 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0116 03:49:16.332722  507510 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0116 03:49:16.534277  507510 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0116 03:49:16.908642  507510 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0116 03:49:16.909711  507510 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0116 03:49:16.911960  507510 out.go:204]   - Booting up control plane ...
	I0116 03:49:16.912103  507510 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0116 03:49:16.923200  507510 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0116 03:49:16.924797  507510 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0116 03:49:16.926738  507510 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0116 03:49:16.937544  507510 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0116 03:49:27.943253  507510 kubeadm.go:322] [apiclient] All control plane components are healthy after 11.005405 seconds
	I0116 03:49:27.943474  507510 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0116 03:49:27.970644  507510 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I0116 03:49:28.500660  507510 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0116 03:49:28.500847  507510 kubeadm.go:322] [mark-control-plane] Marking the node old-k8s-version-696770 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0116 03:49:29.015036  507510 kubeadm.go:322] [bootstrap-token] Using token: nr2yh0.22ni19zxk2s7hw9l
	I0116 03:49:28.504409  507257 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (14.644866985s)
	I0116 03:49:28.504498  507257 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 03:49:28.519788  507257 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0116 03:49:28.531667  507257 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0116 03:49:28.543058  507257 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0116 03:49:28.543113  507257 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0116 03:49:28.603369  507257 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0116 03:49:28.603521  507257 kubeadm.go:322] [preflight] Running pre-flight checks
	I0116 03:49:28.784258  507257 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0116 03:49:28.784384  507257 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0116 03:49:28.784491  507257 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0116 03:49:29.068390  507257 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0116 03:49:29.017077  507510 out.go:204]   - Configuring RBAC rules ...
	I0116 03:49:29.017276  507510 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0116 03:49:29.044200  507510 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0116 03:49:29.049807  507510 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0116 03:49:29.054441  507510 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0116 03:49:29.057939  507510 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0116 03:49:29.142810  507510 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0116 03:49:29.439580  507510 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0116 03:49:29.441665  507510 kubeadm.go:322] 
	I0116 03:49:29.441736  507510 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0116 03:49:29.441741  507510 kubeadm.go:322] 
	I0116 03:49:29.441863  507510 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0116 03:49:29.441898  507510 kubeadm.go:322] 
	I0116 03:49:29.441932  507510 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0116 03:49:29.441999  507510 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0116 03:49:29.442057  507510 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0116 03:49:29.442099  507510 kubeadm.go:322] 
	I0116 03:49:29.442200  507510 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0116 03:49:29.442306  507510 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0116 03:49:29.442414  507510 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0116 03:49:29.442429  507510 kubeadm.go:322] 
	I0116 03:49:29.442566  507510 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities 
	I0116 03:49:29.442689  507510 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0116 03:49:29.442701  507510 kubeadm.go:322] 
	I0116 03:49:29.442813  507510 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token nr2yh0.22ni19zxk2s7hw9l \
	I0116 03:49:29.442967  507510 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:ab75761e1119a1709a216b682cd7b1dcce0641115df5f236b54b16e4f66aa044 \
	I0116 03:49:29.443008  507510 kubeadm.go:322]     --control-plane 	  
	I0116 03:49:29.443024  507510 kubeadm.go:322] 
	I0116 03:49:29.443147  507510 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0116 03:49:29.443159  507510 kubeadm.go:322] 
	I0116 03:49:29.443285  507510 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token nr2yh0.22ni19zxk2s7hw9l \
	I0116 03:49:29.443414  507510 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:ab75761e1119a1709a216b682cd7b1dcce0641115df5f236b54b16e4f66aa044 
	I0116 03:49:29.444142  507510 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
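The only warning kubeadm leaves behind is the disabled kubelet unit; the remedy it suggests in that message is a one-liner on the node:

    sudo systemctl enable kubelet.service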
	I0116 03:49:29.444278  507510 cni.go:84] Creating CNI manager for ""
	I0116 03:49:29.444302  507510 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 03:49:29.446569  507510 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0116 03:49:29.447957  507510 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0116 03:49:29.457418  507510 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
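The 457-byte file pushed above is minikube's bridge CNI config; its exact contents are not shown in the log. Purely as an illustration of the shape such a conflist takes (field values here are assumptions, not the file minikube wrote), a bridge-plus-portmap chain for CRI-O looks roughly like:

    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }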
	I0116 03:49:29.478015  507510 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0116 03:49:29.478130  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:29.478135  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=6e8fa5f64d0e7272be43ff25ed3826261f0a2578 minikube.k8s.io/name=old-k8s-version-696770 minikube.k8s.io/updated_at=2024_01_16T03_49_29_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:29.070681  507257 out.go:204]   - Generating certificates and keys ...
	I0116 03:49:29.070805  507257 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0116 03:49:29.070882  507257 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0116 03:49:29.071007  507257 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0116 03:49:29.071108  507257 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0116 03:49:29.071243  507257 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0116 03:49:29.071320  507257 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0116 03:49:29.071422  507257 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0116 03:49:29.071497  507257 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0116 03:49:29.071928  507257 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0116 03:49:29.074454  507257 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0116 03:49:29.076202  507257 kubeadm.go:322] [certs] Using the existing "sa" key
	I0116 03:49:29.076435  507257 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0116 03:49:29.360527  507257 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0116 03:49:29.779361  507257 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0116 03:49:29.976749  507257 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0116 03:49:30.075605  507257 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0116 03:49:30.076375  507257 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0116 03:49:30.079235  507257 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0116 03:49:30.081497  507257 out.go:204]   - Booting up control plane ...
	I0116 03:49:30.081645  507257 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0116 03:49:30.082340  507257 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0116 03:49:30.083349  507257 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0116 03:49:30.103660  507257 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0116 03:49:30.104863  507257 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0116 03:49:30.104924  507257 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0116 03:49:30.229980  507257 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0116 03:49:29.724417  507510 ops.go:34] apiserver oom_adj: -16
	I0116 03:49:29.724549  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:30.224988  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:30.725451  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:31.225287  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:31.724689  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:32.224984  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:32.724769  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:33.225547  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:33.724874  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:34.225301  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:34.725134  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:35.224977  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:35.724998  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:36.225495  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:36.725043  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:37.224700  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:37.725397  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:38.225311  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:38.725308  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:39.224885  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:38.732431  507257 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.502537 seconds
	I0116 03:49:38.732591  507257 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0116 03:49:38.766319  507257 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0116 03:49:39.312926  507257 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0116 03:49:39.313225  507257 kubeadm.go:322] [mark-control-plane] Marking the node embed-certs-615980 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0116 03:49:39.836927  507257 kubeadm.go:322] [bootstrap-token] Using token: 8bzdm1.4lwyoxck7xjn6vqr
	I0116 03:49:39.838931  507257 out.go:204]   - Configuring RBAC rules ...
	I0116 03:49:39.839093  507257 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0116 03:49:39.850909  507257 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0116 03:49:39.873417  507257 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0116 03:49:39.879093  507257 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0116 03:49:39.883914  507257 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0116 03:49:39.889130  507257 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0116 03:49:39.910444  507257 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0116 03:49:40.235572  507257 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0116 03:49:40.334951  507257 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0116 03:49:40.335000  507257 kubeadm.go:322] 
	I0116 03:49:40.335092  507257 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0116 03:49:40.335103  507257 kubeadm.go:322] 
	I0116 03:49:40.335212  507257 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0116 03:49:40.335222  507257 kubeadm.go:322] 
	I0116 03:49:40.335266  507257 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0116 03:49:40.335353  507257 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0116 03:49:40.335421  507257 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0116 03:49:40.335430  507257 kubeadm.go:322] 
	I0116 03:49:40.335504  507257 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0116 03:49:40.335513  507257 kubeadm.go:322] 
	I0116 03:49:40.335598  507257 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0116 03:49:40.335618  507257 kubeadm.go:322] 
	I0116 03:49:40.335690  507257 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0116 03:49:40.335793  507257 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0116 03:49:40.335891  507257 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0116 03:49:40.335904  507257 kubeadm.go:322] 
	I0116 03:49:40.336008  507257 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0116 03:49:40.336128  507257 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0116 03:49:40.336143  507257 kubeadm.go:322] 
	I0116 03:49:40.336262  507257 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 8bzdm1.4lwyoxck7xjn6vqr \
	I0116 03:49:40.336427  507257 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:ab75761e1119a1709a216b682cd7b1dcce0641115df5f236b54b16e4f66aa044 \
	I0116 03:49:40.336456  507257 kubeadm.go:322] 	--control-plane 
	I0116 03:49:40.336463  507257 kubeadm.go:322] 
	I0116 03:49:40.336594  507257 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0116 03:49:40.336611  507257 kubeadm.go:322] 
	I0116 03:49:40.336744  507257 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 8bzdm1.4lwyoxck7xjn6vqr \
	I0116 03:49:40.336876  507257 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:ab75761e1119a1709a216b682cd7b1dcce0641115df5f236b54b16e4f66aa044 
	I0116 03:49:40.337377  507257 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0116 03:49:40.337421  507257 cni.go:84] Creating CNI manager for ""
	I0116 03:49:40.337432  507257 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 03:49:40.340415  507257 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0116 03:49:40.341952  507257 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0116 03:49:40.376620  507257 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0116 03:49:40.459091  507257 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0116 03:49:40.459177  507257 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:40.459233  507257 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=6e8fa5f64d0e7272be43ff25ed3826261f0a2578 minikube.k8s.io/name=embed-certs-615980 minikube.k8s.io/updated_at=2024_01_16T03_49_40_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:40.524693  507257 ops.go:34] apiserver oom_adj: -16
	I0116 03:49:40.917890  507257 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:39.725272  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:40.225380  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:40.725272  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:41.225258  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:41.725525  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:42.225270  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:42.725463  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:43.224674  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:43.724904  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:44.224946  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:44.725197  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:44.843354  507510 kubeadm.go:1088] duration metric: took 15.365308355s to wait for elevateKubeSystemPrivileges.
	I0116 03:49:44.843465  507510 kubeadm.go:406] StartCluster complete in 5m48.250275121s
	I0116 03:49:44.843545  507510 settings.go:142] acquiring lock: {Name:mk3ded99c06d285b174e76622bb0741c8dd2d8fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:49:44.843708  507510 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17965-468241/kubeconfig
	I0116 03:49:44.846444  507510 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17965-468241/kubeconfig: {Name:mkac3c63bbcba9b4fa1bea3480797757d703fb9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:49:44.846814  507510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0116 03:49:44.846959  507510 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0116 03:49:44.847043  507510 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-696770"
	I0116 03:49:44.847067  507510 addons.go:234] Setting addon storage-provisioner=true in "old-k8s-version-696770"
	I0116 03:49:44.847065  507510 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-696770"
	W0116 03:49:44.847076  507510 addons.go:243] addon storage-provisioner should already be in state true
	I0116 03:49:44.847079  507510 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-696770"
	I0116 03:49:44.847099  507510 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-696770"
	I0116 03:49:44.847108  507510 addons.go:234] Setting addon metrics-server=true in "old-k8s-version-696770"
	W0116 03:49:44.847130  507510 addons.go:243] addon metrics-server should already be in state true
	I0116 03:49:44.847152  507510 host.go:66] Checking if "old-k8s-version-696770" exists ...
	I0116 03:49:44.847087  507510 config.go:182] Loaded profile config "old-k8s-version-696770": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0116 03:49:44.847178  507510 host.go:66] Checking if "old-k8s-version-696770" exists ...
	I0116 03:49:44.847548  507510 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:49:44.847568  507510 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:49:44.847579  507510 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:49:44.847594  507510 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:49:44.847605  507510 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:49:44.847632  507510 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:49:44.865585  507510 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36045
	I0116 03:49:44.865597  507510 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45289
	I0116 03:49:44.865592  507510 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39579
	I0116 03:49:44.866119  507510 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:49:44.866200  507510 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:49:44.866352  507510 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:49:44.867018  507510 main.go:141] libmachine: Using API Version  1
	I0116 03:49:44.867040  507510 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:49:44.867043  507510 main.go:141] libmachine: Using API Version  1
	I0116 03:49:44.867051  507510 main.go:141] libmachine: Using API Version  1
	I0116 03:49:44.867071  507510 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:49:44.867091  507510 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:49:44.867481  507510 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:49:44.867557  507510 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:49:44.867711  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetState
	I0116 03:49:44.867929  507510 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:49:44.868184  507510 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:49:44.868215  507510 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:49:44.868486  507510 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:49:44.868519  507510 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:49:44.872747  507510 addons.go:234] Setting addon default-storageclass=true in "old-k8s-version-696770"
	W0116 03:49:44.872781  507510 addons.go:243] addon default-storageclass should already be in state true
	I0116 03:49:44.872816  507510 host.go:66] Checking if "old-k8s-version-696770" exists ...
	I0116 03:49:44.873264  507510 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:49:44.873308  507510 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:49:44.888049  507510 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45943
	I0116 03:49:44.890481  507510 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37259
	I0116 03:49:44.890990  507510 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:49:44.891285  507510 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:49:44.891567  507510 main.go:141] libmachine: Using API Version  1
	I0116 03:49:44.891582  507510 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:49:44.891846  507510 main.go:141] libmachine: Using API Version  1
	I0116 03:49:44.891865  507510 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:49:44.892307  507510 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:49:44.892510  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetState
	I0116 03:49:44.892575  507510 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:49:44.892760  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetState
	I0116 03:49:44.894812  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .DriverName
	I0116 03:49:44.895060  507510 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37701
	I0116 03:49:44.896571  507510 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0116 03:49:44.895272  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .DriverName
	I0116 03:49:44.895678  507510 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:49:44.898051  507510 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0116 03:49:44.898074  507510 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0116 03:49:44.899552  507510 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 03:49:44.897299  507510 main.go:141] libmachine: Using API Version  1
	I0116 03:49:44.898096  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHHostname
	I0116 03:49:44.901091  507510 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:49:44.901216  507510 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0116 03:49:44.901234  507510 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0116 03:49:44.901256  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHHostname
	I0116 03:49:44.902226  507510 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:49:44.902866  507510 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:49:44.902908  507510 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:49:44.905915  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:49:44.906022  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:49:44.906456  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:20:1a", ip: ""} in network mk-old-k8s-version-696770: {Iface:virbr3 ExpiryTime:2024-01-16 04:43:38 +0000 UTC Type:0 Mac:52:54:00:37:20:1a Iaid: IPaddr:192.168.61.167 Prefix:24 Hostname:old-k8s-version-696770 Clientid:01:52:54:00:37:20:1a}
	I0116 03:49:44.906482  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined IP address 192.168.61.167 and MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:49:44.906775  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHPort
	I0116 03:49:44.906851  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:20:1a", ip: ""} in network mk-old-k8s-version-696770: {Iface:virbr3 ExpiryTime:2024-01-16 04:43:38 +0000 UTC Type:0 Mac:52:54:00:37:20:1a Iaid: IPaddr:192.168.61.167 Prefix:24 Hostname:old-k8s-version-696770 Clientid:01:52:54:00:37:20:1a}
	I0116 03:49:44.906892  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHPort
	I0116 03:49:44.906941  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined IP address 192.168.61.167 and MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:49:44.907116  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHKeyPath
	I0116 03:49:44.907254  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHKeyPath
	I0116 03:49:44.907324  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHUsername
	I0116 03:49:44.907416  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHUsername
	I0116 03:49:44.907471  507510 sshutil.go:53] new ssh client: &{IP:192.168.61.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/old-k8s-version-696770/id_rsa Username:docker}
	I0116 03:49:44.908078  507510 sshutil.go:53] new ssh client: &{IP:192.168.61.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/old-k8s-version-696770/id_rsa Username:docker}
	I0116 03:49:44.925689  507510 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38987
	I0116 03:49:44.926190  507510 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:49:44.926847  507510 main.go:141] libmachine: Using API Version  1
	I0116 03:49:44.926870  507510 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:49:44.927322  507510 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:49:44.927545  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetState
	I0116 03:49:44.929553  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .DriverName
	I0116 03:49:44.930008  507510 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0116 03:49:44.930027  507510 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0116 03:49:44.930049  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHHostname
	I0116 03:49:44.933353  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:49:44.933768  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:20:1a", ip: ""} in network mk-old-k8s-version-696770: {Iface:virbr3 ExpiryTime:2024-01-16 04:43:38 +0000 UTC Type:0 Mac:52:54:00:37:20:1a Iaid: IPaddr:192.168.61.167 Prefix:24 Hostname:old-k8s-version-696770 Clientid:01:52:54:00:37:20:1a}
	I0116 03:49:44.933799  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined IP address 192.168.61.167 and MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:49:44.933975  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHPort
	I0116 03:49:44.934184  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHKeyPath
	I0116 03:49:44.934277  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHUsername
	I0116 03:49:44.934374  507510 sshutil.go:53] new ssh client: &{IP:192.168.61.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/old-k8s-version-696770/id_rsa Username:docker}
	I0116 03:49:45.044743  507510 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0116 03:49:45.073179  507510 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0116 03:49:45.073426  507510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
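The pipeline just started rewrites the coredns ConfigMap in place: the first sed expression inserts a hosts stanza ahead of the `forward . /etc/resolv.conf` directive, the second inserts a `log` directive ahead of `errors`, and the result is fed back through `kubectl replace -f -`. Reconstructed from those sed expressions, the fragment added to the Corefile is:

        hosts {
           192.168.61.1 host.minikube.internal
           fallthrough
        }

This is what the later "host record injected into CoreDNS's ConfigMap" line refers to.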
	I0116 03:49:45.095360  507510 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0116 03:49:45.095383  507510 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0116 03:49:45.162632  507510 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0116 03:49:45.162661  507510 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0116 03:49:45.252628  507510 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0116 03:49:45.252665  507510 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0116 03:49:45.325535  507510 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
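Once the storage-provisioner, storageclass, and four metrics-server manifests have been applied, the resulting objects can be inspected with the same kubeconfig and kubectl binary the log uses. The commands below are an illustrative follow-up; the APIService name v1beta1.metrics.k8s.io is an assumption (the name metrics-server usually registers), not something shown in the log:

    sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl -n kube-system get deployment metrics-server
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl get apiservice v1beta1.metrics.k8s.io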
	I0116 03:49:45.533499  507510 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-696770" context rescaled to 1 replicas
	I0116 03:49:45.533553  507510 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.167 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0116 03:49:45.536655  507510 out.go:177] * Verifying Kubernetes components...
	I0116 03:49:41.418664  507257 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:41.918459  507257 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:42.418296  507257 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:42.918119  507257 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:43.418565  507257 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:43.918746  507257 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:44.418812  507257 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:44.918603  507257 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:45.418865  507257 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:45.918104  507257 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:45.538565  507510 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 03:49:46.390448  507510 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.3456663s)
	I0116 03:49:46.390513  507510 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.31729292s)
	I0116 03:49:46.390536  507510 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.317072847s)
	I0116 03:49:46.390556  507510 main.go:141] libmachine: Making call to close driver server
	I0116 03:49:46.390520  507510 main.go:141] libmachine: Making call to close driver server
	I0116 03:49:46.390573  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .Close
	I0116 03:49:46.390595  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .Close
	I0116 03:49:46.390559  507510 start.go:929] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I0116 03:49:46.391000  507510 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:49:46.391023  507510 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:49:46.391035  507510 main.go:141] libmachine: Making call to close driver server
	I0116 03:49:46.391040  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | Closing plugin on server side
	I0116 03:49:46.391006  507510 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:49:46.391059  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | Closing plugin on server side
	I0116 03:49:46.391062  507510 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:49:46.391044  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .Close
	I0116 03:49:46.391075  507510 main.go:141] libmachine: Making call to close driver server
	I0116 03:49:46.391083  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .Close
	I0116 03:49:46.391314  507510 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:49:46.391332  507510 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:49:46.391594  507510 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:49:46.391625  507510 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:49:46.465666  507510 main.go:141] libmachine: Making call to close driver server
	I0116 03:49:46.465688  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .Close
	I0116 03:49:46.466107  507510 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:49:46.466127  507510 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:49:46.597926  507510 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.05930194s)
	I0116 03:49:46.597988  507510 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-696770" to be "Ready" ...
	I0116 03:49:46.597925  507510 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.272324444s)
	I0116 03:49:46.598099  507510 main.go:141] libmachine: Making call to close driver server
	I0116 03:49:46.598123  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .Close
	I0116 03:49:46.598503  507510 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:49:46.598527  507510 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:49:46.598531  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | Closing plugin on server side
	I0116 03:49:46.598539  507510 main.go:141] libmachine: Making call to close driver server
	I0116 03:49:46.598549  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .Close
	I0116 03:49:46.598884  507510 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:49:46.598903  507510 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:49:46.598917  507510 addons.go:470] Verifying addon metrics-server=true in "old-k8s-version-696770"
	I0116 03:49:46.600845  507510 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0116 03:49:46.602484  507510 addons.go:505] enable addons completed in 1.755527621s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0116 03:49:46.612929  507510 node_ready.go:49] node "old-k8s-version-696770" has status "Ready":"True"
	I0116 03:49:46.612962  507510 node_ready.go:38] duration metric: took 14.959317ms waiting for node "old-k8s-version-696770" to be "Ready" ...
	I0116 03:49:46.612975  507510 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 03:49:46.616466  507510 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-h85tj" in "kube-system" namespace to be "Ready" ...
	I0116 03:49:48.628130  507510 pod_ready.go:102] pod "coredns-5644d7b6d9-h85tj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:49:46.418268  507257 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:46.917976  507257 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:47.418645  507257 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:47.917927  507257 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:48.417920  507257 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:48.917939  507257 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:49.418387  507257 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:49.918203  507257 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:50.417930  507257 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:50.918518  507257 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:51.418036  507257 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:51.917981  507257 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:52.418293  507257 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:52.635961  507257 kubeadm.go:1088] duration metric: took 12.176857981s to wait for elevateKubeSystemPrivileges.
	I0116 03:49:52.636014  507257 kubeadm.go:406] StartCluster complete in 5m10.892359223s
	I0116 03:49:52.636054  507257 settings.go:142] acquiring lock: {Name:mk3ded99c06d285b174e76622bb0741c8dd2d8fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:49:52.636186  507257 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17965-468241/kubeconfig
	I0116 03:49:52.638885  507257 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17965-468241/kubeconfig: {Name:mkac3c63bbcba9b4fa1bea3480797757d703fb9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:49:52.639229  507257 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0116 03:49:52.639345  507257 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0116 03:49:52.639439  507257 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-615980"
	I0116 03:49:52.639461  507257 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-615980"
	I0116 03:49:52.639458  507257 addons.go:69] Setting default-storageclass=true in profile "embed-certs-615980"
	W0116 03:49:52.639469  507257 addons.go:243] addon storage-provisioner should already be in state true
	I0116 03:49:52.639482  507257 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-615980"
	I0116 03:49:52.639504  507257 config.go:182] Loaded profile config "embed-certs-615980": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 03:49:52.639541  507257 host.go:66] Checking if "embed-certs-615980" exists ...
	I0116 03:49:52.639562  507257 addons.go:69] Setting metrics-server=true in profile "embed-certs-615980"
	I0116 03:49:52.639579  507257 addons.go:234] Setting addon metrics-server=true in "embed-certs-615980"
	W0116 03:49:52.639591  507257 addons.go:243] addon metrics-server should already be in state true
	I0116 03:49:52.639639  507257 host.go:66] Checking if "embed-certs-615980" exists ...
	I0116 03:49:52.639965  507257 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:49:52.639984  507257 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:49:52.640007  507257 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:49:52.640023  507257 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:49:52.640084  507257 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:49:52.640118  507257 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:49:52.660468  507257 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36595
	I0116 03:49:52.660653  507257 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44367
	I0116 03:49:52.661058  507257 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:49:52.661184  507257 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:49:52.661685  507257 main.go:141] libmachine: Using API Version  1
	I0116 03:49:52.661709  507257 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:49:52.661768  507257 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40717
	I0116 03:49:52.661855  507257 main.go:141] libmachine: Using API Version  1
	I0116 03:49:52.661871  507257 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:49:52.662141  507257 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:49:52.662207  507257 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:49:52.662425  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetState
	I0116 03:49:52.662480  507257 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:49:52.662858  507257 main.go:141] libmachine: Using API Version  1
	I0116 03:49:52.662875  507257 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:49:52.663301  507257 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:49:52.663337  507257 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:49:52.663413  507257 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:49:52.663956  507257 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:49:52.663985  507257 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:49:52.666163  507257 addons.go:234] Setting addon default-storageclass=true in "embed-certs-615980"
	W0116 03:49:52.666190  507257 addons.go:243] addon default-storageclass should already be in state true
	I0116 03:49:52.666224  507257 host.go:66] Checking if "embed-certs-615980" exists ...
	I0116 03:49:52.666630  507257 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:49:52.666672  507257 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:49:52.682228  507257 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37799
	I0116 03:49:52.682743  507257 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:49:52.683402  507257 main.go:141] libmachine: Using API Version  1
	I0116 03:49:52.683425  507257 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:49:52.683719  507257 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36773
	I0116 03:49:52.683893  507257 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:49:52.684125  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetState
	I0116 03:49:52.684589  507257 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:49:52.685108  507257 main.go:141] libmachine: Using API Version  1
	I0116 03:49:52.685128  507257 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:49:52.685607  507257 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:49:52.685627  507257 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42767
	I0116 03:49:52.686073  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetState
	I0116 03:49:52.686329  507257 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:49:52.686781  507257 main.go:141] libmachine: Using API Version  1
	I0116 03:49:52.686804  507257 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:49:52.687167  507257 main.go:141] libmachine: (embed-certs-615980) Calling .DriverName
	I0116 03:49:52.687213  507257 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:49:52.689840  507257 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 03:49:52.687751  507257 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:49:52.689319  507257 main.go:141] libmachine: (embed-certs-615980) Calling .DriverName
	I0116 03:49:52.691584  507257 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0116 03:49:52.691595  507257 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0116 03:49:52.691610  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHHostname
	I0116 03:49:52.691655  507257 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:49:52.693170  507257 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0116 03:49:52.694465  507257 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0116 03:49:52.694478  507257 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0116 03:49:52.694495  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHHostname
	I0116 03:49:52.705398  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:49:52.705440  507257 main.go:141] libmachine: (embed-certs-615980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:a6:40", ip: ""} in network mk-embed-certs-615980: {Iface:virbr4 ExpiryTime:2024-01-16 04:44:24 +0000 UTC Type:0 Mac:52:54:00:d4:a6:40 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-615980 Clientid:01:52:54:00:d4:a6:40}
	I0116 03:49:52.705440  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:49:52.705469  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHPort
	I0116 03:49:52.705475  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined IP address 192.168.72.159 and MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:49:52.705501  507257 main.go:141] libmachine: (embed-certs-615980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:a6:40", ip: ""} in network mk-embed-certs-615980: {Iface:virbr4 ExpiryTime:2024-01-16 04:44:24 +0000 UTC Type:0 Mac:52:54:00:d4:a6:40 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-615980 Clientid:01:52:54:00:d4:a6:40}
	I0116 03:49:52.705516  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined IP address 192.168.72.159 and MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:49:52.705403  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHPort
	I0116 03:49:52.705751  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHKeyPath
	I0116 03:49:52.705813  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHKeyPath
	I0116 03:49:52.705956  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHUsername
	I0116 03:49:52.706078  507257 sshutil.go:53] new ssh client: &{IP:192.168.72.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/embed-certs-615980/id_rsa Username:docker}
	I0116 03:49:52.706839  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHUsername
	I0116 03:49:52.707045  507257 sshutil.go:53] new ssh client: &{IP:192.168.72.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/embed-certs-615980/id_rsa Username:docker}
	I0116 03:49:52.713247  507257 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33775
	I0116 03:49:52.714047  507257 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:49:52.714725  507257 main.go:141] libmachine: Using API Version  1
	I0116 03:49:52.714742  507257 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:49:52.715212  507257 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:49:52.715442  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetState
	I0116 03:49:52.717568  507257 main.go:141] libmachine: (embed-certs-615980) Calling .DriverName
	I0116 03:49:52.717813  507257 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0116 03:49:52.717824  507257 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0116 03:49:52.717839  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHHostname
	I0116 03:49:52.720720  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:49:52.721189  507257 main.go:141] libmachine: (embed-certs-615980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:a6:40", ip: ""} in network mk-embed-certs-615980: {Iface:virbr4 ExpiryTime:2024-01-16 04:44:24 +0000 UTC Type:0 Mac:52:54:00:d4:a6:40 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-615980 Clientid:01:52:54:00:d4:a6:40}
	I0116 03:49:52.721205  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined IP address 192.168.72.159 and MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:49:52.721414  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHPort
	I0116 03:49:52.721573  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHKeyPath
	I0116 03:49:52.721724  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHUsername
	I0116 03:49:52.721814  507257 sshutil.go:53] new ssh client: &{IP:192.168.72.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/embed-certs-615980/id_rsa Username:docker}
	I0116 03:49:52.899474  507257 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0116 03:49:52.971597  507257 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0116 03:49:52.971623  507257 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0116 03:49:52.971955  507257 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0116 03:49:53.029724  507257 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0116 03:49:53.051410  507257 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0116 03:49:53.051439  507257 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0116 03:49:53.121058  507257 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0116 03:49:53.121088  507257 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0116 03:49:53.179049  507257 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-615980" context rescaled to 1 replicas
	I0116 03:49:53.179098  507257 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.159 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0116 03:49:53.181191  507257 out.go:177] * Verifying Kubernetes components...
	I0116 03:49:50.633148  507510 pod_ready.go:92] pod "coredns-5644d7b6d9-h85tj" in "kube-system" namespace has status "Ready":"True"
	I0116 03:49:50.633179  507510 pod_ready.go:81] duration metric: took 4.016682348s waiting for pod "coredns-5644d7b6d9-h85tj" in "kube-system" namespace to be "Ready" ...
	I0116 03:49:50.633194  507510 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rc8xt" in "kube-system" namespace to be "Ready" ...
	I0116 03:49:50.648707  507510 pod_ready.go:92] pod "kube-proxy-rc8xt" in "kube-system" namespace has status "Ready":"True"
	I0116 03:49:50.648737  507510 pod_ready.go:81] duration metric: took 15.535257ms waiting for pod "kube-proxy-rc8xt" in "kube-system" namespace to be "Ready" ...
	I0116 03:49:50.648752  507510 pod_ready.go:38] duration metric: took 4.035762868s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 03:49:50.648770  507510 api_server.go:52] waiting for apiserver process to appear ...
	I0116 03:49:50.648842  507510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:49:50.665917  507510 api_server.go:72] duration metric: took 5.1323051s to wait for apiserver process to appear ...
	I0116 03:49:50.665954  507510 api_server.go:88] waiting for apiserver healthz status ...
	I0116 03:49:50.665982  507510 api_server.go:253] Checking apiserver healthz at https://192.168.61.167:8443/healthz ...
	I0116 03:49:50.672790  507510 api_server.go:279] https://192.168.61.167:8443/healthz returned 200:
	ok
	I0116 03:49:50.674024  507510 api_server.go:141] control plane version: v1.16.0
	I0116 03:49:50.674059  507510 api_server.go:131] duration metric: took 8.096153ms to wait for apiserver health ...
	I0116 03:49:50.674071  507510 system_pods.go:43] waiting for kube-system pods to appear ...
	I0116 03:49:50.677835  507510 system_pods.go:59] 4 kube-system pods found
	I0116 03:49:50.677871  507510 system_pods.go:61] "coredns-5644d7b6d9-h85tj" [6ad3270c-e6c5-4ced-800f-f6e7960097ac] Running
	I0116 03:49:50.677878  507510 system_pods.go:61] "kube-proxy-rc8xt" [433f07f2-79e8-48f6-945a-af3dc0060920] Running
	I0116 03:49:50.677887  507510 system_pods.go:61] "metrics-server-74d5856cc6-stvzf" [92ed6941-1071-4757-9279-144187442f64] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:49:50.677894  507510 system_pods.go:61] "storage-provisioner" [c45ac226-3063-4d53-8a3a-dccca6e8cade] Running
	I0116 03:49:50.677905  507510 system_pods.go:74] duration metric: took 3.826308ms to wait for pod list to return data ...
	I0116 03:49:50.677914  507510 default_sa.go:34] waiting for default service account to be created ...
	I0116 03:49:50.680932  507510 default_sa.go:45] found service account: "default"
	I0116 03:49:50.680964  507510 default_sa.go:55] duration metric: took 3.041693ms for default service account to be created ...
	I0116 03:49:50.680975  507510 system_pods.go:116] waiting for k8s-apps to be running ...
	I0116 03:49:50.684730  507510 system_pods.go:86] 4 kube-system pods found
	I0116 03:49:50.684759  507510 system_pods.go:89] "coredns-5644d7b6d9-h85tj" [6ad3270c-e6c5-4ced-800f-f6e7960097ac] Running
	I0116 03:49:50.684767  507510 system_pods.go:89] "kube-proxy-rc8xt" [433f07f2-79e8-48f6-945a-af3dc0060920] Running
	I0116 03:49:50.684778  507510 system_pods.go:89] "metrics-server-74d5856cc6-stvzf" [92ed6941-1071-4757-9279-144187442f64] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:49:50.684785  507510 system_pods.go:89] "storage-provisioner" [c45ac226-3063-4d53-8a3a-dccca6e8cade] Running
	I0116 03:49:50.684811  507510 retry.go:31] will retry after 238.551043ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0116 03:49:50.928725  507510 system_pods.go:86] 4 kube-system pods found
	I0116 03:49:50.928761  507510 system_pods.go:89] "coredns-5644d7b6d9-h85tj" [6ad3270c-e6c5-4ced-800f-f6e7960097ac] Running
	I0116 03:49:50.928768  507510 system_pods.go:89] "kube-proxy-rc8xt" [433f07f2-79e8-48f6-945a-af3dc0060920] Running
	I0116 03:49:50.928779  507510 system_pods.go:89] "metrics-server-74d5856cc6-stvzf" [92ed6941-1071-4757-9279-144187442f64] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:49:50.928786  507510 system_pods.go:89] "storage-provisioner" [c45ac226-3063-4d53-8a3a-dccca6e8cade] Running
	I0116 03:49:50.928816  507510 retry.go:31] will retry after 246.771125ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0116 03:49:51.180688  507510 system_pods.go:86] 4 kube-system pods found
	I0116 03:49:51.180727  507510 system_pods.go:89] "coredns-5644d7b6d9-h85tj" [6ad3270c-e6c5-4ced-800f-f6e7960097ac] Running
	I0116 03:49:51.180736  507510 system_pods.go:89] "kube-proxy-rc8xt" [433f07f2-79e8-48f6-945a-af3dc0060920] Running
	I0116 03:49:51.180747  507510 system_pods.go:89] "metrics-server-74d5856cc6-stvzf" [92ed6941-1071-4757-9279-144187442f64] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:49:51.180755  507510 system_pods.go:89] "storage-provisioner" [c45ac226-3063-4d53-8a3a-dccca6e8cade] Running
	I0116 03:49:51.180780  507510 retry.go:31] will retry after 439.966453ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0116 03:49:51.625927  507510 system_pods.go:86] 4 kube-system pods found
	I0116 03:49:51.625958  507510 system_pods.go:89] "coredns-5644d7b6d9-h85tj" [6ad3270c-e6c5-4ced-800f-f6e7960097ac] Running
	I0116 03:49:51.625964  507510 system_pods.go:89] "kube-proxy-rc8xt" [433f07f2-79e8-48f6-945a-af3dc0060920] Running
	I0116 03:49:51.625970  507510 system_pods.go:89] "metrics-server-74d5856cc6-stvzf" [92ed6941-1071-4757-9279-144187442f64] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:49:51.625975  507510 system_pods.go:89] "storage-provisioner" [c45ac226-3063-4d53-8a3a-dccca6e8cade] Running
	I0116 03:49:51.626001  507510 retry.go:31] will retry after 403.213781ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0116 03:49:52.035928  507510 system_pods.go:86] 4 kube-system pods found
	I0116 03:49:52.035994  507510 system_pods.go:89] "coredns-5644d7b6d9-h85tj" [6ad3270c-e6c5-4ced-800f-f6e7960097ac] Running
	I0116 03:49:52.036003  507510 system_pods.go:89] "kube-proxy-rc8xt" [433f07f2-79e8-48f6-945a-af3dc0060920] Running
	I0116 03:49:52.036014  507510 system_pods.go:89] "metrics-server-74d5856cc6-stvzf" [92ed6941-1071-4757-9279-144187442f64] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:49:52.036022  507510 system_pods.go:89] "storage-provisioner" [c45ac226-3063-4d53-8a3a-dccca6e8cade] Running
	I0116 03:49:52.036064  507510 retry.go:31] will retry after 501.701933ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0116 03:49:52.543834  507510 system_pods.go:86] 4 kube-system pods found
	I0116 03:49:52.543874  507510 system_pods.go:89] "coredns-5644d7b6d9-h85tj" [6ad3270c-e6c5-4ced-800f-f6e7960097ac] Running
	I0116 03:49:52.543883  507510 system_pods.go:89] "kube-proxy-rc8xt" [433f07f2-79e8-48f6-945a-af3dc0060920] Running
	I0116 03:49:52.543894  507510 system_pods.go:89] "metrics-server-74d5856cc6-stvzf" [92ed6941-1071-4757-9279-144187442f64] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:49:52.543904  507510 system_pods.go:89] "storage-provisioner" [c45ac226-3063-4d53-8a3a-dccca6e8cade] Running
	I0116 03:49:52.543929  507510 retry.go:31] will retry after 898.357774ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0116 03:49:53.447323  507510 system_pods.go:86] 4 kube-system pods found
	I0116 03:49:53.447356  507510 system_pods.go:89] "coredns-5644d7b6d9-h85tj" [6ad3270c-e6c5-4ced-800f-f6e7960097ac] Running
	I0116 03:49:53.447364  507510 system_pods.go:89] "kube-proxy-rc8xt" [433f07f2-79e8-48f6-945a-af3dc0060920] Running
	I0116 03:49:53.447373  507510 system_pods.go:89] "metrics-server-74d5856cc6-stvzf" [92ed6941-1071-4757-9279-144187442f64] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:49:53.447382  507510 system_pods.go:89] "storage-provisioner" [c45ac226-3063-4d53-8a3a-dccca6e8cade] Running
	I0116 03:49:53.447405  507510 retry.go:31] will retry after 928.816907ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0116 03:49:54.382017  507510 system_pods.go:86] 4 kube-system pods found
	I0116 03:49:54.382046  507510 system_pods.go:89] "coredns-5644d7b6d9-h85tj" [6ad3270c-e6c5-4ced-800f-f6e7960097ac] Running
	I0116 03:49:54.382052  507510 system_pods.go:89] "kube-proxy-rc8xt" [433f07f2-79e8-48f6-945a-af3dc0060920] Running
	I0116 03:49:54.382058  507510 system_pods.go:89] "metrics-server-74d5856cc6-stvzf" [92ed6941-1071-4757-9279-144187442f64] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:49:54.382065  507510 system_pods.go:89] "storage-provisioner" [c45ac226-3063-4d53-8a3a-dccca6e8cade] Running
	I0116 03:49:54.382085  507510 retry.go:31] will retry after 935.220919ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0116 03:49:53.183129  507257 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 03:49:53.296441  507257 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0116 03:49:55.162183  507257 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.262649875s)
	I0116 03:49:55.162237  507257 start.go:929] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I0116 03:49:55.516930  507257 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.544937669s)
	I0116 03:49:55.516988  507257 main.go:141] libmachine: Making call to close driver server
	I0116 03:49:55.517002  507257 main.go:141] libmachine: (embed-certs-615980) Calling .Close
	I0116 03:49:55.517046  507257 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.487276988s)
	I0116 03:49:55.517101  507257 main.go:141] libmachine: Making call to close driver server
	I0116 03:49:55.517108  507257 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.333941337s)
	I0116 03:49:55.517114  507257 main.go:141] libmachine: (embed-certs-615980) Calling .Close
	I0116 03:49:55.517135  507257 node_ready.go:35] waiting up to 6m0s for node "embed-certs-615980" to be "Ready" ...
	I0116 03:49:55.517496  507257 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:49:55.517496  507257 main.go:141] libmachine: (embed-certs-615980) DBG | Closing plugin on server side
	I0116 03:49:55.517512  507257 main.go:141] libmachine: (embed-certs-615980) DBG | Closing plugin on server side
	I0116 03:49:55.517520  507257 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:49:55.517535  507257 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:49:55.517546  507257 main.go:141] libmachine: Making call to close driver server
	I0116 03:49:55.517548  507257 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:49:55.517559  507257 main.go:141] libmachine: (embed-certs-615980) Calling .Close
	I0116 03:49:55.517566  507257 main.go:141] libmachine: Making call to close driver server
	I0116 03:49:55.517577  507257 main.go:141] libmachine: (embed-certs-615980) Calling .Close
	I0116 03:49:55.517902  507257 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:49:55.517916  507257 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:49:55.517920  507257 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:49:55.517926  507257 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:49:55.537242  507257 node_ready.go:49] node "embed-certs-615980" has status "Ready":"True"
	I0116 03:49:55.537273  507257 node_ready.go:38] duration metric: took 20.119969ms waiting for node "embed-certs-615980" to be "Ready" ...
	I0116 03:49:55.537282  507257 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 03:49:55.567823  507257 main.go:141] libmachine: Making call to close driver server
	I0116 03:49:55.567859  507257 main.go:141] libmachine: (embed-certs-615980) Calling .Close
	I0116 03:49:55.568264  507257 main.go:141] libmachine: (embed-certs-615980) DBG | Closing plugin on server side
	I0116 03:49:55.568301  507257 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:49:55.568324  507257 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:49:55.571667  507257 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-hxsvz" in "kube-system" namespace to be "Ready" ...
	I0116 03:49:55.962821  507257 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.666330022s)
	I0116 03:49:55.962896  507257 main.go:141] libmachine: Making call to close driver server
	I0116 03:49:55.962915  507257 main.go:141] libmachine: (embed-certs-615980) Calling .Close
	I0116 03:49:55.963282  507257 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:49:55.963304  507257 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:49:55.963317  507257 main.go:141] libmachine: Making call to close driver server
	I0116 03:49:55.963328  507257 main.go:141] libmachine: (embed-certs-615980) Calling .Close
	I0116 03:49:55.964155  507257 main.go:141] libmachine: (embed-certs-615980) DBG | Closing plugin on server side
	I0116 03:49:55.964178  507257 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:49:55.964190  507257 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:49:55.964209  507257 addons.go:470] Verifying addon metrics-server=true in "embed-certs-615980"
	I0116 03:49:55.967489  507257 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0116 03:49:55.969099  507257 addons.go:505] enable addons completed in 3.329750862s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0116 03:49:57.085999  507257 pod_ready.go:92] pod "coredns-5dd5756b68-hxsvz" in "kube-system" namespace has status "Ready":"True"
	I0116 03:49:57.086034  507257 pod_ready.go:81] duration metric: took 1.514340062s waiting for pod "coredns-5dd5756b68-hxsvz" in "kube-system" namespace to be "Ready" ...
	I0116 03:49:57.086048  507257 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-twbhh" in "kube-system" namespace to be "Ready" ...
	I0116 03:49:57.110886  507257 pod_ready.go:92] pod "coredns-5dd5756b68-twbhh" in "kube-system" namespace has status "Ready":"True"
	I0116 03:49:57.110920  507257 pod_ready.go:81] duration metric: took 24.862165ms waiting for pod "coredns-5dd5756b68-twbhh" in "kube-system" namespace to be "Ready" ...
	I0116 03:49:57.110934  507257 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-615980" in "kube-system" namespace to be "Ready" ...
	I0116 03:49:57.122556  507257 pod_ready.go:92] pod "etcd-embed-certs-615980" in "kube-system" namespace has status "Ready":"True"
	I0116 03:49:57.122588  507257 pod_ready.go:81] duration metric: took 11.643561ms waiting for pod "etcd-embed-certs-615980" in "kube-system" namespace to be "Ready" ...
	I0116 03:49:57.122601  507257 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-615980" in "kube-system" namespace to be "Ready" ...
	I0116 03:49:57.134402  507257 pod_ready.go:92] pod "kube-apiserver-embed-certs-615980" in "kube-system" namespace has status "Ready":"True"
	I0116 03:49:57.134432  507257 pod_ready.go:81] duration metric: took 11.823016ms waiting for pod "kube-apiserver-embed-certs-615980" in "kube-system" namespace to be "Ready" ...
	I0116 03:49:57.134442  507257 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-615980" in "kube-system" namespace to be "Ready" ...
	I0116 03:49:57.152947  507257 pod_ready.go:92] pod "kube-controller-manager-embed-certs-615980" in "kube-system" namespace has status "Ready":"True"
	I0116 03:49:57.152984  507257 pod_ready.go:81] duration metric: took 18.533642ms waiting for pod "kube-controller-manager-embed-certs-615980" in "kube-system" namespace to be "Ready" ...
	I0116 03:49:57.153000  507257 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8rkb5" in "kube-system" namespace to be "Ready" ...
	I0116 03:49:57.921983  507257 pod_ready.go:92] pod "kube-proxy-8rkb5" in "kube-system" namespace has status "Ready":"True"
	I0116 03:49:57.922016  507257 pod_ready.go:81] duration metric: took 769.007434ms waiting for pod "kube-proxy-8rkb5" in "kube-system" namespace to be "Ready" ...
	I0116 03:49:57.922028  507257 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-615980" in "kube-system" namespace to be "Ready" ...
	I0116 03:49:58.322237  507257 pod_ready.go:92] pod "kube-scheduler-embed-certs-615980" in "kube-system" namespace has status "Ready":"True"
	I0116 03:49:58.322267  507257 pod_ready.go:81] duration metric: took 400.23243ms waiting for pod "kube-scheduler-embed-certs-615980" in "kube-system" namespace to be "Ready" ...
	I0116 03:49:58.322280  507257 pod_ready.go:38] duration metric: took 2.78498776s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 03:49:58.322295  507257 api_server.go:52] waiting for apiserver process to appear ...
	I0116 03:49:58.322357  507257 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:49:58.338527  507257 api_server.go:72] duration metric: took 5.159388866s to wait for apiserver process to appear ...
	I0116 03:49:58.338553  507257 api_server.go:88] waiting for apiserver healthz status ...
	I0116 03:49:58.338575  507257 api_server.go:253] Checking apiserver healthz at https://192.168.72.159:8443/healthz ...
	I0116 03:49:58.345758  507257 api_server.go:279] https://192.168.72.159:8443/healthz returned 200:
	ok
	I0116 03:49:58.347531  507257 api_server.go:141] control plane version: v1.28.4
	I0116 03:49:58.347559  507257 api_server.go:131] duration metric: took 8.999388ms to wait for apiserver health ...
	I0116 03:49:58.347573  507257 system_pods.go:43] waiting for kube-system pods to appear ...
	I0116 03:49:58.527633  507257 system_pods.go:59] 9 kube-system pods found
	I0116 03:49:58.527676  507257 system_pods.go:61] "coredns-5dd5756b68-hxsvz" [de7da02c-649b-4d29-8a89-5642105b6049] Running
	I0116 03:49:58.527685  507257 system_pods.go:61] "coredns-5dd5756b68-twbhh" [9be49c16-f213-47da-83f4-90fc392eb49e] Running
	I0116 03:49:58.527692  507257 system_pods.go:61] "etcd-embed-certs-615980" [2098148f-0cac-48ce-a607-381b13334438] Running
	I0116 03:49:58.527704  507257 system_pods.go:61] "kube-apiserver-embed-certs-615980" [3d49b47b-da34-4f4d-a8d3-758c0d28c034] Running
	I0116 03:49:58.527711  507257 system_pods.go:61] "kube-controller-manager-embed-certs-615980" [c4f7946d-907d-42ad-8e84-8fa337111688] Running
	I0116 03:49:58.527718  507257 system_pods.go:61] "kube-proxy-8rkb5" [322fae38-3b29-4135-ba3f-c0ff8bda1e4a] Running
	I0116 03:49:58.527725  507257 system_pods.go:61] "kube-scheduler-embed-certs-615980" [882f322f-8686-40a4-a613-e9855ccfb56e] Running
	I0116 03:49:58.527736  507257 system_pods.go:61] "metrics-server-57f55c9bc5-fc7tx" [14a38c13-7a9e-4548-9654-c568ede29e0f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:49:58.527748  507257 system_pods.go:61] "storage-provisioner" [1ce752ad-ce91-462e-ab2b-2af64064eb40] Running
	I0116 03:49:58.527757  507257 system_pods.go:74] duration metric: took 180.177482ms to wait for pod list to return data ...
	I0116 03:49:58.527771  507257 default_sa.go:34] waiting for default service account to be created ...
	I0116 03:49:58.721717  507257 default_sa.go:45] found service account: "default"
	I0116 03:49:58.721749  507257 default_sa.go:55] duration metric: took 193.967755ms for default service account to be created ...
	I0116 03:49:58.721758  507257 system_pods.go:116] waiting for k8s-apps to be running ...
	I0116 03:49:58.925915  507257 system_pods.go:86] 9 kube-system pods found
	I0116 03:49:58.925957  507257 system_pods.go:89] "coredns-5dd5756b68-hxsvz" [de7da02c-649b-4d29-8a89-5642105b6049] Running
	I0116 03:49:58.925964  507257 system_pods.go:89] "coredns-5dd5756b68-twbhh" [9be49c16-f213-47da-83f4-90fc392eb49e] Running
	I0116 03:49:58.925970  507257 system_pods.go:89] "etcd-embed-certs-615980" [2098148f-0cac-48ce-a607-381b13334438] Running
	I0116 03:49:58.925977  507257 system_pods.go:89] "kube-apiserver-embed-certs-615980" [3d49b47b-da34-4f4d-a8d3-758c0d28c034] Running
	I0116 03:49:58.925987  507257 system_pods.go:89] "kube-controller-manager-embed-certs-615980" [c4f7946d-907d-42ad-8e84-8fa337111688] Running
	I0116 03:49:58.925994  507257 system_pods.go:89] "kube-proxy-8rkb5" [322fae38-3b29-4135-ba3f-c0ff8bda1e4a] Running
	I0116 03:49:58.926040  507257 system_pods.go:89] "kube-scheduler-embed-certs-615980" [882f322f-8686-40a4-a613-e9855ccfb56e] Running
	I0116 03:49:58.926063  507257 system_pods.go:89] "metrics-server-57f55c9bc5-fc7tx" [14a38c13-7a9e-4548-9654-c568ede29e0f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:49:58.926070  507257 system_pods.go:89] "storage-provisioner" [1ce752ad-ce91-462e-ab2b-2af64064eb40] Running
	I0116 03:49:58.926087  507257 system_pods.go:126] duration metric: took 204.321811ms to wait for k8s-apps to be running ...
	I0116 03:49:58.926099  507257 system_svc.go:44] waiting for kubelet service to be running ....
	I0116 03:49:58.926159  507257 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 03:49:58.940982  507257 system_svc.go:56] duration metric: took 14.86844ms WaitForService to wait for kubelet.
	I0116 03:49:58.941019  507257 kubeadm.go:581] duration metric: took 5.761889406s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0116 03:49:58.941051  507257 node_conditions.go:102] verifying NodePressure condition ...
	I0116 03:49:59.121649  507257 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 03:49:59.121681  507257 node_conditions.go:123] node cpu capacity is 2
	I0116 03:49:59.121694  507257 node_conditions.go:105] duration metric: took 180.636851ms to run NodePressure ...
	I0116 03:49:59.121707  507257 start.go:228] waiting for startup goroutines ...
	I0116 03:49:59.121717  507257 start.go:233] waiting for cluster config update ...
	I0116 03:49:59.121730  507257 start.go:242] writing updated cluster config ...
	I0116 03:49:59.122058  507257 ssh_runner.go:195] Run: rm -f paused
	I0116 03:49:59.177472  507257 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I0116 03:49:59.179801  507257 out.go:177] * Done! kubectl is now configured to use "embed-certs-615980" cluster and "default" namespace by default
	I0116 03:49:55.324439  507510 system_pods.go:86] 4 kube-system pods found
	I0116 03:49:55.324471  507510 system_pods.go:89] "coredns-5644d7b6d9-h85tj" [6ad3270c-e6c5-4ced-800f-f6e7960097ac] Running
	I0116 03:49:55.324477  507510 system_pods.go:89] "kube-proxy-rc8xt" [433f07f2-79e8-48f6-945a-af3dc0060920] Running
	I0116 03:49:55.324484  507510 system_pods.go:89] "metrics-server-74d5856cc6-stvzf" [92ed6941-1071-4757-9279-144187442f64] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:49:55.324489  507510 system_pods.go:89] "storage-provisioner" [c45ac226-3063-4d53-8a3a-dccca6e8cade] Running
	I0116 03:49:55.324509  507510 retry.go:31] will retry after 1.168298317s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0116 03:49:56.500050  507510 system_pods.go:86] 4 kube-system pods found
	I0116 03:49:56.500090  507510 system_pods.go:89] "coredns-5644d7b6d9-h85tj" [6ad3270c-e6c5-4ced-800f-f6e7960097ac] Running
	I0116 03:49:56.500098  507510 system_pods.go:89] "kube-proxy-rc8xt" [433f07f2-79e8-48f6-945a-af3dc0060920] Running
	I0116 03:49:56.500111  507510 system_pods.go:89] "metrics-server-74d5856cc6-stvzf" [92ed6941-1071-4757-9279-144187442f64] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:49:56.500118  507510 system_pods.go:89] "storage-provisioner" [c45ac226-3063-4d53-8a3a-dccca6e8cade] Running
	I0116 03:49:56.500142  507510 retry.go:31] will retry after 1.453657977s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0116 03:49:57.961220  507510 system_pods.go:86] 4 kube-system pods found
	I0116 03:49:57.961248  507510 system_pods.go:89] "coredns-5644d7b6d9-h85tj" [6ad3270c-e6c5-4ced-800f-f6e7960097ac] Running
	I0116 03:49:57.961254  507510 system_pods.go:89] "kube-proxy-rc8xt" [433f07f2-79e8-48f6-945a-af3dc0060920] Running
	I0116 03:49:57.961261  507510 system_pods.go:89] "metrics-server-74d5856cc6-stvzf" [92ed6941-1071-4757-9279-144187442f64] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:49:57.961266  507510 system_pods.go:89] "storage-provisioner" [c45ac226-3063-4d53-8a3a-dccca6e8cade] Running
	I0116 03:49:57.961286  507510 retry.go:31] will retry after 1.763969687s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0116 03:49:59.731086  507510 system_pods.go:86] 4 kube-system pods found
	I0116 03:49:59.731112  507510 system_pods.go:89] "coredns-5644d7b6d9-h85tj" [6ad3270c-e6c5-4ced-800f-f6e7960097ac] Running
	I0116 03:49:59.731117  507510 system_pods.go:89] "kube-proxy-rc8xt" [433f07f2-79e8-48f6-945a-af3dc0060920] Running
	I0116 03:49:59.731123  507510 system_pods.go:89] "metrics-server-74d5856cc6-stvzf" [92ed6941-1071-4757-9279-144187442f64] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:49:59.731129  507510 system_pods.go:89] "storage-provisioner" [c45ac226-3063-4d53-8a3a-dccca6e8cade] Running
	I0116 03:49:59.731147  507510 retry.go:31] will retry after 3.185395035s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0116 03:50:02.922897  507510 system_pods.go:86] 4 kube-system pods found
	I0116 03:50:02.922934  507510 system_pods.go:89] "coredns-5644d7b6d9-h85tj" [6ad3270c-e6c5-4ced-800f-f6e7960097ac] Running
	I0116 03:50:02.922944  507510 system_pods.go:89] "kube-proxy-rc8xt" [433f07f2-79e8-48f6-945a-af3dc0060920] Running
	I0116 03:50:02.922954  507510 system_pods.go:89] "metrics-server-74d5856cc6-stvzf" [92ed6941-1071-4757-9279-144187442f64] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:50:02.922961  507510 system_pods.go:89] "storage-provisioner" [c45ac226-3063-4d53-8a3a-dccca6e8cade] Running
	I0116 03:50:02.922985  507510 retry.go:31] will retry after 4.049428323s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0116 03:50:06.978002  507510 system_pods.go:86] 4 kube-system pods found
	I0116 03:50:06.978029  507510 system_pods.go:89] "coredns-5644d7b6d9-h85tj" [6ad3270c-e6c5-4ced-800f-f6e7960097ac] Running
	I0116 03:50:06.978034  507510 system_pods.go:89] "kube-proxy-rc8xt" [433f07f2-79e8-48f6-945a-af3dc0060920] Running
	I0116 03:50:06.978040  507510 system_pods.go:89] "metrics-server-74d5856cc6-stvzf" [92ed6941-1071-4757-9279-144187442f64] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:50:06.978045  507510 system_pods.go:89] "storage-provisioner" [c45ac226-3063-4d53-8a3a-dccca6e8cade] Running
	I0116 03:50:06.978063  507510 retry.go:31] will retry after 4.626513574s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0116 03:50:11.610464  507510 system_pods.go:86] 4 kube-system pods found
	I0116 03:50:11.610499  507510 system_pods.go:89] "coredns-5644d7b6d9-h85tj" [6ad3270c-e6c5-4ced-800f-f6e7960097ac] Running
	I0116 03:50:11.610507  507510 system_pods.go:89] "kube-proxy-rc8xt" [433f07f2-79e8-48f6-945a-af3dc0060920] Running
	I0116 03:50:11.610517  507510 system_pods.go:89] "metrics-server-74d5856cc6-stvzf" [92ed6941-1071-4757-9279-144187442f64] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:50:11.610524  507510 system_pods.go:89] "storage-provisioner" [c45ac226-3063-4d53-8a3a-dccca6e8cade] Running
	I0116 03:50:11.610550  507510 retry.go:31] will retry after 4.683195792s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0116 03:50:16.298843  507510 system_pods.go:86] 4 kube-system pods found
	I0116 03:50:16.298873  507510 system_pods.go:89] "coredns-5644d7b6d9-h85tj" [6ad3270c-e6c5-4ced-800f-f6e7960097ac] Running
	I0116 03:50:16.298879  507510 system_pods.go:89] "kube-proxy-rc8xt" [433f07f2-79e8-48f6-945a-af3dc0060920] Running
	I0116 03:50:16.298888  507510 system_pods.go:89] "metrics-server-74d5856cc6-stvzf" [92ed6941-1071-4757-9279-144187442f64] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:50:16.298892  507510 system_pods.go:89] "storage-provisioner" [c45ac226-3063-4d53-8a3a-dccca6e8cade] Running
	I0116 03:50:16.298913  507510 retry.go:31] will retry after 8.214175219s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0116 03:50:24.520982  507510 system_pods.go:86] 5 kube-system pods found
	I0116 03:50:24.521020  507510 system_pods.go:89] "coredns-5644d7b6d9-h85tj" [6ad3270c-e6c5-4ced-800f-f6e7960097ac] Running
	I0116 03:50:24.521029  507510 system_pods.go:89] "etcd-old-k8s-version-696770" [255a0b2a-df41-4621-8918-36e1b0c25c24] Pending
	I0116 03:50:24.521033  507510 system_pods.go:89] "kube-proxy-rc8xt" [433f07f2-79e8-48f6-945a-af3dc0060920] Running
	I0116 03:50:24.521040  507510 system_pods.go:89] "metrics-server-74d5856cc6-stvzf" [92ed6941-1071-4757-9279-144187442f64] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:50:24.521045  507510 system_pods.go:89] "storage-provisioner" [c45ac226-3063-4d53-8a3a-dccca6e8cade] Running
	I0116 03:50:24.521067  507510 retry.go:31] will retry after 9.626598035s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0116 03:50:34.155753  507510 system_pods.go:86] 5 kube-system pods found
	I0116 03:50:34.155790  507510 system_pods.go:89] "coredns-5644d7b6d9-h85tj" [6ad3270c-e6c5-4ced-800f-f6e7960097ac] Running
	I0116 03:50:34.155798  507510 system_pods.go:89] "etcd-old-k8s-version-696770" [255a0b2a-df41-4621-8918-36e1b0c25c24] Running
	I0116 03:50:34.155805  507510 system_pods.go:89] "kube-proxy-rc8xt" [433f07f2-79e8-48f6-945a-af3dc0060920] Running
	I0116 03:50:34.155815  507510 system_pods.go:89] "metrics-server-74d5856cc6-stvzf" [92ed6941-1071-4757-9279-144187442f64] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:50:34.155822  507510 system_pods.go:89] "storage-provisioner" [c45ac226-3063-4d53-8a3a-dccca6e8cade] Running
	I0116 03:50:34.155849  507510 retry.go:31] will retry after 13.760629262s: missing components: kube-apiserver, kube-controller-manager, kube-scheduler
	I0116 03:50:47.923537  507510 system_pods.go:86] 7 kube-system pods found
	I0116 03:50:47.923571  507510 system_pods.go:89] "coredns-5644d7b6d9-h85tj" [6ad3270c-e6c5-4ced-800f-f6e7960097ac] Running
	I0116 03:50:47.923577  507510 system_pods.go:89] "etcd-old-k8s-version-696770" [255a0b2a-df41-4621-8918-36e1b0c25c24] Running
	I0116 03:50:47.923582  507510 system_pods.go:89] "kube-apiserver-old-k8s-version-696770" [c682b257-d00b-4b4c-8089-cda1b9da538c] Running
	I0116 03:50:47.923585  507510 system_pods.go:89] "kube-proxy-rc8xt" [433f07f2-79e8-48f6-945a-af3dc0060920] Running
	I0116 03:50:47.923589  507510 system_pods.go:89] "kube-scheduler-old-k8s-version-696770" [af271425-aec7-45d9-97c5-9a033f13a41e] Running
	I0116 03:50:47.923599  507510 system_pods.go:89] "metrics-server-74d5856cc6-stvzf" [92ed6941-1071-4757-9279-144187442f64] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:50:47.923603  507510 system_pods.go:89] "storage-provisioner" [c45ac226-3063-4d53-8a3a-dccca6e8cade] Running
	I0116 03:50:47.923621  507510 retry.go:31] will retry after 15.810378345s: missing components: kube-controller-manager
	I0116 03:51:03.742786  507510 system_pods.go:86] 8 kube-system pods found
	I0116 03:51:03.742819  507510 system_pods.go:89] "coredns-5644d7b6d9-h85tj" [6ad3270c-e6c5-4ced-800f-f6e7960097ac] Running
	I0116 03:51:03.742825  507510 system_pods.go:89] "etcd-old-k8s-version-696770" [255a0b2a-df41-4621-8918-36e1b0c25c24] Running
	I0116 03:51:03.742830  507510 system_pods.go:89] "kube-apiserver-old-k8s-version-696770" [c682b257-d00b-4b4c-8089-cda1b9da538c] Running
	I0116 03:51:03.742835  507510 system_pods.go:89] "kube-controller-manager-old-k8s-version-696770" [87b5ef82-182e-458d-b521-05a36d3d031b] Running
	I0116 03:51:03.742838  507510 system_pods.go:89] "kube-proxy-rc8xt" [433f07f2-79e8-48f6-945a-af3dc0060920] Running
	I0116 03:51:03.742842  507510 system_pods.go:89] "kube-scheduler-old-k8s-version-696770" [af271425-aec7-45d9-97c5-9a033f13a41e] Running
	I0116 03:51:03.742849  507510 system_pods.go:89] "metrics-server-74d5856cc6-stvzf" [92ed6941-1071-4757-9279-144187442f64] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:51:03.742854  507510 system_pods.go:89] "storage-provisioner" [c45ac226-3063-4d53-8a3a-dccca6e8cade] Running
	I0116 03:51:03.742865  507510 system_pods.go:126] duration metric: took 1m13.061883389s to wait for k8s-apps to be running ...
	I0116 03:51:03.742872  507510 system_svc.go:44] waiting for kubelet service to be running ....
	I0116 03:51:03.742921  507510 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 03:51:03.761399  507510 system_svc.go:56] duration metric: took 18.514586ms WaitForService to wait for kubelet.
	I0116 03:51:03.761433  507510 kubeadm.go:581] duration metric: took 1m18.22783177s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0116 03:51:03.761461  507510 node_conditions.go:102] verifying NodePressure condition ...
	I0116 03:51:03.765716  507510 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 03:51:03.765760  507510 node_conditions.go:123] node cpu capacity is 2
	I0116 03:51:03.765777  507510 node_conditions.go:105] duration metric: took 4.309124ms to run NodePressure ...
	I0116 03:51:03.765794  507510 start.go:228] waiting for startup goroutines ...
	I0116 03:51:03.765803  507510 start.go:233] waiting for cluster config update ...
	I0116 03:51:03.765817  507510 start.go:242] writing updated cluster config ...
	I0116 03:51:03.766160  507510 ssh_runner.go:195] Run: rm -f paused
	I0116 03:51:03.822502  507510 start.go:600] kubectl: 1.29.0, cluster: 1.16.0 (minor skew: 13)
	I0116 03:51:03.824687  507510 out.go:177] 
	W0116 03:51:03.826162  507510 out.go:239] ! /usr/local/bin/kubectl is version 1.29.0, which may have incompatibilities with Kubernetes 1.16.0.
	I0116 03:51:03.827659  507510 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I0116 03:51:03.829229  507510 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-696770" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	-- Journal begins at Tue 2024-01-16 03:44:00 UTC, ends at Tue 2024-01-16 03:58:06 UTC. --
	Jan 16 03:58:06 default-k8s-diff-port-434445 crio[734]: time="2024-01-16 03:58:06.489396020Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=2dd2dcea-9195-40fe-b4bd-d6918e863a38 name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 03:58:06 default-k8s-diff-port-434445 crio[734]: time="2024-01-16 03:58:06.489613047Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2bc79d2ed159177045ced7d31622ca72da51b64db46b8371b62d9f4fdd3e34a3,PodSandboxId:4561671f5fcb007566d4db43fecf2846c64dc43235451e5f6b0f65b582f95b10,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1705376688774713895,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3f347086-cbef-4c9e-b11c-1a72f9c19ae7,},Annotations:map[string]string{io.kubernetes.container.hash: 5da410ea,io.kubernetes.container.restartCount: 1,io.kubernetes
.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a07ae23e6e9e3c589264d0be7095896549a4b71fa2d0b06805a3b1b3bea16f6a,PodSandboxId:b6d30bb49a20301387ae7d8e9e003dd1b636d0a9dfcda82b07590a91cbcdde66,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1705376687094229686,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-pmx8n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d3c9941-8938-4d4a-b6d8-9b542ca6f1ca,},Annotations:map[string]string{io.kubernetes.container.hash: 77baf89e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33ba3a03d878abc86a194defd715bb61a0066e49063e3823002bd3ec421da138,PodSandboxId:99327b7ab530194d5cd29db66547aa5b2145dc98a9c91851c9b0be0b0559744a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705376680388302973,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: 16fd4585-3d75-40c3-a28d-4134375f4e3d,},Annotations:map[string]string{io.kubernetes.container.hash: 4cba5769,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4b27881ef90cea325e8f306fac88a76052eec15a693c291288664b8f4ebcc2a,PodSandboxId:99327b7ab530194d5cd29db66547aa5b2145dc98a9c91851c9b0be0b0559744a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1705376679282926091,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: 16fd4585-3d75-40c3-a28d-4134375f4e3d,},Annotations:map[string]string{io.kubernetes.container.hash: 4cba5769,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44f71a7069827d97c7326cbd36638ec36ff39cbbe0a1e4e61e7c010a38d2e123,PodSandboxId:cf8dd051894cf58df172502fa9f75fb2d8f730055919321a8de103caf178242e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1705376679281404110,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dcbqg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e
ba1f9bf-6aa7-40cd-b57c-745c2d0cc414,},Annotations:map[string]string{io.kubernetes.container.hash: 1d2c9957,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2758ac4468b10765b35629ffc54228655bc18f79a654cc03e91929ebdc3cf90,PodSandboxId:783287f7b4e9cf031d72eb66efe436eba5ab0a30f24ebb043333f6ff3807d918,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1705376673348681964,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-434445,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 912de32c93aa16eea0b5111acb3790b0,},An
notations:map[string]string{io.kubernetes.container.hash: db6a5abf,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e60387e0e2800d67c99eb3566f31ad675c0db6c22379fc28ee9a447af5f5023c,PodSandboxId:9da14cfaa8df2939b5d42680f6cfbe488680ccdc33024aa69d28f299aee16e81,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1705376672824748403,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-434445,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5781851edbe2deb41d2d85e284e5498,},An
notations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1438a3832328a03b94c3d408a2e332c0fb0adab915a9d4f26994e84c0509fac9,PodSandboxId:046c0722a41e06a9d2a31bed7e3a5ed7d20aa4471027282eb3b81ce385d51607,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1705376672383994818,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-434445,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e
69452d8d25407a36c42c29e7263d7a5,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9861ff0fbab72660648bfcb49b82810bada99f73a55cb0aa359ba114825b8f3,PodSandboxId:83a160f9a9ab53bd3efcf9446a3cb64629883944e6b11993834ed1cba2cd3565,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1705376672261780520,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-434445,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8
ea8b0a4a0eac607795856ec116732b7,},Annotations:map[string]string{io.kubernetes.container.hash: 1c54be68,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=2dd2dcea-9195-40fe-b4bd-d6918e863a38 name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 03:58:06 default-k8s-diff-port-434445 crio[734]: time="2024-01-16 03:58:06.512385919Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=bf173190-7c3f-460c-b6da-e11e1e4082da name=/runtime.v1.RuntimeService/Version
	Jan 16 03:58:06 default-k8s-diff-port-434445 crio[734]: time="2024-01-16 03:58:06.512444865Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=bf173190-7c3f-460c-b6da-e11e1e4082da name=/runtime.v1.RuntimeService/Version
	Jan 16 03:58:06 default-k8s-diff-port-434445 crio[734]: time="2024-01-16 03:58:06.514834031Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=a7dd1a87-36e2-4326-b08e-b6a0fdd37c27 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 03:58:06 default-k8s-diff-port-434445 crio[734]: time="2024-01-16 03:58:06.515457793Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705377486515426215,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=a7dd1a87-36e2-4326-b08e-b6a0fdd37c27 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 03:58:06 default-k8s-diff-port-434445 crio[734]: time="2024-01-16 03:58:06.516496688Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=6151dadc-89b2-4eb1-8fd6-21eb9015c180 name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 03:58:06 default-k8s-diff-port-434445 crio[734]: time="2024-01-16 03:58:06.516602749Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=6151dadc-89b2-4eb1-8fd6-21eb9015c180 name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 03:58:06 default-k8s-diff-port-434445 crio[734]: time="2024-01-16 03:58:06.516878724Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2bc79d2ed159177045ced7d31622ca72da51b64db46b8371b62d9f4fdd3e34a3,PodSandboxId:4561671f5fcb007566d4db43fecf2846c64dc43235451e5f6b0f65b582f95b10,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1705376688774713895,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3f347086-cbef-4c9e-b11c-1a72f9c19ae7,},Annotations:map[string]string{io.kubernetes.container.hash: 5da410ea,io.kubernetes.container.restartCount: 1,io.kubernetes
.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a07ae23e6e9e3c589264d0be7095896549a4b71fa2d0b06805a3b1b3bea16f6a,PodSandboxId:b6d30bb49a20301387ae7d8e9e003dd1b636d0a9dfcda82b07590a91cbcdde66,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1705376687094229686,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-pmx8n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d3c9941-8938-4d4a-b6d8-9b542ca6f1ca,},Annotations:map[string]string{io.kubernetes.container.hash: 77baf89e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33ba3a03d878abc86a194defd715bb61a0066e49063e3823002bd3ec421da138,PodSandboxId:99327b7ab530194d5cd29db66547aa5b2145dc98a9c91851c9b0be0b0559744a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705376680388302973,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: 16fd4585-3d75-40c3-a28d-4134375f4e3d,},Annotations:map[string]string{io.kubernetes.container.hash: 4cba5769,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4b27881ef90cea325e8f306fac88a76052eec15a693c291288664b8f4ebcc2a,PodSandboxId:99327b7ab530194d5cd29db66547aa5b2145dc98a9c91851c9b0be0b0559744a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1705376679282926091,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: 16fd4585-3d75-40c3-a28d-4134375f4e3d,},Annotations:map[string]string{io.kubernetes.container.hash: 4cba5769,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44f71a7069827d97c7326cbd36638ec36ff39cbbe0a1e4e61e7c010a38d2e123,PodSandboxId:cf8dd051894cf58df172502fa9f75fb2d8f730055919321a8de103caf178242e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1705376679281404110,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dcbqg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e
ba1f9bf-6aa7-40cd-b57c-745c2d0cc414,},Annotations:map[string]string{io.kubernetes.container.hash: 1d2c9957,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2758ac4468b10765b35629ffc54228655bc18f79a654cc03e91929ebdc3cf90,PodSandboxId:783287f7b4e9cf031d72eb66efe436eba5ab0a30f24ebb043333f6ff3807d918,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1705376673348681964,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-434445,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 912de32c93aa16eea0b5111acb3790b0,},An
notations:map[string]string{io.kubernetes.container.hash: db6a5abf,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e60387e0e2800d67c99eb3566f31ad675c0db6c22379fc28ee9a447af5f5023c,PodSandboxId:9da14cfaa8df2939b5d42680f6cfbe488680ccdc33024aa69d28f299aee16e81,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1705376672824748403,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-434445,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5781851edbe2deb41d2d85e284e5498,},An
notations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1438a3832328a03b94c3d408a2e332c0fb0adab915a9d4f26994e84c0509fac9,PodSandboxId:046c0722a41e06a9d2a31bed7e3a5ed7d20aa4471027282eb3b81ce385d51607,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1705376672383994818,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-434445,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e
69452d8d25407a36c42c29e7263d7a5,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9861ff0fbab72660648bfcb49b82810bada99f73a55cb0aa359ba114825b8f3,PodSandboxId:83a160f9a9ab53bd3efcf9446a3cb64629883944e6b11993834ed1cba2cd3565,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1705376672261780520,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-434445,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8
ea8b0a4a0eac607795856ec116732b7,},Annotations:map[string]string{io.kubernetes.container.hash: 1c54be68,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=6151dadc-89b2-4eb1-8fd6-21eb9015c180 name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 03:58:06 default-k8s-diff-port-434445 crio[734]: time="2024-01-16 03:58:06.542698762Z" level=debug msg="Request: &StatusRequest{Verbose:false,}" file="go-grpc-middleware/chain.go:25" id=88165053-f345-4dd9-81d0-d2e26d454ef4 name=/runtime.v1.RuntimeService/Status
	Jan 16 03:58:06 default-k8s-diff-port-434445 crio[734]: time="2024-01-16 03:58:06.542808249Z" level=debug msg="Response: &StatusResponse{Status:&RuntimeStatus{Conditions:[]*RuntimeCondition{&RuntimeCondition{Type:RuntimeReady,Status:true,Reason:,Message:,},&RuntimeCondition{Type:NetworkReady,Status:true,Reason:,Message:,},},},Info:map[string]string{},}" file="go-grpc-middleware/chain.go:25" id=88165053-f345-4dd9-81d0-d2e26d454ef4 name=/runtime.v1.RuntimeService/Status
	Jan 16 03:58:06 default-k8s-diff-port-434445 crio[734]: time="2024-01-16 03:58:06.566013436Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=47bcbe17-bf06-41ef-83a1-d0c2cda8a9f1 name=/runtime.v1.RuntimeService/Version
	Jan 16 03:58:06 default-k8s-diff-port-434445 crio[734]: time="2024-01-16 03:58:06.566150023Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=47bcbe17-bf06-41ef-83a1-d0c2cda8a9f1 name=/runtime.v1.RuntimeService/Version
	Jan 16 03:58:06 default-k8s-diff-port-434445 crio[734]: time="2024-01-16 03:58:06.568445345Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=2fda0789-336c-4ed3-af80-e79b7516752b name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 03:58:06 default-k8s-diff-port-434445 crio[734]: time="2024-01-16 03:58:06.568975146Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705377486568957778,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=2fda0789-336c-4ed3-af80-e79b7516752b name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 03:58:06 default-k8s-diff-port-434445 crio[734]: time="2024-01-16 03:58:06.570010754Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=de64d5f3-a01a-47fa-84b0-f08ad800d689 name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 03:58:06 default-k8s-diff-port-434445 crio[734]: time="2024-01-16 03:58:06.570175854Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=de64d5f3-a01a-47fa-84b0-f08ad800d689 name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 03:58:06 default-k8s-diff-port-434445 crio[734]: time="2024-01-16 03:58:06.570409279Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2bc79d2ed159177045ced7d31622ca72da51b64db46b8371b62d9f4fdd3e34a3,PodSandboxId:4561671f5fcb007566d4db43fecf2846c64dc43235451e5f6b0f65b582f95b10,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1705376688774713895,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3f347086-cbef-4c9e-b11c-1a72f9c19ae7,},Annotations:map[string]string{io.kubernetes.container.hash: 5da410ea,io.kubernetes.container.restartCount: 1,io.kubernetes
.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a07ae23e6e9e3c589264d0be7095896549a4b71fa2d0b06805a3b1b3bea16f6a,PodSandboxId:b6d30bb49a20301387ae7d8e9e003dd1b636d0a9dfcda82b07590a91cbcdde66,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1705376687094229686,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-pmx8n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d3c9941-8938-4d4a-b6d8-9b542ca6f1ca,},Annotations:map[string]string{io.kubernetes.container.hash: 77baf89e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33ba3a03d878abc86a194defd715bb61a0066e49063e3823002bd3ec421da138,PodSandboxId:99327b7ab530194d5cd29db66547aa5b2145dc98a9c91851c9b0be0b0559744a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705376680388302973,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: 16fd4585-3d75-40c3-a28d-4134375f4e3d,},Annotations:map[string]string{io.kubernetes.container.hash: 4cba5769,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4b27881ef90cea325e8f306fac88a76052eec15a693c291288664b8f4ebcc2a,PodSandboxId:99327b7ab530194d5cd29db66547aa5b2145dc98a9c91851c9b0be0b0559744a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1705376679282926091,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: 16fd4585-3d75-40c3-a28d-4134375f4e3d,},Annotations:map[string]string{io.kubernetes.container.hash: 4cba5769,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44f71a7069827d97c7326cbd36638ec36ff39cbbe0a1e4e61e7c010a38d2e123,PodSandboxId:cf8dd051894cf58df172502fa9f75fb2d8f730055919321a8de103caf178242e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1705376679281404110,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dcbqg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e
ba1f9bf-6aa7-40cd-b57c-745c2d0cc414,},Annotations:map[string]string{io.kubernetes.container.hash: 1d2c9957,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2758ac4468b10765b35629ffc54228655bc18f79a654cc03e91929ebdc3cf90,PodSandboxId:783287f7b4e9cf031d72eb66efe436eba5ab0a30f24ebb043333f6ff3807d918,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1705376673348681964,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-434445,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 912de32c93aa16eea0b5111acb3790b0,},An
notations:map[string]string{io.kubernetes.container.hash: db6a5abf,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e60387e0e2800d67c99eb3566f31ad675c0db6c22379fc28ee9a447af5f5023c,PodSandboxId:9da14cfaa8df2939b5d42680f6cfbe488680ccdc33024aa69d28f299aee16e81,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1705376672824748403,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-434445,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5781851edbe2deb41d2d85e284e5498,},An
notations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1438a3832328a03b94c3d408a2e332c0fb0adab915a9d4f26994e84c0509fac9,PodSandboxId:046c0722a41e06a9d2a31bed7e3a5ed7d20aa4471027282eb3b81ce385d51607,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1705376672383994818,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-434445,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e
69452d8d25407a36c42c29e7263d7a5,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9861ff0fbab72660648bfcb49b82810bada99f73a55cb0aa359ba114825b8f3,PodSandboxId:83a160f9a9ab53bd3efcf9446a3cb64629883944e6b11993834ed1cba2cd3565,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1705376672261780520,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-434445,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8
ea8b0a4a0eac607795856ec116732b7,},Annotations:map[string]string{io.kubernetes.container.hash: 1c54be68,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=de64d5f3-a01a-47fa-84b0-f08ad800d689 name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 03:58:06 default-k8s-diff-port-434445 crio[734]: time="2024-01-16 03:58:06.616391910Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=13a66cc8-c0fb-43fa-8f37-5d9e0c5521ec name=/runtime.v1.RuntimeService/Version
	Jan 16 03:58:06 default-k8s-diff-port-434445 crio[734]: time="2024-01-16 03:58:06.616451590Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=13a66cc8-c0fb-43fa-8f37-5d9e0c5521ec name=/runtime.v1.RuntimeService/Version
	Jan 16 03:58:06 default-k8s-diff-port-434445 crio[734]: time="2024-01-16 03:58:06.618837981Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=59425d41-1ef8-4125-8ca4-a134811c42a2 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 03:58:06 default-k8s-diff-port-434445 crio[734]: time="2024-01-16 03:58:06.619608259Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705377486619583911,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=59425d41-1ef8-4125-8ca4-a134811c42a2 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 03:58:06 default-k8s-diff-port-434445 crio[734]: time="2024-01-16 03:58:06.620811797Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=939640cf-af7d-472b-8e20-c9ff76e3e877 name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 03:58:06 default-k8s-diff-port-434445 crio[734]: time="2024-01-16 03:58:06.620862868Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=939640cf-af7d-472b-8e20-c9ff76e3e877 name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 03:58:06 default-k8s-diff-port-434445 crio[734]: time="2024-01-16 03:58:06.621172139Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2bc79d2ed159177045ced7d31622ca72da51b64db46b8371b62d9f4fdd3e34a3,PodSandboxId:4561671f5fcb007566d4db43fecf2846c64dc43235451e5f6b0f65b582f95b10,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1705376688774713895,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3f347086-cbef-4c9e-b11c-1a72f9c19ae7,},Annotations:map[string]string{io.kubernetes.container.hash: 5da410ea,io.kubernetes.container.restartCount: 1,io.kubernetes
.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a07ae23e6e9e3c589264d0be7095896549a4b71fa2d0b06805a3b1b3bea16f6a,PodSandboxId:b6d30bb49a20301387ae7d8e9e003dd1b636d0a9dfcda82b07590a91cbcdde66,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1705376687094229686,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-pmx8n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d3c9941-8938-4d4a-b6d8-9b542ca6f1ca,},Annotations:map[string]string{io.kubernetes.container.hash: 77baf89e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33ba3a03d878abc86a194defd715bb61a0066e49063e3823002bd3ec421da138,PodSandboxId:99327b7ab530194d5cd29db66547aa5b2145dc98a9c91851c9b0be0b0559744a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705376680388302973,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: 16fd4585-3d75-40c3-a28d-4134375f4e3d,},Annotations:map[string]string{io.kubernetes.container.hash: 4cba5769,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4b27881ef90cea325e8f306fac88a76052eec15a693c291288664b8f4ebcc2a,PodSandboxId:99327b7ab530194d5cd29db66547aa5b2145dc98a9c91851c9b0be0b0559744a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1705376679282926091,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: 16fd4585-3d75-40c3-a28d-4134375f4e3d,},Annotations:map[string]string{io.kubernetes.container.hash: 4cba5769,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44f71a7069827d97c7326cbd36638ec36ff39cbbe0a1e4e61e7c010a38d2e123,PodSandboxId:cf8dd051894cf58df172502fa9f75fb2d8f730055919321a8de103caf178242e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1705376679281404110,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dcbqg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e
ba1f9bf-6aa7-40cd-b57c-745c2d0cc414,},Annotations:map[string]string{io.kubernetes.container.hash: 1d2c9957,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2758ac4468b10765b35629ffc54228655bc18f79a654cc03e91929ebdc3cf90,PodSandboxId:783287f7b4e9cf031d72eb66efe436eba5ab0a30f24ebb043333f6ff3807d918,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1705376673348681964,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-434445,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 912de32c93aa16eea0b5111acb3790b0,},An
notations:map[string]string{io.kubernetes.container.hash: db6a5abf,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e60387e0e2800d67c99eb3566f31ad675c0db6c22379fc28ee9a447af5f5023c,PodSandboxId:9da14cfaa8df2939b5d42680f6cfbe488680ccdc33024aa69d28f299aee16e81,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1705376672824748403,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-434445,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5781851edbe2deb41d2d85e284e5498,},An
notations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1438a3832328a03b94c3d408a2e332c0fb0adab915a9d4f26994e84c0509fac9,PodSandboxId:046c0722a41e06a9d2a31bed7e3a5ed7d20aa4471027282eb3b81ce385d51607,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1705376672383994818,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-434445,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e
69452d8d25407a36c42c29e7263d7a5,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9861ff0fbab72660648bfcb49b82810bada99f73a55cb0aa359ba114825b8f3,PodSandboxId:83a160f9a9ab53bd3efcf9446a3cb64629883944e6b11993834ed1cba2cd3565,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1705376672261780520,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-434445,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8
ea8b0a4a0eac607795856ec116732b7,},Annotations:map[string]string{io.kubernetes.container.hash: 1c54be68,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=939640cf-af7d-472b-8e20-c9ff76e3e877 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	2bc79d2ed1591       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   4561671f5fcb0       busybox
	a07ae23e6e9e3       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      13 minutes ago      Running             coredns                   1                   b6d30bb49a203       coredns-5dd5756b68-pmx8n
	33ba3a03d878a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Running             storage-provisioner       3                   99327b7ab5301       storage-provisioner
	a4b27881ef90c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       2                   99327b7ab5301       storage-provisioner
	44f71a7069827       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      13 minutes ago      Running             kube-proxy                1                   cf8dd051894cf       kube-proxy-dcbqg
	e2758ac4468b1       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      13 minutes ago      Running             etcd                      1                   783287f7b4e9c       etcd-default-k8s-diff-port-434445
	e60387e0e2800       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      13 minutes ago      Running             kube-scheduler            1                   9da14cfaa8df2       kube-scheduler-default-k8s-diff-port-434445
	1438a3832328a       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      13 minutes ago      Running             kube-controller-manager   1                   046c0722a41e0       kube-controller-manager-default-k8s-diff-port-434445
	f9861ff0fbab7       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      13 minutes ago      Running             kube-apiserver            1                   83a160f9a9ab5       kube-apiserver-default-k8s-diff-port-434445
	
	
	==> coredns [a07ae23e6e9e3c589264d0be7095896549a4b71fa2d0b06805a3b1b3bea16f6a] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:57065 - 55204 "HINFO IN 8254892050912566778.576422238651280398. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.008852051s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-434445
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-434445
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6e8fa5f64d0e7272be43ff25ed3826261f0a2578
	                    minikube.k8s.io/name=default-k8s-diff-port-434445
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_16T03_37_14_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Jan 2024 03:37:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-434445
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Jan 2024 03:58:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Jan 2024 03:55:20 +0000   Tue, 16 Jan 2024 03:37:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Jan 2024 03:55:20 +0000   Tue, 16 Jan 2024 03:37:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Jan 2024 03:55:20 +0000   Tue, 16 Jan 2024 03:37:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Jan 2024 03:55:20 +0000   Tue, 16 Jan 2024 03:44:47 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.236
	  Hostname:    default-k8s-diff-port-434445
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 163fbf991c964ac0a0d338e8efd64b6b
	  System UUID:                163fbf99-1c96-4ac0-a0d3-38e8efd64b6b
	  Boot ID:                    8cd7e9b2-7d8c-46ff-a75a-a4d21eb06250
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 coredns-5dd5756b68-pmx8n                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     20m
	  kube-system                 etcd-default-k8s-diff-port-434445                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         20m
	  kube-system                 kube-apiserver-default-k8s-diff-port-434445             250m (12%)    0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-434445    200m (10%)    0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-proxy-dcbqg                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-scheduler-default-k8s-diff-port-434445             100m (5%)     0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 metrics-server-57f55c9bc5-894n2                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         20m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 20m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeAllocatableEnforced  20m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  20m                kubelet          Node default-k8s-diff-port-434445 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    20m                kubelet          Node default-k8s-diff-port-434445 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     20m                kubelet          Node default-k8s-diff-port-434445 status is now: NodeHasSufficientPID
	  Normal  Starting                 20m                kubelet          Starting kubelet.
	  Normal  NodeReady                20m                kubelet          Node default-k8s-diff-port-434445 status is now: NodeReady
	  Normal  RegisteredNode           20m                node-controller  Node default-k8s-diff-port-434445 event: Registered Node default-k8s-diff-port-434445 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node default-k8s-diff-port-434445 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node default-k8s-diff-port-434445 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node default-k8s-diff-port-434445 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node default-k8s-diff-port-434445 event: Registered Node default-k8s-diff-port-434445 in Controller
	
	
	==> dmesg <==
	[Jan16 03:43] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.087035] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.752421] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.467470] systemd-fstab-generator[113]: Ignoring "noauto" for root device
	[  +0.142678] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[Jan16 03:44] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.901294] systemd-fstab-generator[661]: Ignoring "noauto" for root device
	[  +0.160476] systemd-fstab-generator[672]: Ignoring "noauto" for root device
	[  +0.188007] systemd-fstab-generator[685]: Ignoring "noauto" for root device
	[  +0.111181] systemd-fstab-generator[696]: Ignoring "noauto" for root device
	[  +0.250435] systemd-fstab-generator[720]: Ignoring "noauto" for root device
	[ +18.727963] systemd-fstab-generator[933]: Ignoring "noauto" for root device
	[ +15.002290] kauditd_printk_skb: 19 callbacks suppressed
	[  +9.621656] kauditd_printk_skb: 9 callbacks suppressed
	
	
	==> etcd [e2758ac4468b10765b35629ffc54228655bc18f79a654cc03e91929ebdc3cf90] <==
	{"level":"info","ts":"2024-01-16T03:44:41.393222Z","caller":"traceutil/trace.go:171","msg":"trace[406826698] linearizableReadLoop","detail":"{readStateIndex:492; appliedIndex:491; }","duration":"812.852903ms","start":"2024-01-16T03:44:40.580344Z","end":"2024-01-16T03:44:41.393197Z","steps":["trace[406826698] 'read index received'  (duration: 416.041822ms)","trace[406826698] 'applied index is now lower than readState.Index'  (duration: 396.809619ms)"],"step_count":2}
	{"level":"info","ts":"2024-01-16T03:44:41.39336Z","caller":"traceutil/trace.go:171","msg":"trace[854632994] transaction","detail":"{read_only:false; response_revision:462; number_of_response:1; }","duration":"991.830993ms","start":"2024-01-16T03:44:40.401521Z","end":"2024-01-16T03:44:41.393352Z","steps":["trace[854632994] 'process raft request'  (duration: 594.967708ms)","trace[854632994] 'compare'  (duration: 392.912682ms)"],"step_count":2}
	{"level":"warn","ts":"2024-01-16T03:44:41.393931Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-16T03:44:40.4015Z","time spent":"992.358102ms","remote":"127.0.0.1:59200","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":3857,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/storage-provisioner\" mod_revision:456 > success:<request_put:<key:\"/registry/pods/kube-system/storage-provisioner\" value_size:3803 >> failure:<request_range:<key:\"/registry/pods/kube-system/storage-provisioner\" > >"}
	{"level":"warn","ts":"2024-01-16T03:44:41.393742Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"813.391316ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/default-k8s-diff-port-434445\" ","response":"range_response_count:1 size:5714"}
	{"level":"info","ts":"2024-01-16T03:44:41.394241Z","caller":"traceutil/trace.go:171","msg":"trace[1874680004] range","detail":"{range_begin:/registry/minions/default-k8s-diff-port-434445; range_end:; response_count:1; response_revision:462; }","duration":"813.91078ms","start":"2024-01-16T03:44:40.58032Z","end":"2024-01-16T03:44:41.394231Z","steps":["trace[1874680004] 'agreement among raft nodes before linearized reading'  (duration: 813.366031ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-16T03:44:41.394302Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-16T03:44:40.580307Z","time spent":"813.985753ms","remote":"127.0.0.1:59198","response type":"/etcdserverpb.KV/Range","request count":0,"request size":48,"response count":1,"response size":5736,"request content":"key:\"/registry/minions/default-k8s-diff-port-434445\" "}
	{"level":"warn","ts":"2024-01-16T03:44:42.515937Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"887.193643ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/coredns\" ","response":"range_response_count:1 size:3937"}
	{"level":"info","ts":"2024-01-16T03:44:42.518608Z","caller":"traceutil/trace.go:171","msg":"trace[360234414] range","detail":"{range_begin:/registry/deployments/kube-system/coredns; range_end:; response_count:1; response_revision:462; }","duration":"889.865315ms","start":"2024-01-16T03:44:41.628723Z","end":"2024-01-16T03:44:42.518589Z","steps":["trace[360234414] 'range keys from in-memory index tree'  (duration: 887.00759ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-16T03:44:42.518693Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-16T03:44:41.628705Z","time spent":"889.968516ms","remote":"127.0.0.1:59260","response type":"/etcdserverpb.KV/Range","request count":0,"request size":43,"response count":1,"response size":3959,"request content":"key:\"/registry/deployments/kube-system/coredns\" "}
	{"level":"warn","ts":"2024-01-16T03:44:42.518001Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"693.717109ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/configmaps/kube-system/coredns\" ","response":"range_response_count:1 size:764"}
	{"level":"info","ts":"2024-01-16T03:44:42.518925Z","caller":"traceutil/trace.go:171","msg":"trace[524858932] range","detail":"{range_begin:/registry/configmaps/kube-system/coredns; range_end:; response_count:1; response_revision:462; }","duration":"694.642368ms","start":"2024-01-16T03:44:41.824267Z","end":"2024-01-16T03:44:42.518909Z","steps":["trace[524858932] 'range keys from in-memory index tree'  (duration: 693.60288ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-16T03:44:42.518968Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-16T03:44:41.824249Z","time spent":"694.701921ms","remote":"127.0.0.1:59180","response type":"/etcdserverpb.KV/Range","request count":0,"request size":42,"response count":1,"response size":786,"request content":"key:\"/registry/configmaps/kube-system/coredns\" "}
	{"level":"info","ts":"2024-01-16T03:44:42.518343Z","caller":"traceutil/trace.go:171","msg":"trace[2123049412] linearizableReadLoop","detail":"{readStateIndex:493; appliedIndex:492; }","duration":"152.19746ms","start":"2024-01-16T03:44:42.366123Z","end":"2024-01-16T03:44:42.51832Z","steps":["trace[2123049412] 'read index received'  (duration: 112.346195ms)","trace[2123049412] 'applied index is now lower than readState.Index'  (duration: 39.850068ms)"],"step_count":2}
	{"level":"warn","ts":"2024-01-16T03:44:42.518486Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"152.424475ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/storage-provisioner\" ","response":"range_response_count:1 size:3872"}
	{"level":"info","ts":"2024-01-16T03:44:42.519247Z","caller":"traceutil/trace.go:171","msg":"trace[1254868885] range","detail":"{range_begin:/registry/pods/kube-system/storage-provisioner; range_end:; response_count:1; response_revision:462; }","duration":"153.192536ms","start":"2024-01-16T03:44:42.366044Z","end":"2024-01-16T03:44:42.519236Z","steps":["trace[1254868885] 'agreement among raft nodes before linearized reading'  (duration: 152.328148ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-16T03:44:42.524201Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-16T03:44:42.167672Z","time spent":"356.52385ms","remote":"127.0.0.1:59174","response type":"/etcdserverpb.Lease/LeaseGrant","request count":-1,"request size":-1,"response count":-1,"response size":-1,"request content":""}
	{"level":"info","ts":"2024-01-16T03:44:42.968729Z","caller":"traceutil/trace.go:171","msg":"trace[972308031] transaction","detail":"{read_only:false; response_revision:463; number_of_response:1; }","duration":"435.324229ms","start":"2024-01-16T03:44:42.533382Z","end":"2024-01-16T03:44:42.968706Z","steps":["trace[972308031] 'process raft request'  (duration: 435.030565ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-16T03:44:42.968935Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-16T03:44:42.533366Z","time spent":"435.496936ms","remote":"127.0.0.1:59260","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":3997,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/deployments/kube-system/coredns\" mod_revision:458 > success:<request_put:<key:\"/registry/deployments/kube-system/coredns\" value_size:3948 >> failure:<request_range:<key:\"/registry/deployments/kube-system/coredns\" > >"}
	{"level":"info","ts":"2024-01-16T03:44:42.984357Z","caller":"traceutil/trace.go:171","msg":"trace[1061325130] transaction","detail":"{read_only:false; response_revision:465; number_of_response:1; }","duration":"442.374823ms","start":"2024-01-16T03:44:42.54196Z","end":"2024-01-16T03:44:42.984335Z","steps":["trace[1061325130] 'process raft request'  (duration: 441.379483ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-16T03:44:42.985346Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-16T03:44:42.541944Z","time spent":"443.107407ms","remote":"127.0.0.1:59200","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":3667,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/storage-provisioner\" mod_revision:462 > success:<request_put:<key:\"/registry/pods/kube-system/storage-provisioner\" value_size:3613 >> failure:<request_range:<key:\"/registry/pods/kube-system/storage-provisioner\" > >"}
	{"level":"info","ts":"2024-01-16T03:44:42.986296Z","caller":"traceutil/trace.go:171","msg":"trace[230712695] transaction","detail":"{read_only:false; response_revision:464; number_of_response:1; }","duration":"452.527952ms","start":"2024-01-16T03:44:42.533752Z","end":"2024-01-16T03:44:42.98628Z","steps":["trace[230712695] 'process raft request'  (duration: 444.910163ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-16T03:44:42.986479Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-16T03:44:42.533738Z","time spent":"452.703717ms","remote":"127.0.0.1:59174","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":712,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/events/default/default-k8s-diff-port-434445.17aab70c0714930f\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/default-k8s-diff-port-434445.17aab70c0714930f\" value_size:624 lease:472470112372614539 >> failure:<>"}
	{"level":"info","ts":"2024-01-16T03:54:35.497747Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":836}
	{"level":"info","ts":"2024-01-16T03:54:35.501761Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":836,"took":"3.386623ms","hash":701153299}
	{"level":"info","ts":"2024-01-16T03:54:35.504794Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":701153299,"revision":836,"compact-revision":-1}
	
	
	==> kernel <==
	 03:58:07 up 14 min,  0 users,  load average: 0.28, 0.21, 0.19
	Linux default-k8s-diff-port-434445 5.10.57 #1 SMP Thu Dec 28 22:04:21 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [f9861ff0fbab72660648bfcb49b82810bada99f73a55cb0aa359ba114825b8f3] <==
	I0116 03:54:37.683568       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0116 03:54:38.683793       1 handler_proxy.go:93] no RequestInfo found in the context
	E0116 03:54:38.683915       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0116 03:54:38.683926       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0116 03:54:38.684520       1 handler_proxy.go:93] no RequestInfo found in the context
	E0116 03:54:38.684592       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0116 03:54:38.685144       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0116 03:55:37.484588       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0116 03:55:38.684469       1 handler_proxy.go:93] no RequestInfo found in the context
	E0116 03:55:38.684746       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0116 03:55:38.684796       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0116 03:55:38.685545       1 handler_proxy.go:93] no RequestInfo found in the context
	E0116 03:55:38.685669       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0116 03:55:38.686986       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0116 03:56:37.484434       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0116 03:57:37.485010       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0116 03:57:38.685325       1 handler_proxy.go:93] no RequestInfo found in the context
	E0116 03:57:38.685458       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0116 03:57:38.685468       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0116 03:57:38.688261       1 handler_proxy.go:93] no RequestInfo found in the context
	E0116 03:57:38.688326       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0116 03:57:38.688334       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [1438a3832328a03b94c3d408a2e332c0fb0adab915a9d4f26994e84c0509fac9] <==
	I0116 03:52:20.837605       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0116 03:52:50.419771       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0116 03:52:50.847519       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0116 03:53:20.425653       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0116 03:53:20.857410       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0116 03:53:50.432837       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0116 03:53:50.870380       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0116 03:54:20.438742       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0116 03:54:20.881411       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0116 03:54:50.445423       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0116 03:54:50.890612       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0116 03:55:20.451942       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0116 03:55:20.900151       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0116 03:55:50.457659       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0116 03:55:50.911428       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0116 03:55:52.256795       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="343.432µs"
	I0116 03:56:06.261430       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="185.578µs"
	E0116 03:56:20.464651       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0116 03:56:20.928694       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0116 03:56:50.471967       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0116 03:56:50.940336       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0116 03:57:20.478838       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0116 03:57:20.949312       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0116 03:57:50.485638       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0116 03:57:50.957702       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [44f71a7069827d97c7326cbd36638ec36ff39cbbe0a1e4e61e7c010a38d2e123] <==
	I0116 03:44:39.911464       1 server_others.go:69] "Using iptables proxy"
	I0116 03:44:39.930615       1 node.go:141] Successfully retrieved node IP: 192.168.50.236
	I0116 03:44:39.998054       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0116 03:44:39.998176       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0116 03:44:40.001390       1 server_others.go:152] "Using iptables Proxier"
	I0116 03:44:40.001470       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0116 03:44:40.001738       1 server.go:846] "Version info" version="v1.28.4"
	I0116 03:44:40.001801       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0116 03:44:40.003432       1 config.go:188] "Starting service config controller"
	I0116 03:44:40.003496       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0116 03:44:40.003525       1 config.go:97] "Starting endpoint slice config controller"
	I0116 03:44:40.003531       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0116 03:44:40.004460       1 config.go:315] "Starting node config controller"
	I0116 03:44:40.004594       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0116 03:44:40.104217       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0116 03:44:40.104331       1 shared_informer.go:318] Caches are synced for service config
	I0116 03:44:40.104720       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [e60387e0e2800d67c99eb3566f31ad675c0db6c22379fc28ee9a447af5f5023c] <==
	I0116 03:44:35.275580       1 serving.go:348] Generated self-signed cert in-memory
	W0116 03:44:37.553754       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0116 03:44:37.553881       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0116 03:44:37.554028       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0116 03:44:37.554059       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0116 03:44:37.677921       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I0116 03:44:37.678214       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0116 03:44:37.682808       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0116 03:44:37.682914       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0116 03:44:37.682948       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0116 03:44:37.682979       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0116 03:44:37.783825       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Tue 2024-01-16 03:44:00 UTC, ends at Tue 2024-01-16 03:58:07 UTC. --
	Jan 16 03:55:31 default-k8s-diff-port-434445 kubelet[939]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 16 03:55:31 default-k8s-diff-port-434445 kubelet[939]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 16 03:55:39 default-k8s-diff-port-434445 kubelet[939]: E0116 03:55:39.258676     939 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jan 16 03:55:39 default-k8s-diff-port-434445 kubelet[939]: E0116 03:55:39.258769     939 kuberuntime_image.go:53] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jan 16 03:55:39 default-k8s-diff-port-434445 kubelet[939]: E0116 03:55:39.259014     939 kuberuntime_manager.go:1261] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-qtgxf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-57f55c9bc5-894n2_kube-system(46e4892a-d026-4a9d-88bc-128e92848808): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jan 16 03:55:39 default-k8s-diff-port-434445 kubelet[939]: E0116 03:55:39.259116     939 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-57f55c9bc5-894n2" podUID="46e4892a-d026-4a9d-88bc-128e92848808"
	Jan 16 03:55:52 default-k8s-diff-port-434445 kubelet[939]: E0116 03:55:52.239294     939 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-894n2" podUID="46e4892a-d026-4a9d-88bc-128e92848808"
	Jan 16 03:56:06 default-k8s-diff-port-434445 kubelet[939]: E0116 03:56:06.239219     939 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-894n2" podUID="46e4892a-d026-4a9d-88bc-128e92848808"
	Jan 16 03:56:19 default-k8s-diff-port-434445 kubelet[939]: E0116 03:56:19.240941     939 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-894n2" podUID="46e4892a-d026-4a9d-88bc-128e92848808"
	Jan 16 03:56:31 default-k8s-diff-port-434445 kubelet[939]: E0116 03:56:31.277662     939 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 16 03:56:31 default-k8s-diff-port-434445 kubelet[939]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 16 03:56:31 default-k8s-diff-port-434445 kubelet[939]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 16 03:56:31 default-k8s-diff-port-434445 kubelet[939]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 16 03:56:33 default-k8s-diff-port-434445 kubelet[939]: E0116 03:56:33.239400     939 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-894n2" podUID="46e4892a-d026-4a9d-88bc-128e92848808"
	Jan 16 03:56:46 default-k8s-diff-port-434445 kubelet[939]: E0116 03:56:46.238622     939 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-894n2" podUID="46e4892a-d026-4a9d-88bc-128e92848808"
	Jan 16 03:57:00 default-k8s-diff-port-434445 kubelet[939]: E0116 03:57:00.239993     939 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-894n2" podUID="46e4892a-d026-4a9d-88bc-128e92848808"
	Jan 16 03:57:12 default-k8s-diff-port-434445 kubelet[939]: E0116 03:57:12.239498     939 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-894n2" podUID="46e4892a-d026-4a9d-88bc-128e92848808"
	Jan 16 03:57:24 default-k8s-diff-port-434445 kubelet[939]: E0116 03:57:24.239615     939 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-894n2" podUID="46e4892a-d026-4a9d-88bc-128e92848808"
	Jan 16 03:57:31 default-k8s-diff-port-434445 kubelet[939]: E0116 03:57:31.277155     939 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 16 03:57:31 default-k8s-diff-port-434445 kubelet[939]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 16 03:57:31 default-k8s-diff-port-434445 kubelet[939]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 16 03:57:31 default-k8s-diff-port-434445 kubelet[939]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 16 03:57:38 default-k8s-diff-port-434445 kubelet[939]: E0116 03:57:38.239695     939 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-894n2" podUID="46e4892a-d026-4a9d-88bc-128e92848808"
	Jan 16 03:57:50 default-k8s-diff-port-434445 kubelet[939]: E0116 03:57:50.238899     939 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-894n2" podUID="46e4892a-d026-4a9d-88bc-128e92848808"
	Jan 16 03:58:01 default-k8s-diff-port-434445 kubelet[939]: E0116 03:58:01.240228     939 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-894n2" podUID="46e4892a-d026-4a9d-88bc-128e92848808"
	
	
	==> storage-provisioner [33ba3a03d878abc86a194defd715bb61a0066e49063e3823002bd3ec421da138] <==
	I0116 03:44:41.298709       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0116 03:44:41.311840       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0116 03:44:41.312012       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0116 03:44:58.807224       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0116 03:44:58.807595       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1bb524f8-0322-4186-a5b5-937d8bcb583c", APIVersion:"v1", ResourceVersion:"590", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-434445_c7c2d6f2-5b1f-4148-aca5-112744344eb7 became leader
	I0116 03:44:58.808818       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-434445_c7c2d6f2-5b1f-4148-aca5-112744344eb7!
	I0116 03:44:58.910191       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-434445_c7c2d6f2-5b1f-4148-aca5-112744344eb7!
	
	
	==> storage-provisioner [a4b27881ef90cea325e8f306fac88a76052eec15a693c291288664b8f4ebcc2a] <==
	I0116 03:44:39.707649       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0116 03:44:39.754443       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-434445 -n default-k8s-diff-port-434445
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-434445 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-894n2
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-434445 describe pod metrics-server-57f55c9bc5-894n2
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-434445 describe pod metrics-server-57f55c9bc5-894n2: exit status 1 (90.840387ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-894n2" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-434445 describe pod metrics-server-57f55c9bc5-894n2: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (543.54s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (543.53s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-615980 -n embed-certs-615980
start_stop_delete_test.go:274: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-01-16 03:58:59.816400385 +0000 UTC m=+5083.754623005
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-615980 -n embed-certs-615980
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-615980 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-615980 logs -n 25: (1.814703043s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p no-preload-666547                                   | no-preload-666547            | jenkins | v1.32.0 | 16 Jan 24 03:33 UTC | 16 Jan 24 03:35 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| ssh     | cert-options-977008 ssh                                | cert-options-977008          | jenkins | v1.32.0 | 16 Jan 24 03:34 UTC | 16 Jan 24 03:34 UTC |
	|         | openssl x509 -text -noout -in                          |                              |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                  |                              |         |         |                     |                     |
	| ssh     | -p cert-options-977008 -- sudo                         | cert-options-977008          | jenkins | v1.32.0 | 16 Jan 24 03:34 UTC | 16 Jan 24 03:34 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                              |         |         |                     |                     |
	| delete  | -p cert-options-977008                                 | cert-options-977008          | jenkins | v1.32.0 | 16 Jan 24 03:34 UTC | 16 Jan 24 03:34 UTC |
	| start   | -p embed-certs-615980                                  | embed-certs-615980           | jenkins | v1.32.0 | 16 Jan 24 03:34 UTC | 16 Jan 24 03:35 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-690771                              | cert-expiration-690771       | jenkins | v1.32.0 | 16 Jan 24 03:35 UTC | 16 Jan 24 03:36 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-615980            | embed-certs-615980           | jenkins | v1.32.0 | 16 Jan 24 03:35 UTC | 16 Jan 24 03:35 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-615980                                  | embed-certs-615980           | jenkins | v1.32.0 | 16 Jan 24 03:35 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-666547             | no-preload-666547            | jenkins | v1.32.0 | 16 Jan 24 03:35 UTC | 16 Jan 24 03:35 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-666547                                   | no-preload-666547            | jenkins | v1.32.0 | 16 Jan 24 03:35 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-696770        | old-k8s-version-696770       | jenkins | v1.32.0 | 16 Jan 24 03:36 UTC | 16 Jan 24 03:36 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-696770                              | old-k8s-version-696770       | jenkins | v1.32.0 | 16 Jan 24 03:36 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-690771                              | cert-expiration-690771       | jenkins | v1.32.0 | 16 Jan 24 03:36 UTC | 16 Jan 24 03:36 UTC |
	| delete  | -p                                                     | disable-driver-mounts-673948 | jenkins | v1.32.0 | 16 Jan 24 03:36 UTC | 16 Jan 24 03:36 UTC |
	|         | disable-driver-mounts-673948                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-434445 | jenkins | v1.32.0 | 16 Jan 24 03:36 UTC | 16 Jan 24 03:37 UTC |
	|         | default-k8s-diff-port-434445                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-434445  | default-k8s-diff-port-434445 | jenkins | v1.32.0 | 16 Jan 24 03:37 UTC | 16 Jan 24 03:37 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-434445 | jenkins | v1.32.0 | 16 Jan 24 03:37 UTC |                     |
	|         | default-k8s-diff-port-434445                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-615980                 | embed-certs-615980           | jenkins | v1.32.0 | 16 Jan 24 03:38 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-666547                  | no-preload-666547            | jenkins | v1.32.0 | 16 Jan 24 03:38 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-615980                                  | embed-certs-615980           | jenkins | v1.32.0 | 16 Jan 24 03:38 UTC | 16 Jan 24 03:49 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| start   | -p no-preload-666547                                   | no-preload-666547            | jenkins | v1.32.0 | 16 Jan 24 03:38 UTC | 16 Jan 24 03:48 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-696770             | old-k8s-version-696770       | jenkins | v1.32.0 | 16 Jan 24 03:38 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-696770                              | old-k8s-version-696770       | jenkins | v1.32.0 | 16 Jan 24 03:38 UTC | 16 Jan 24 03:51 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-434445       | default-k8s-diff-port-434445 | jenkins | v1.32.0 | 16 Jan 24 03:40 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-434445 | jenkins | v1.32.0 | 16 Jan 24 03:40 UTC | 16 Jan 24 03:49 UTC |
	|         | default-k8s-diff-port-434445                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/16 03:40:16
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0116 03:40:16.605622  507889 out.go:296] Setting OutFile to fd 1 ...
	I0116 03:40:16.605883  507889 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 03:40:16.605892  507889 out.go:309] Setting ErrFile to fd 2...
	I0116 03:40:16.605897  507889 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 03:40:16.606102  507889 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17965-468241/.minikube/bin
	I0116 03:40:16.606721  507889 out.go:303] Setting JSON to false
	I0116 03:40:16.607781  507889 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":15769,"bootTime":1705360648,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1048-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0116 03:40:16.607865  507889 start.go:138] virtualization: kvm guest
	I0116 03:40:16.610269  507889 out.go:177] * [default-k8s-diff-port-434445] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0116 03:40:16.611862  507889 out.go:177]   - MINIKUBE_LOCATION=17965
	I0116 03:40:16.611954  507889 notify.go:220] Checking for updates...
	I0116 03:40:16.613306  507889 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0116 03:40:16.615094  507889 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17965-468241/kubeconfig
	I0116 03:40:16.617044  507889 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17965-468241/.minikube
	I0116 03:40:16.618932  507889 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0116 03:40:16.621159  507889 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0116 03:40:16.623616  507889 config.go:182] Loaded profile config "default-k8s-diff-port-434445": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 03:40:16.624273  507889 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:40:16.624363  507889 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:40:16.640065  507889 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45633
	I0116 03:40:16.640642  507889 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:40:16.641273  507889 main.go:141] libmachine: Using API Version  1
	I0116 03:40:16.641301  507889 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:40:16.641696  507889 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:40:16.641901  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .DriverName
	I0116 03:40:16.642227  507889 driver.go:392] Setting default libvirt URI to qemu:///system
	I0116 03:40:16.642599  507889 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:40:16.642684  507889 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:40:16.658198  507889 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41877
	I0116 03:40:16.658657  507889 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:40:16.659207  507889 main.go:141] libmachine: Using API Version  1
	I0116 03:40:16.659233  507889 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:40:16.659588  507889 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:40:16.659844  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .DriverName
	I0116 03:40:16.698770  507889 out.go:177] * Using the kvm2 driver based on existing profile
	I0116 03:40:16.700307  507889 start.go:298] selected driver: kvm2
	I0116 03:40:16.700330  507889 start.go:902] validating driver "kvm2" against &{Name:default-k8s-diff-port-434445 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuber
netesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-434445 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.50.236 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRe
quested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 03:40:16.700478  507889 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0116 03:40:16.701296  507889 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0116 03:40:16.701389  507889 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17965-468241/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0116 03:40:16.717988  507889 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0116 03:40:16.718426  507889 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0116 03:40:16.718515  507889 cni.go:84] Creating CNI manager for ""
	I0116 03:40:16.718532  507889 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 03:40:16.718547  507889 start_flags.go:321] config:
	{Name:default-k8s-diff-port-434445 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-43444
5 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.50.236 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/
home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 03:40:16.718765  507889 iso.go:125] acquiring lock: {Name:mk2f3231f0eeeb23816ac363851489181398d1c6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0116 03:40:16.721292  507889 out.go:177] * Starting control plane node default-k8s-diff-port-434445 in cluster default-k8s-diff-port-434445
	I0116 03:40:16.722858  507889 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0116 03:40:16.722928  507889 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17965-468241/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0116 03:40:16.722942  507889 cache.go:56] Caching tarball of preloaded images
	I0116 03:40:16.723044  507889 preload.go:174] Found /home/jenkins/minikube-integration/17965-468241/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0116 03:40:16.723057  507889 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0116 03:40:16.723243  507889 profile.go:148] Saving config to /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/default-k8s-diff-port-434445/config.json ...
	I0116 03:40:16.723502  507889 start.go:365] acquiring machines lock for default-k8s-diff-port-434445: {Name:mk901e5fae8c1a578d989b520053c709bdbdcf06 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
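	The machines-lock entries in this log ("acquiring machines lock ... Delay:500ms Timeout:13m0s" here, and "acquired machines lock ... in 4m33s" further down) amount to a poll-until-timeout pattern: try to take the lock, sleep for the configured delay, and give up once the overall timeout elapses. The Go sketch below only illustrates that pattern with the values printed above; it is not minikube's lock implementation, and the function names and the simulated "lock frees up after ~2s" condition are invented for the example.

	// lockwait.go: generic poll-until-timeout illustration (not minikube code).
	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// acquire calls try() repeatedly, sleeping `delay` between attempts,
	// until it succeeds or `timeout` has elapsed.
	func acquire(try func() bool, delay, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			if try() {
				return nil
			}
			if time.Now().After(deadline) {
				return errors.New("timed out waiting for lock")
			}
			time.Sleep(delay)
		}
	}

	func main() {
		start := time.Now()
		// Pretend the lock becomes free after roughly two seconds.
		err := acquire(func() bool { return time.Since(start) > 2*time.Second },
			500*time.Millisecond, 13*time.Minute)
		fmt.Println("acquired:", err == nil, "after", time.Since(start).Round(time.Millisecond))
	}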
	I0116 03:40:22.140399  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:40:25.212385  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:40:31.292386  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:40:34.364375  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:40:40.444398  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:40:43.516372  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:40:49.596388  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:40:52.668394  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:40:58.748342  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:41:01.820436  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:41:07.900338  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:41:10.972410  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:41:17.052384  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:41:20.124427  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:41:26.204371  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:41:29.276361  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:41:35.356391  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:41:38.428383  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:41:44.508324  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:41:47.580377  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:41:53.660360  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:41:56.732377  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:42:02.812345  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:42:05.884406  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:42:11.964398  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:42:15.036469  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:42:21.116391  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:42:24.188397  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:42:30.268400  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:42:33.340416  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:42:39.420405  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:42:42.492396  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:42:48.572396  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:42:51.644367  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:42:57.724419  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:43:00.796427  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:43:03.800669  507339 start.go:369] acquired machines lock for "no-preload-666547" in 4m33.073406767s
	I0116 03:43:03.800732  507339 start.go:96] Skipping create...Using existing machine configuration
	I0116 03:43:03.800744  507339 fix.go:54] fixHost starting: 
	I0116 03:43:03.801330  507339 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:43:03.801381  507339 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:43:03.817309  507339 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46189
	I0116 03:43:03.817841  507339 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:43:03.818376  507339 main.go:141] libmachine: Using API Version  1
	I0116 03:43:03.818403  507339 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:43:03.818801  507339 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:43:03.819049  507339 main.go:141] libmachine: (no-preload-666547) Calling .DriverName
	I0116 03:43:03.819206  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetState
	I0116 03:43:03.821006  507339 fix.go:102] recreateIfNeeded on no-preload-666547: state=Stopped err=<nil>
	I0116 03:43:03.821031  507339 main.go:141] libmachine: (no-preload-666547) Calling .DriverName
	W0116 03:43:03.821210  507339 fix.go:128] unexpected machine state, will restart: <nil>
	I0116 03:43:03.823341  507339 out.go:177] * Restarting existing kvm2 VM for "no-preload-666547" ...
	I0116 03:43:03.824887  507339 main.go:141] libmachine: (no-preload-666547) Calling .Start
	I0116 03:43:03.825070  507339 main.go:141] libmachine: (no-preload-666547) Ensuring networks are active...
	I0116 03:43:03.825806  507339 main.go:141] libmachine: (no-preload-666547) Ensuring network default is active
	I0116 03:43:03.826151  507339 main.go:141] libmachine: (no-preload-666547) Ensuring network mk-no-preload-666547 is active
	I0116 03:43:03.826549  507339 main.go:141] libmachine: (no-preload-666547) Getting domain xml...
	I0116 03:43:03.827209  507339 main.go:141] libmachine: (no-preload-666547) Creating domain...
	I0116 03:43:04.166757  507339 main.go:141] libmachine: (no-preload-666547) Waiting to get IP...
	I0116 03:43:04.167846  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:04.168294  507339 main.go:141] libmachine: (no-preload-666547) DBG | unable to find current IP address of domain no-preload-666547 in network mk-no-preload-666547
	I0116 03:43:04.168400  507339 main.go:141] libmachine: (no-preload-666547) DBG | I0116 03:43:04.168281  508330 retry.go:31] will retry after 236.684347ms: waiting for machine to come up
	I0116 03:43:04.407068  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:04.407590  507339 main.go:141] libmachine: (no-preload-666547) DBG | unable to find current IP address of domain no-preload-666547 in network mk-no-preload-666547
	I0116 03:43:04.407626  507339 main.go:141] libmachine: (no-preload-666547) DBG | I0116 03:43:04.407520  508330 retry.go:31] will retry after 273.512454ms: waiting for machine to come up
	I0116 03:43:04.683173  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:04.683724  507339 main.go:141] libmachine: (no-preload-666547) DBG | unable to find current IP address of domain no-preload-666547 in network mk-no-preload-666547
	I0116 03:43:04.683759  507339 main.go:141] libmachine: (no-preload-666547) DBG | I0116 03:43:04.683652  508330 retry.go:31] will retry after 404.396132ms: waiting for machine to come up
	I0116 03:43:05.089306  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:05.089659  507339 main.go:141] libmachine: (no-preload-666547) DBG | unable to find current IP address of domain no-preload-666547 in network mk-no-preload-666547
	I0116 03:43:05.089687  507339 main.go:141] libmachine: (no-preload-666547) DBG | I0116 03:43:05.089612  508330 retry.go:31] will retry after 373.291662ms: waiting for machine to come up
	I0116 03:43:05.464216  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:05.464745  507339 main.go:141] libmachine: (no-preload-666547) DBG | unable to find current IP address of domain no-preload-666547 in network mk-no-preload-666547
	I0116 03:43:05.464772  507339 main.go:141] libmachine: (no-preload-666547) DBG | I0116 03:43:05.464696  508330 retry.go:31] will retry after 509.048348ms: waiting for machine to come up
	I0116 03:43:03.798483  507257 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0116 03:43:03.798553  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHHostname
	I0116 03:43:03.800507  507257 machine.go:91] provisioned docker machine in 4m37.39429533s
	I0116 03:43:03.800559  507257 fix.go:56] fixHost completed within 4m37.41769564s
	I0116 03:43:03.800568  507257 start.go:83] releasing machines lock for "embed-certs-615980", held for 4m37.417718822s
	W0116 03:43:03.800599  507257 start.go:694] error starting host: provision: host is not running
	W0116 03:43:03.800747  507257 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0116 03:43:03.800759  507257 start.go:709] Will try again in 5 seconds ...
	I0116 03:43:05.975369  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:05.975831  507339 main.go:141] libmachine: (no-preload-666547) DBG | unable to find current IP address of domain no-preload-666547 in network mk-no-preload-666547
	I0116 03:43:05.975864  507339 main.go:141] libmachine: (no-preload-666547) DBG | I0116 03:43:05.975776  508330 retry.go:31] will retry after 631.077965ms: waiting for machine to come up
	I0116 03:43:06.608722  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:06.609133  507339 main.go:141] libmachine: (no-preload-666547) DBG | unable to find current IP address of domain no-preload-666547 in network mk-no-preload-666547
	I0116 03:43:06.609162  507339 main.go:141] libmachine: (no-preload-666547) DBG | I0116 03:43:06.609074  508330 retry.go:31] will retry after 1.047586363s: waiting for machine to come up
	I0116 03:43:07.658264  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:07.658645  507339 main.go:141] libmachine: (no-preload-666547) DBG | unable to find current IP address of domain no-preload-666547 in network mk-no-preload-666547
	I0116 03:43:07.658696  507339 main.go:141] libmachine: (no-preload-666547) DBG | I0116 03:43:07.658591  508330 retry.go:31] will retry after 1.038644854s: waiting for machine to come up
	I0116 03:43:08.698946  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:08.699384  507339 main.go:141] libmachine: (no-preload-666547) DBG | unable to find current IP address of domain no-preload-666547 in network mk-no-preload-666547
	I0116 03:43:08.699411  507339 main.go:141] libmachine: (no-preload-666547) DBG | I0116 03:43:08.699347  508330 retry.go:31] will retry after 1.362032973s: waiting for machine to come up
	I0116 03:43:10.063269  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:10.063764  507339 main.go:141] libmachine: (no-preload-666547) DBG | unable to find current IP address of domain no-preload-666547 in network mk-no-preload-666547
	I0116 03:43:10.063792  507339 main.go:141] libmachine: (no-preload-666547) DBG | I0116 03:43:10.063714  508330 retry.go:31] will retry after 1.432317286s: waiting for machine to come up
	I0116 03:43:08.802803  507257 start.go:365] acquiring machines lock for embed-certs-615980: {Name:mk901e5fae8c1a578d989b520053c709bdbdcf06 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0116 03:43:11.498235  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:11.498714  507339 main.go:141] libmachine: (no-preload-666547) DBG | unable to find current IP address of domain no-preload-666547 in network mk-no-preload-666547
	I0116 03:43:11.498748  507339 main.go:141] libmachine: (no-preload-666547) DBG | I0116 03:43:11.498650  508330 retry.go:31] will retry after 2.490630326s: waiting for machine to come up
	I0116 03:43:13.991256  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:13.991717  507339 main.go:141] libmachine: (no-preload-666547) DBG | unable to find current IP address of domain no-preload-666547 in network mk-no-preload-666547
	I0116 03:43:13.991752  507339 main.go:141] libmachine: (no-preload-666547) DBG | I0116 03:43:13.991662  508330 retry.go:31] will retry after 3.569049736s: waiting for machine to come up
	I0116 03:43:17.565524  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:17.565893  507339 main.go:141] libmachine: (no-preload-666547) DBG | unable to find current IP address of domain no-preload-666547 in network mk-no-preload-666547
	I0116 03:43:17.565916  507339 main.go:141] libmachine: (no-preload-666547) DBG | I0116 03:43:17.565850  508330 retry.go:31] will retry after 2.875259098s: waiting for machine to come up
	I0116 03:43:20.443998  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:20.444472  507339 main.go:141] libmachine: (no-preload-666547) DBG | unable to find current IP address of domain no-preload-666547 in network mk-no-preload-666547
	I0116 03:43:20.444495  507339 main.go:141] libmachine: (no-preload-666547) DBG | I0116 03:43:20.444438  508330 retry.go:31] will retry after 4.319647454s: waiting for machine to come up
	I0116 03:43:24.765311  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:24.765836  507339 main.go:141] libmachine: (no-preload-666547) Found IP for machine: 192.168.39.103
	I0116 03:43:24.765862  507339 main.go:141] libmachine: (no-preload-666547) Reserving static IP address...
	I0116 03:43:24.765879  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has current primary IP address 192.168.39.103 and MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:24.766413  507339 main.go:141] libmachine: (no-preload-666547) DBG | found host DHCP lease matching {name: "no-preload-666547", mac: "52:54:00:4e:5f:03", ip: "192.168.39.103"} in network mk-no-preload-666547: {Iface:virbr2 ExpiryTime:2024-01-16 04:43:15 +0000 UTC Type:0 Mac:52:54:00:4e:5f:03 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:no-preload-666547 Clientid:01:52:54:00:4e:5f:03}
	I0116 03:43:24.766543  507339 main.go:141] libmachine: (no-preload-666547) Reserved static IP address: 192.168.39.103
	I0116 03:43:24.766575  507339 main.go:141] libmachine: (no-preload-666547) DBG | skip adding static IP to network mk-no-preload-666547 - found existing host DHCP lease matching {name: "no-preload-666547", mac: "52:54:00:4e:5f:03", ip: "192.168.39.103"}
	I0116 03:43:24.766593  507339 main.go:141] libmachine: (no-preload-666547) DBG | Getting to WaitForSSH function...
	I0116 03:43:24.766607  507339 main.go:141] libmachine: (no-preload-666547) Waiting for SSH to be available...
	I0116 03:43:24.768801  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:24.769145  507339 main.go:141] libmachine: (no-preload-666547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:5f:03", ip: ""} in network mk-no-preload-666547: {Iface:virbr2 ExpiryTime:2024-01-16 04:43:15 +0000 UTC Type:0 Mac:52:54:00:4e:5f:03 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:no-preload-666547 Clientid:01:52:54:00:4e:5f:03}
	I0116 03:43:24.769180  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined IP address 192.168.39.103 and MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:24.769392  507339 main.go:141] libmachine: (no-preload-666547) DBG | Using SSH client type: external
	I0116 03:43:24.769446  507339 main.go:141] libmachine: (no-preload-666547) DBG | Using SSH private key: /home/jenkins/minikube-integration/17965-468241/.minikube/machines/no-preload-666547/id_rsa (-rw-------)
	I0116 03:43:24.769490  507339 main.go:141] libmachine: (no-preload-666547) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.103 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17965-468241/.minikube/machines/no-preload-666547/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0116 03:43:24.769539  507339 main.go:141] libmachine: (no-preload-666547) DBG | About to run SSH command:
	I0116 03:43:24.769557  507339 main.go:141] libmachine: (no-preload-666547) DBG | exit 0
	I0116 03:43:24.860928  507339 main.go:141] libmachine: (no-preload-666547) DBG | SSH cmd err, output: <nil>: 
	I0116 03:43:24.861324  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetConfigRaw
	I0116 03:43:24.862217  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetIP
	I0116 03:43:24.865100  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:24.865468  507339 main.go:141] libmachine: (no-preload-666547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:5f:03", ip: ""} in network mk-no-preload-666547: {Iface:virbr2 ExpiryTime:2024-01-16 04:43:15 +0000 UTC Type:0 Mac:52:54:00:4e:5f:03 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:no-preload-666547 Clientid:01:52:54:00:4e:5f:03}
	I0116 03:43:24.865503  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined IP address 192.168.39.103 and MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:24.865804  507339 profile.go:148] Saving config to /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/no-preload-666547/config.json ...
	I0116 03:43:24.866064  507339 machine.go:88] provisioning docker machine ...
	I0116 03:43:24.866091  507339 main.go:141] libmachine: (no-preload-666547) Calling .DriverName
	I0116 03:43:24.866374  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetMachineName
	I0116 03:43:24.866590  507339 buildroot.go:166] provisioning hostname "no-preload-666547"
	I0116 03:43:24.866613  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetMachineName
	I0116 03:43:24.866795  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHHostname
	I0116 03:43:24.869231  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:24.869587  507339 main.go:141] libmachine: (no-preload-666547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:5f:03", ip: ""} in network mk-no-preload-666547: {Iface:virbr2 ExpiryTime:2024-01-16 04:43:15 +0000 UTC Type:0 Mac:52:54:00:4e:5f:03 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:no-preload-666547 Clientid:01:52:54:00:4e:5f:03}
	I0116 03:43:24.869623  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined IP address 192.168.39.103 and MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:24.869778  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHPort
	I0116 03:43:24.870002  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHKeyPath
	I0116 03:43:24.870168  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHKeyPath
	I0116 03:43:24.870304  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHUsername
	I0116 03:43:24.870455  507339 main.go:141] libmachine: Using SSH client type: native
	I0116 03:43:24.870929  507339 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.103 22 <nil> <nil>}
	I0116 03:43:24.870949  507339 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-666547 && echo "no-preload-666547" | sudo tee /etc/hostname
	I0116 03:43:25.005390  507339 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-666547
	
	I0116 03:43:25.005425  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHHostname
	I0116 03:43:25.008410  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:25.008801  507339 main.go:141] libmachine: (no-preload-666547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:5f:03", ip: ""} in network mk-no-preload-666547: {Iface:virbr2 ExpiryTime:2024-01-16 04:43:15 +0000 UTC Type:0 Mac:52:54:00:4e:5f:03 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:no-preload-666547 Clientid:01:52:54:00:4e:5f:03}
	I0116 03:43:25.008836  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined IP address 192.168.39.103 and MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:25.009007  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHPort
	I0116 03:43:25.009269  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHKeyPath
	I0116 03:43:25.009432  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHKeyPath
	I0116 03:43:25.009561  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHUsername
	I0116 03:43:25.009722  507339 main.go:141] libmachine: Using SSH client type: native
	I0116 03:43:25.010051  507339 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.103 22 <nil> <nil>}
	I0116 03:43:25.010071  507339 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-666547' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-666547/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-666547' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0116 03:43:25.142889  507339 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0116 03:43:25.142928  507339 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17965-468241/.minikube CaCertPath:/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17965-468241/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17965-468241/.minikube}
	I0116 03:43:25.142950  507339 buildroot.go:174] setting up certificates
	I0116 03:43:25.142963  507339 provision.go:83] configureAuth start
	I0116 03:43:25.142973  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetMachineName
	I0116 03:43:25.143294  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetIP
	I0116 03:43:25.146355  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:25.146746  507339 main.go:141] libmachine: (no-preload-666547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:5f:03", ip: ""} in network mk-no-preload-666547: {Iface:virbr2 ExpiryTime:2024-01-16 04:43:15 +0000 UTC Type:0 Mac:52:54:00:4e:5f:03 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:no-preload-666547 Clientid:01:52:54:00:4e:5f:03}
	I0116 03:43:25.146767  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined IP address 192.168.39.103 and MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:25.147063  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHHostname
	I0116 03:43:25.149867  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:25.150231  507339 main.go:141] libmachine: (no-preload-666547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:5f:03", ip: ""} in network mk-no-preload-666547: {Iface:virbr2 ExpiryTime:2024-01-16 04:43:15 +0000 UTC Type:0 Mac:52:54:00:4e:5f:03 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:no-preload-666547 Clientid:01:52:54:00:4e:5f:03}
	I0116 03:43:25.150260  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined IP address 192.168.39.103 and MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:25.150448  507339 provision.go:138] copyHostCerts
	I0116 03:43:25.150531  507339 exec_runner.go:144] found /home/jenkins/minikube-integration/17965-468241/.minikube/cert.pem, removing ...
	I0116 03:43:25.150543  507339 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17965-468241/.minikube/cert.pem
	I0116 03:43:25.150618  507339 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17965-468241/.minikube/cert.pem (1123 bytes)
	I0116 03:43:25.150719  507339 exec_runner.go:144] found /home/jenkins/minikube-integration/17965-468241/.minikube/key.pem, removing ...
	I0116 03:43:25.150729  507339 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17965-468241/.minikube/key.pem
	I0116 03:43:25.150755  507339 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17965-468241/.minikube/key.pem (1679 bytes)
	I0116 03:43:25.150815  507339 exec_runner.go:144] found /home/jenkins/minikube-integration/17965-468241/.minikube/ca.pem, removing ...
	I0116 03:43:25.150823  507339 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17965-468241/.minikube/ca.pem
	I0116 03:43:25.150843  507339 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17965-468241/.minikube/ca.pem (1078 bytes)
	I0116 03:43:25.150888  507339 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17965-468241/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca-key.pem org=jenkins.no-preload-666547 san=[192.168.39.103 192.168.39.103 localhost 127.0.0.1 minikube no-preload-666547]
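	The "generating server cert ... san=[...]" line above corresponds to issuing a CA-signed serving certificate whose subject alternative names cover the VM IP, localhost, and the profile name. The Go sketch below shows that step using only the standard library; it is not minikube's implementation, it creates a throwaway CA instead of reusing the ca.pem/ca-key.pem paths shown in the log, and all names in it are illustrative.

	// certsketch.go: issue a CA-signed server certificate with IP and DNS SANs.
	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Throwaway self-signed CA (minikube reuses its existing CA material instead).
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		// Server certificate carrying the SANs seen in the log line above.
		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-666547"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"localhost", "minikube", "no-preload-666547"},
			IPAddresses:  []net.IP{net.ParseIP("192.168.39.103"), net.ParseIP("127.0.0.1")},
		}
		srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
	}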
	I0116 03:43:25.417982  507339 provision.go:172] copyRemoteCerts
	I0116 03:43:25.418059  507339 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0116 03:43:25.418088  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHHostname
	I0116 03:43:25.420908  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:25.421196  507339 main.go:141] libmachine: (no-preload-666547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:5f:03", ip: ""} in network mk-no-preload-666547: {Iface:virbr2 ExpiryTime:2024-01-16 04:43:15 +0000 UTC Type:0 Mac:52:54:00:4e:5f:03 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:no-preload-666547 Clientid:01:52:54:00:4e:5f:03}
	I0116 03:43:25.421235  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined IP address 192.168.39.103 and MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:25.421372  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHPort
	I0116 03:43:25.421609  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHKeyPath
	I0116 03:43:25.421782  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHUsername
	I0116 03:43:25.421952  507339 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/no-preload-666547/id_rsa Username:docker}
	I0116 03:43:25.509876  507339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0116 03:43:25.534885  507339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0116 03:43:25.562593  507339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0116 03:43:25.590106  507339 provision.go:86] duration metric: configureAuth took 447.124389ms
	I0116 03:43:25.590145  507339 buildroot.go:189] setting minikube options for container-runtime
	I0116 03:43:25.590386  507339 config.go:182] Loaded profile config "no-preload-666547": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0116 03:43:25.590475  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHHostname
	I0116 03:43:25.593695  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:25.594125  507339 main.go:141] libmachine: (no-preload-666547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:5f:03", ip: ""} in network mk-no-preload-666547: {Iface:virbr2 ExpiryTime:2024-01-16 04:43:15 +0000 UTC Type:0 Mac:52:54:00:4e:5f:03 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:no-preload-666547 Clientid:01:52:54:00:4e:5f:03}
	I0116 03:43:25.594182  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined IP address 192.168.39.103 and MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:25.594407  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHPort
	I0116 03:43:25.594661  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHKeyPath
	I0116 03:43:25.594851  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHKeyPath
	I0116 03:43:25.595124  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHUsername
	I0116 03:43:25.595362  507339 main.go:141] libmachine: Using SSH client type: native
	I0116 03:43:25.595735  507339 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.103 22 <nil> <nil>}
	I0116 03:43:25.595753  507339 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0116 03:43:26.177541  507510 start.go:369] acquired machines lock for "old-k8s-version-696770" in 4m36.503560035s
	I0116 03:43:26.177612  507510 start.go:96] Skipping create...Using existing machine configuration
	I0116 03:43:26.177621  507510 fix.go:54] fixHost starting: 
	I0116 03:43:26.178073  507510 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:43:26.178115  507510 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:43:26.194930  507510 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35119
	I0116 03:43:26.195420  507510 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:43:26.195898  507510 main.go:141] libmachine: Using API Version  1
	I0116 03:43:26.195925  507510 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:43:26.196303  507510 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:43:26.196517  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .DriverName
	I0116 03:43:26.196797  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetState
	I0116 03:43:26.198728  507510 fix.go:102] recreateIfNeeded on old-k8s-version-696770: state=Stopped err=<nil>
	I0116 03:43:26.198759  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .DriverName
	W0116 03:43:26.198959  507510 fix.go:128] unexpected machine state, will restart: <nil>
	I0116 03:43:26.201929  507510 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-696770" ...
	I0116 03:43:25.916953  507339 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0116 03:43:25.916987  507339 machine.go:91] provisioned docker machine in 1.05090319s
	I0116 03:43:25.917013  507339 start.go:300] post-start starting for "no-preload-666547" (driver="kvm2")
	I0116 03:43:25.917045  507339 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0116 03:43:25.917070  507339 main.go:141] libmachine: (no-preload-666547) Calling .DriverName
	I0116 03:43:25.917472  507339 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0116 03:43:25.917510  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHHostname
	I0116 03:43:25.920700  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:25.921097  507339 main.go:141] libmachine: (no-preload-666547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:5f:03", ip: ""} in network mk-no-preload-666547: {Iface:virbr2 ExpiryTime:2024-01-16 04:43:15 +0000 UTC Type:0 Mac:52:54:00:4e:5f:03 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:no-preload-666547 Clientid:01:52:54:00:4e:5f:03}
	I0116 03:43:25.921132  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined IP address 192.168.39.103 and MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:25.921386  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHPort
	I0116 03:43:25.921663  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHKeyPath
	I0116 03:43:25.921877  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHUsername
	I0116 03:43:25.922086  507339 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/no-preload-666547/id_rsa Username:docker}
	I0116 03:43:26.011987  507339 ssh_runner.go:195] Run: cat /etc/os-release
	I0116 03:43:26.016777  507339 info.go:137] Remote host: Buildroot 2021.02.12
	I0116 03:43:26.016813  507339 filesync.go:126] Scanning /home/jenkins/minikube-integration/17965-468241/.minikube/addons for local assets ...
	I0116 03:43:26.016889  507339 filesync.go:126] Scanning /home/jenkins/minikube-integration/17965-468241/.minikube/files for local assets ...
	I0116 03:43:26.016985  507339 filesync.go:149] local asset: /home/jenkins/minikube-integration/17965-468241/.minikube/files/etc/ssl/certs/4754782.pem -> 4754782.pem in /etc/ssl/certs
	I0116 03:43:26.017109  507339 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0116 03:43:26.027303  507339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/files/etc/ssl/certs/4754782.pem --> /etc/ssl/certs/4754782.pem (1708 bytes)
	I0116 03:43:26.051806  507339 start.go:303] post-start completed in 134.758948ms
	I0116 03:43:26.051849  507339 fix.go:56] fixHost completed within 22.25110408s
	I0116 03:43:26.051881  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHHostname
	I0116 03:43:26.055165  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:26.055568  507339 main.go:141] libmachine: (no-preload-666547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:5f:03", ip: ""} in network mk-no-preload-666547: {Iface:virbr2 ExpiryTime:2024-01-16 04:43:15 +0000 UTC Type:0 Mac:52:54:00:4e:5f:03 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:no-preload-666547 Clientid:01:52:54:00:4e:5f:03}
	I0116 03:43:26.055605  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined IP address 192.168.39.103 and MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:26.055763  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHPort
	I0116 03:43:26.055983  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHKeyPath
	I0116 03:43:26.056222  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHKeyPath
	I0116 03:43:26.056407  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHUsername
	I0116 03:43:26.056579  507339 main.go:141] libmachine: Using SSH client type: native
	I0116 03:43:26.056930  507339 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.103 22 <nil> <nil>}
	I0116 03:43:26.056948  507339 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0116 03:43:26.177329  507339 main.go:141] libmachine: SSH cmd err, output: <nil>: 1705376606.122912048
	
	I0116 03:43:26.177360  507339 fix.go:206] guest clock: 1705376606.122912048
	I0116 03:43:26.177367  507339 fix.go:219] Guest: 2024-01-16 03:43:26.122912048 +0000 UTC Remote: 2024-01-16 03:43:26.051855053 +0000 UTC m=+295.486361610 (delta=71.056995ms)
	I0116 03:43:26.177424  507339 fix.go:190] guest clock delta is within tolerance: 71.056995ms
	I0116 03:43:26.177430  507339 start.go:83] releasing machines lock for "no-preload-666547", held for 22.376720152s
	I0116 03:43:26.177461  507339 main.go:141] libmachine: (no-preload-666547) Calling .DriverName
	I0116 03:43:26.177761  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetIP
	I0116 03:43:26.180783  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:26.181087  507339 main.go:141] libmachine: (no-preload-666547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:5f:03", ip: ""} in network mk-no-preload-666547: {Iface:virbr2 ExpiryTime:2024-01-16 04:43:15 +0000 UTC Type:0 Mac:52:54:00:4e:5f:03 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:no-preload-666547 Clientid:01:52:54:00:4e:5f:03}
	I0116 03:43:26.181117  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined IP address 192.168.39.103 and MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:26.181281  507339 main.go:141] libmachine: (no-preload-666547) Calling .DriverName
	I0116 03:43:26.181876  507339 main.go:141] libmachine: (no-preload-666547) Calling .DriverName
	I0116 03:43:26.182068  507339 main.go:141] libmachine: (no-preload-666547) Calling .DriverName
	I0116 03:43:26.182154  507339 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0116 03:43:26.182203  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHHostname
	I0116 03:43:26.182337  507339 ssh_runner.go:195] Run: cat /version.json
	I0116 03:43:26.182366  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHHostname
	I0116 03:43:26.185253  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:26.185403  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:26.185625  507339 main.go:141] libmachine: (no-preload-666547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:5f:03", ip: ""} in network mk-no-preload-666547: {Iface:virbr2 ExpiryTime:2024-01-16 04:43:15 +0000 UTC Type:0 Mac:52:54:00:4e:5f:03 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:no-preload-666547 Clientid:01:52:54:00:4e:5f:03}
	I0116 03:43:26.185655  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined IP address 192.168.39.103 and MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:26.185807  507339 main.go:141] libmachine: (no-preload-666547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:5f:03", ip: ""} in network mk-no-preload-666547: {Iface:virbr2 ExpiryTime:2024-01-16 04:43:15 +0000 UTC Type:0 Mac:52:54:00:4e:5f:03 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:no-preload-666547 Clientid:01:52:54:00:4e:5f:03}
	I0116 03:43:26.185816  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHPort
	I0116 03:43:26.185855  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined IP address 192.168.39.103 and MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:26.185966  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHPort
	I0116 03:43:26.186041  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHKeyPath
	I0116 03:43:26.186137  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHKeyPath
	I0116 03:43:26.186220  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHUsername
	I0116 03:43:26.186306  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHUsername
	I0116 03:43:26.186383  507339 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/no-preload-666547/id_rsa Username:docker}
	I0116 03:43:26.186428  507339 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/no-preload-666547/id_rsa Username:docker}
	I0116 03:43:26.312441  507339 ssh_runner.go:195] Run: systemctl --version
	I0116 03:43:26.319016  507339 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0116 03:43:26.469427  507339 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0116 03:43:26.475759  507339 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0116 03:43:26.475896  507339 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0116 03:43:26.491920  507339 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0116 03:43:26.491952  507339 start.go:475] detecting cgroup driver to use...
	I0116 03:43:26.492112  507339 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0116 03:43:26.508122  507339 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0116 03:43:26.523664  507339 docker.go:217] disabling cri-docker service (if available) ...
	I0116 03:43:26.523754  507339 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0116 03:43:26.540173  507339 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0116 03:43:26.557370  507339 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0116 03:43:26.685134  507339 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0116 03:43:26.806555  507339 docker.go:233] disabling docker service ...
	I0116 03:43:26.806640  507339 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0116 03:43:26.821910  507339 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0116 03:43:26.836619  507339 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0116 03:43:26.950601  507339 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0116 03:43:27.077586  507339 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0116 03:43:27.091892  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0116 03:43:27.111772  507339 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0116 03:43:27.111856  507339 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 03:43:27.122183  507339 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0116 03:43:27.122261  507339 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 03:43:27.132861  507339 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 03:43:27.144003  507339 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 03:43:27.154747  507339 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0116 03:43:27.166236  507339 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0116 03:43:27.175337  507339 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0116 03:43:27.175410  507339 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0116 03:43:27.190891  507339 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0116 03:43:27.201216  507339 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0116 03:43:27.322701  507339 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0116 03:43:27.504197  507339 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0116 03:43:27.504292  507339 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0116 03:43:27.509879  507339 start.go:543] Will wait 60s for crictl version
	I0116 03:43:27.509972  507339 ssh_runner.go:195] Run: which crictl
	I0116 03:43:27.514555  507339 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0116 03:43:27.556338  507339 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0116 03:43:27.556444  507339 ssh_runner.go:195] Run: crio --version
	I0116 03:43:27.615814  507339 ssh_runner.go:195] Run: crio --version
	I0116 03:43:27.666262  507339 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.24.1 ...
	I0116 03:43:26.203694  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .Start
	I0116 03:43:26.203950  507510 main.go:141] libmachine: (old-k8s-version-696770) Ensuring networks are active...
	I0116 03:43:26.204831  507510 main.go:141] libmachine: (old-k8s-version-696770) Ensuring network default is active
	I0116 03:43:26.205251  507510 main.go:141] libmachine: (old-k8s-version-696770) Ensuring network mk-old-k8s-version-696770 is active
	I0116 03:43:26.205763  507510 main.go:141] libmachine: (old-k8s-version-696770) Getting domain xml...
	I0116 03:43:26.206485  507510 main.go:141] libmachine: (old-k8s-version-696770) Creating domain...
	I0116 03:43:26.558284  507510 main.go:141] libmachine: (old-k8s-version-696770) Waiting to get IP...
	I0116 03:43:26.559270  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:26.559701  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | unable to find current IP address of domain old-k8s-version-696770 in network mk-old-k8s-version-696770
	I0116 03:43:26.559793  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | I0116 03:43:26.559692  508427 retry.go:31] will retry after 243.799089ms: waiting for machine to come up
	I0116 03:43:26.805411  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:26.805914  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | unable to find current IP address of domain old-k8s-version-696770 in network mk-old-k8s-version-696770
	I0116 03:43:26.805948  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | I0116 03:43:26.805846  508427 retry.go:31] will retry after 346.727587ms: waiting for machine to come up
	I0116 03:43:27.154528  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:27.155074  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | unable to find current IP address of domain old-k8s-version-696770 in network mk-old-k8s-version-696770
	I0116 03:43:27.155107  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | I0116 03:43:27.155023  508427 retry.go:31] will retry after 357.633471ms: waiting for machine to come up
	I0116 03:43:27.514870  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:27.515490  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | unable to find current IP address of domain old-k8s-version-696770 in network mk-old-k8s-version-696770
	I0116 03:43:27.515517  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | I0116 03:43:27.515452  508427 retry.go:31] will retry after 582.001218ms: waiting for machine to come up
	I0116 03:43:28.099271  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:28.099783  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | unable to find current IP address of domain old-k8s-version-696770 in network mk-old-k8s-version-696770
	I0116 03:43:28.099817  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | I0116 03:43:28.099735  508427 retry.go:31] will retry after 747.661188ms: waiting for machine to come up
	I0116 03:43:28.849318  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:28.849836  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | unable to find current IP address of domain old-k8s-version-696770 in network mk-old-k8s-version-696770
	I0116 03:43:28.849872  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | I0116 03:43:28.849799  508427 retry.go:31] will retry after 953.610286ms: waiting for machine to come up
	I0116 03:43:27.667889  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetIP
	I0116 03:43:27.671385  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:27.671804  507339 main.go:141] libmachine: (no-preload-666547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:5f:03", ip: ""} in network mk-no-preload-666547: {Iface:virbr2 ExpiryTime:2024-01-16 04:43:15 +0000 UTC Type:0 Mac:52:54:00:4e:5f:03 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:no-preload-666547 Clientid:01:52:54:00:4e:5f:03}
	I0116 03:43:27.671840  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined IP address 192.168.39.103 and MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:27.672113  507339 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0116 03:43:27.676693  507339 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 03:43:27.690701  507339 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0116 03:43:27.690748  507339 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 03:43:27.731189  507339 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I0116 03:43:27.731219  507339 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.2 registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 registry.k8s.io/kube-scheduler:v1.29.0-rc.2 registry.k8s.io/kube-proxy:v1.29.0-rc.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0116 03:43:27.731321  507339 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 03:43:27.731358  507339 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I0116 03:43:27.731370  507339 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0116 03:43:27.731404  507339 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0116 03:43:27.731441  507339 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0116 03:43:27.731352  507339 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0116 03:43:27.731322  507339 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0116 03:43:27.731322  507339 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0116 03:43:27.733105  507339 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0116 03:43:27.733119  507339 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 03:43:27.733171  507339 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I0116 03:43:27.733171  507339 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0116 03:43:27.733110  507339 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0116 03:43:27.733118  507339 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0116 03:43:27.733113  507339 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0116 03:43:27.733270  507339 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0116 03:43:27.900005  507339 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I0116 03:43:27.901232  507339 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0116 03:43:27.903964  507339 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0116 03:43:27.907543  507339 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0116 03:43:27.908417  507339 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0116 03:43:27.909137  507339 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0116 03:43:27.953586  507339 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0116 03:43:28.024252  507339 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I0116 03:43:28.024310  507339 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I0116 03:43:28.024366  507339 ssh_runner.go:195] Run: which crictl
	I0116 03:43:28.042716  507339 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 03:43:28.078379  507339 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" does not exist at hash "d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d" in container runtime
	I0116 03:43:28.078438  507339 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0116 03:43:28.078503  507339 ssh_runner.go:195] Run: which crictl
	I0116 03:43:28.179590  507339 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" does not exist at hash "4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210" in container runtime
	I0116 03:43:28.179612  507339 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" does not exist at hash "bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f" in container runtime
	I0116 03:43:28.179661  507339 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0116 03:43:28.179661  507339 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0116 03:43:28.179720  507339 ssh_runner.go:195] Run: which crictl
	I0116 03:43:28.179722  507339 ssh_runner.go:195] Run: which crictl
	I0116 03:43:28.179729  507339 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0116 03:43:28.179750  507339 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0116 03:43:28.179785  507339 ssh_runner.go:195] Run: which crictl
	I0116 03:43:28.179804  507339 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.2" does not exist at hash "cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834" in container runtime
	I0116 03:43:28.179865  507339 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0116 03:43:28.179906  507339 ssh_runner.go:195] Run: which crictl
	I0116 03:43:28.179812  507339 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I0116 03:43:28.179950  507339 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0116 03:43:28.179977  507339 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 03:43:28.180011  507339 ssh_runner.go:195] Run: which crictl
	I0116 03:43:28.180009  507339 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0116 03:43:28.196999  507339 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0116 03:43:28.197021  507339 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0116 03:43:28.197157  507339 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0116 03:43:28.305002  507339 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0116 03:43:28.305117  507339 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17965-468241/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I0116 03:43:28.305044  507339 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 03:43:28.305231  507339 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.10-0
	I0116 03:43:28.317016  507339 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17965-468241/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2
	I0116 03:43:28.317149  507339 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0116 03:43:28.346291  507339 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17965-468241/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2
	I0116 03:43:28.346393  507339 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17965-468241/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2
	I0116 03:43:28.346434  507339 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0116 03:43:28.346518  507339 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0116 03:43:28.346547  507339 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17965-468241/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0116 03:43:28.346598  507339 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.10-0 (exists)
	I0116 03:43:28.346618  507339 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.10-0
	I0116 03:43:28.346631  507339 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0116 03:43:28.346650  507339 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
	I0116 03:43:28.425129  507339 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17965-468241/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0116 03:43:28.425217  507339 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2 (exists)
	I0116 03:43:28.425319  507339 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0116 03:43:28.425317  507339 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2 (exists)
	I0116 03:43:28.425377  507339 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17965-468241/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2
	I0116 03:43:28.425391  507339 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2 (exists)
	I0116 03:43:28.425441  507339 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0116 03:43:29.805277  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:29.805642  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | unable to find current IP address of domain old-k8s-version-696770 in network mk-old-k8s-version-696770
	I0116 03:43:29.805677  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | I0116 03:43:29.805586  508427 retry.go:31] will retry after 734.396993ms: waiting for machine to come up
	I0116 03:43:30.541337  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:30.541794  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | unable to find current IP address of domain old-k8s-version-696770 in network mk-old-k8s-version-696770
	I0116 03:43:30.541828  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | I0116 03:43:30.541741  508427 retry.go:31] will retry after 1.035836118s: waiting for machine to come up
	I0116 03:43:31.579576  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:31.580093  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | unable to find current IP address of domain old-k8s-version-696770 in network mk-old-k8s-version-696770
	I0116 03:43:31.580118  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | I0116 03:43:31.580070  508427 retry.go:31] will retry after 1.723172168s: waiting for machine to come up
	I0116 03:43:33.305247  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:33.305726  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | unable to find current IP address of domain old-k8s-version-696770 in network mk-old-k8s-version-696770
	I0116 03:43:33.305759  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | I0116 03:43:33.305669  508427 retry.go:31] will retry after 1.465747661s: waiting for machine to come up
	I0116 03:43:32.396858  507339 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1: (4.050189724s)
	I0116 03:43:32.396913  507339 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0116 03:43:32.396956  507339 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (3.971489155s)
	I0116 03:43:32.397006  507339 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2 (exists)
	I0116 03:43:32.397028  507339 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (3.971686012s)
	I0116 03:43:32.397043  507339 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0116 03:43:32.397050  507339 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0: (4.050383438s)
	I0116 03:43:32.397063  507339 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17965-468241/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 from cache
	I0116 03:43:32.397093  507339 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0116 03:43:32.397172  507339 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0116 03:43:35.381615  507339 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (2.98440652s)
	I0116 03:43:35.381660  507339 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17965-468241/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 from cache
	I0116 03:43:35.381699  507339 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0116 03:43:35.381759  507339 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0116 03:43:34.773552  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:34.774149  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | unable to find current IP address of domain old-k8s-version-696770 in network mk-old-k8s-version-696770
	I0116 03:43:34.774182  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | I0116 03:43:34.774084  508427 retry.go:31] will retry after 1.94747868s: waiting for machine to come up
	I0116 03:43:36.722855  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:36.723416  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | unable to find current IP address of domain old-k8s-version-696770 in network mk-old-k8s-version-696770
	I0116 03:43:36.723448  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | I0116 03:43:36.723365  508427 retry.go:31] will retry after 2.550966562s: waiting for machine to come up
	I0116 03:43:39.276082  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:39.276671  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | unable to find current IP address of domain old-k8s-version-696770 in network mk-old-k8s-version-696770
	I0116 03:43:39.276710  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | I0116 03:43:39.276608  508427 retry.go:31] will retry after 3.317854993s: waiting for machine to come up
	I0116 03:43:38.162725  507339 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (2.780935577s)
	I0116 03:43:38.162760  507339 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17965-468241/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 from cache
	I0116 03:43:38.162792  507339 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0116 03:43:38.162843  507339 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0116 03:43:39.527575  507339 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (1.36469752s)
	I0116 03:43:39.527612  507339 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17965-468241/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 from cache
	I0116 03:43:39.527639  507339 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0116 03:43:39.527696  507339 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0116 03:43:42.595994  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:42.596424  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | unable to find current IP address of domain old-k8s-version-696770 in network mk-old-k8s-version-696770
	I0116 03:43:42.596458  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | I0116 03:43:42.596364  508427 retry.go:31] will retry after 4.913808783s: waiting for machine to come up
	I0116 03:43:41.690968  507339 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.16323953s)
	I0116 03:43:41.691007  507339 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17965-468241/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0116 03:43:41.691045  507339 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0116 03:43:41.691100  507339 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0116 03:43:43.849988  507339 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (2.158855886s)
	I0116 03:43:43.850023  507339 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17965-468241/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 from cache
	I0116 03:43:43.850052  507339 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0116 03:43:43.850107  507339 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0116 03:43:44.597660  507339 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17965-468241/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0116 03:43:44.597710  507339 cache_images.go:123] Successfully loaded all cached images
	I0116 03:43:44.597715  507339 cache_images.go:92] LoadImages completed in 16.866481277s
	I0116 03:43:44.597788  507339 ssh_runner.go:195] Run: crio config
	I0116 03:43:44.658055  507339 cni.go:84] Creating CNI manager for ""
	I0116 03:43:44.658081  507339 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 03:43:44.658104  507339 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0116 03:43:44.658124  507339 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.103 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-666547 NodeName:no-preload-666547 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.103"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.103 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0116 03:43:44.658290  507339 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.103
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-666547"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.103
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.103"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0116 03:43:44.658371  507339 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-666547 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.103
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-666547 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0116 03:43:44.658431  507339 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0116 03:43:44.668859  507339 binaries.go:44] Found k8s binaries, skipping transfer
	I0116 03:43:44.668934  507339 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0116 03:43:44.678543  507339 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (382 bytes)
	I0116 03:43:44.694998  507339 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0116 03:43:44.711256  507339 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2109 bytes)
	I0116 03:43:44.728203  507339 ssh_runner.go:195] Run: grep 192.168.39.103	control-plane.minikube.internal$ /etc/hosts
	I0116 03:43:44.732219  507339 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.103	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 03:43:44.744687  507339 certs.go:56] Setting up /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/no-preload-666547 for IP: 192.168.39.103
	I0116 03:43:44.744730  507339 certs.go:190] acquiring lock for shared ca certs: {Name:mkab5aeea3e0ed69f3592dc8f418e07c6075b130 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:43:44.744957  507339 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17965-468241/.minikube/ca.key
	I0116 03:43:44.745014  507339 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17965-468241/.minikube/proxy-client-ca.key
	I0116 03:43:44.745133  507339 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/no-preload-666547/client.key
	I0116 03:43:44.745226  507339 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/no-preload-666547/apiserver.key.f0189397
	I0116 03:43:44.745293  507339 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/no-preload-666547/proxy-client.key
	I0116 03:43:44.745431  507339 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/475478.pem (1338 bytes)
	W0116 03:43:44.745471  507339 certs.go:433] ignoring /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/475478_empty.pem, impossibly tiny 0 bytes
	I0116 03:43:44.745488  507339 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca-key.pem (1679 bytes)
	I0116 03:43:44.745541  507339 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem (1078 bytes)
	I0116 03:43:44.745582  507339 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/cert.pem (1123 bytes)
	I0116 03:43:44.745620  507339 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/key.pem (1679 bytes)
	I0116 03:43:44.745687  507339 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17965-468241/.minikube/files/etc/ssl/certs/4754782.pem (1708 bytes)
	I0116 03:43:44.746558  507339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/no-preload-666547/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0116 03:43:44.770889  507339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/no-preload-666547/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0116 03:43:44.795150  507339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/no-preload-666547/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0116 03:43:44.818047  507339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/no-preload-666547/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0116 03:43:44.842003  507339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0116 03:43:44.866125  507339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0116 03:43:44.890235  507339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0116 03:43:44.913732  507339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0116 03:43:44.937249  507339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/certs/475478.pem --> /usr/share/ca-certificates/475478.pem (1338 bytes)
	I0116 03:43:44.961628  507339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/files/etc/ssl/certs/4754782.pem --> /usr/share/ca-certificates/4754782.pem (1708 bytes)
	I0116 03:43:44.986672  507339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0116 03:43:45.010735  507339 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0116 03:43:45.028537  507339 ssh_runner.go:195] Run: openssl version
	I0116 03:43:45.034910  507339 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/475478.pem && ln -fs /usr/share/ca-certificates/475478.pem /etc/ssl/certs/475478.pem"
	I0116 03:43:45.046034  507339 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/475478.pem
	I0116 03:43:45.050965  507339 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 16 02:44 /usr/share/ca-certificates/475478.pem
	I0116 03:43:45.051059  507339 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/475478.pem
	I0116 03:43:45.057465  507339 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/475478.pem /etc/ssl/certs/51391683.0"
	I0116 03:43:45.068400  507339 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4754782.pem && ln -fs /usr/share/ca-certificates/4754782.pem /etc/ssl/certs/4754782.pem"
	I0116 03:43:45.079619  507339 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4754782.pem
	I0116 03:43:45.084545  507339 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 16 02:44 /usr/share/ca-certificates/4754782.pem
	I0116 03:43:45.084622  507339 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4754782.pem
	I0116 03:43:45.090638  507339 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4754782.pem /etc/ssl/certs/3ec20f2e.0"
	I0116 03:43:45.101658  507339 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0116 03:43:45.113091  507339 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0116 03:43:45.118085  507339 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 16 02:35 /usr/share/ca-certificates/minikubeCA.pem
	I0116 03:43:45.118153  507339 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0116 03:43:45.124100  507339 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0116 03:43:45.135338  507339 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0116 03:43:45.140230  507339 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0116 03:43:45.146566  507339 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0116 03:43:45.152839  507339 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0116 03:43:45.158917  507339 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0116 03:43:45.164984  507339 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0116 03:43:45.171049  507339 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0116 03:43:45.177547  507339 kubeadm.go:404] StartCluster: {Name:no-preload-666547 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-666547 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.103 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 03:43:45.177657  507339 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0116 03:43:45.177719  507339 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0116 03:43:45.221757  507339 cri.go:89] found id: ""
	I0116 03:43:45.221848  507339 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0116 03:43:45.233811  507339 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0116 03:43:45.233838  507339 kubeadm.go:636] restartCluster start
	I0116 03:43:45.233906  507339 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0116 03:43:45.244810  507339 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:43:45.245999  507339 kubeconfig.go:92] found "no-preload-666547" server: "https://192.168.39.103:8443"
	I0116 03:43:45.248711  507339 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0116 03:43:45.260979  507339 api_server.go:166] Checking apiserver status ...
	I0116 03:43:45.261066  507339 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:43:45.276682  507339 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:43:48.709239  507889 start.go:369] acquired machines lock for "default-k8s-diff-port-434445" in 3m31.985691976s
	I0116 03:43:48.709311  507889 start.go:96] Skipping create...Using existing machine configuration
	I0116 03:43:48.709333  507889 fix.go:54] fixHost starting: 
	I0116 03:43:48.709815  507889 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:43:48.709867  507889 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:43:48.726637  507889 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45373
	I0116 03:43:48.727122  507889 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:43:48.727702  507889 main.go:141] libmachine: Using API Version  1
	I0116 03:43:48.727737  507889 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:43:48.728104  507889 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:43:48.728310  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .DriverName
	I0116 03:43:48.728475  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetState
	I0116 03:43:48.730338  507889 fix.go:102] recreateIfNeeded on default-k8s-diff-port-434445: state=Stopped err=<nil>
	I0116 03:43:48.730361  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .DriverName
	W0116 03:43:48.730545  507889 fix.go:128] unexpected machine state, will restart: <nil>
	I0116 03:43:48.733848  507889 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-434445" ...
	I0116 03:43:47.512288  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:47.512755  507510 main.go:141] libmachine: (old-k8s-version-696770) Found IP for machine: 192.168.61.167
	I0116 03:43:47.512793  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has current primary IP address 192.168.61.167 and MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:47.512804  507510 main.go:141] libmachine: (old-k8s-version-696770) Reserving static IP address...
	I0116 03:43:47.513157  507510 main.go:141] libmachine: (old-k8s-version-696770) Reserved static IP address: 192.168.61.167
	I0116 03:43:47.513194  507510 main.go:141] libmachine: (old-k8s-version-696770) Waiting for SSH to be available...
	I0116 03:43:47.513218  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | found host DHCP lease matching {name: "old-k8s-version-696770", mac: "52:54:00:37:20:1a", ip: "192.168.61.167"} in network mk-old-k8s-version-696770: {Iface:virbr3 ExpiryTime:2024-01-16 04:43:38 +0000 UTC Type:0 Mac:52:54:00:37:20:1a Iaid: IPaddr:192.168.61.167 Prefix:24 Hostname:old-k8s-version-696770 Clientid:01:52:54:00:37:20:1a}
	I0116 03:43:47.513242  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | skip adding static IP to network mk-old-k8s-version-696770 - found existing host DHCP lease matching {name: "old-k8s-version-696770", mac: "52:54:00:37:20:1a", ip: "192.168.61.167"}
	I0116 03:43:47.513259  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | Getting to WaitForSSH function...
	I0116 03:43:47.515438  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:47.515887  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:20:1a", ip: ""} in network mk-old-k8s-version-696770: {Iface:virbr3 ExpiryTime:2024-01-16 04:43:38 +0000 UTC Type:0 Mac:52:54:00:37:20:1a Iaid: IPaddr:192.168.61.167 Prefix:24 Hostname:old-k8s-version-696770 Clientid:01:52:54:00:37:20:1a}
	I0116 03:43:47.515928  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined IP address 192.168.61.167 and MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:47.516089  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | Using SSH client type: external
	I0116 03:43:47.516124  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | Using SSH private key: /home/jenkins/minikube-integration/17965-468241/.minikube/machines/old-k8s-version-696770/id_rsa (-rw-------)
	I0116 03:43:47.516160  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.167 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17965-468241/.minikube/machines/old-k8s-version-696770/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0116 03:43:47.516182  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | About to run SSH command:
	I0116 03:43:47.516203  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | exit 0
	I0116 03:43:47.608193  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | SSH cmd err, output: <nil>: 
	I0116 03:43:47.608599  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetConfigRaw
	I0116 03:43:47.609195  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetIP
	I0116 03:43:47.611633  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:47.612018  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:20:1a", ip: ""} in network mk-old-k8s-version-696770: {Iface:virbr3 ExpiryTime:2024-01-16 04:43:38 +0000 UTC Type:0 Mac:52:54:00:37:20:1a Iaid: IPaddr:192.168.61.167 Prefix:24 Hostname:old-k8s-version-696770 Clientid:01:52:54:00:37:20:1a}
	I0116 03:43:47.612068  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined IP address 192.168.61.167 and MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:47.612355  507510 profile.go:148] Saving config to /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/old-k8s-version-696770/config.json ...
	I0116 03:43:47.612601  507510 machine.go:88] provisioning docker machine ...
	I0116 03:43:47.612628  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .DriverName
	I0116 03:43:47.612872  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetMachineName
	I0116 03:43:47.613047  507510 buildroot.go:166] provisioning hostname "old-k8s-version-696770"
	I0116 03:43:47.613068  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetMachineName
	I0116 03:43:47.613195  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHHostname
	I0116 03:43:47.615457  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:47.615901  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:20:1a", ip: ""} in network mk-old-k8s-version-696770: {Iface:virbr3 ExpiryTime:2024-01-16 04:43:38 +0000 UTC Type:0 Mac:52:54:00:37:20:1a Iaid: IPaddr:192.168.61.167 Prefix:24 Hostname:old-k8s-version-696770 Clientid:01:52:54:00:37:20:1a}
	I0116 03:43:47.615928  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined IP address 192.168.61.167 and MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:47.616111  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHPort
	I0116 03:43:47.616292  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHKeyPath
	I0116 03:43:47.616489  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHKeyPath
	I0116 03:43:47.616687  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHUsername
	I0116 03:43:47.616889  507510 main.go:141] libmachine: Using SSH client type: native
	I0116 03:43:47.617280  507510 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.61.167 22 <nil> <nil>}
	I0116 03:43:47.617297  507510 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-696770 && echo "old-k8s-version-696770" | sudo tee /etc/hostname
	I0116 03:43:47.745448  507510 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-696770
	
	I0116 03:43:47.745482  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHHostname
	I0116 03:43:47.748812  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:47.749135  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:20:1a", ip: ""} in network mk-old-k8s-version-696770: {Iface:virbr3 ExpiryTime:2024-01-16 04:43:38 +0000 UTC Type:0 Mac:52:54:00:37:20:1a Iaid: IPaddr:192.168.61.167 Prefix:24 Hostname:old-k8s-version-696770 Clientid:01:52:54:00:37:20:1a}
	I0116 03:43:47.749171  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined IP address 192.168.61.167 and MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:47.749296  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHPort
	I0116 03:43:47.749525  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHKeyPath
	I0116 03:43:47.749715  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHKeyPath
	I0116 03:43:47.749872  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHUsername
	I0116 03:43:47.750019  507510 main.go:141] libmachine: Using SSH client type: native
	I0116 03:43:47.750339  507510 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.61.167 22 <nil> <nil>}
	I0116 03:43:47.750357  507510 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-696770' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-696770/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-696770' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0116 03:43:47.876917  507510 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0116 03:43:47.876957  507510 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17965-468241/.minikube CaCertPath:/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17965-468241/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17965-468241/.minikube}
	I0116 03:43:47.877011  507510 buildroot.go:174] setting up certificates
	I0116 03:43:47.877026  507510 provision.go:83] configureAuth start
	I0116 03:43:47.877041  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetMachineName
	I0116 03:43:47.877378  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetIP
	I0116 03:43:47.880453  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:47.880836  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:20:1a", ip: ""} in network mk-old-k8s-version-696770: {Iface:virbr3 ExpiryTime:2024-01-16 04:43:38 +0000 UTC Type:0 Mac:52:54:00:37:20:1a Iaid: IPaddr:192.168.61.167 Prefix:24 Hostname:old-k8s-version-696770 Clientid:01:52:54:00:37:20:1a}
	I0116 03:43:47.880869  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined IP address 192.168.61.167 and MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:47.881010  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHHostname
	I0116 03:43:47.883053  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:47.883415  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:20:1a", ip: ""} in network mk-old-k8s-version-696770: {Iface:virbr3 ExpiryTime:2024-01-16 04:43:38 +0000 UTC Type:0 Mac:52:54:00:37:20:1a Iaid: IPaddr:192.168.61.167 Prefix:24 Hostname:old-k8s-version-696770 Clientid:01:52:54:00:37:20:1a}
	I0116 03:43:47.883448  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined IP address 192.168.61.167 and MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:47.883635  507510 provision.go:138] copyHostCerts
	I0116 03:43:47.883706  507510 exec_runner.go:144] found /home/jenkins/minikube-integration/17965-468241/.minikube/ca.pem, removing ...
	I0116 03:43:47.883717  507510 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17965-468241/.minikube/ca.pem
	I0116 03:43:47.883778  507510 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17965-468241/.minikube/ca.pem (1078 bytes)
	I0116 03:43:47.883864  507510 exec_runner.go:144] found /home/jenkins/minikube-integration/17965-468241/.minikube/cert.pem, removing ...
	I0116 03:43:47.883871  507510 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17965-468241/.minikube/cert.pem
	I0116 03:43:47.883893  507510 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17965-468241/.minikube/cert.pem (1123 bytes)
	I0116 03:43:47.883943  507510 exec_runner.go:144] found /home/jenkins/minikube-integration/17965-468241/.minikube/key.pem, removing ...
	I0116 03:43:47.883950  507510 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17965-468241/.minikube/key.pem
	I0116 03:43:47.883965  507510 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17965-468241/.minikube/key.pem (1679 bytes)
	I0116 03:43:47.884010  507510 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17965-468241/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-696770 san=[192.168.61.167 192.168.61.167 localhost 127.0.0.1 minikube old-k8s-version-696770]
	I0116 03:43:47.946258  507510 provision.go:172] copyRemoteCerts
	I0116 03:43:47.946327  507510 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0116 03:43:47.946354  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHHostname
	I0116 03:43:47.949417  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:47.949750  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:20:1a", ip: ""} in network mk-old-k8s-version-696770: {Iface:virbr3 ExpiryTime:2024-01-16 04:43:38 +0000 UTC Type:0 Mac:52:54:00:37:20:1a Iaid: IPaddr:192.168.61.167 Prefix:24 Hostname:old-k8s-version-696770 Clientid:01:52:54:00:37:20:1a}
	I0116 03:43:47.949784  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined IP address 192.168.61.167 and MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:47.949941  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHPort
	I0116 03:43:47.950180  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHKeyPath
	I0116 03:43:47.950333  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHUsername
	I0116 03:43:47.950478  507510 sshutil.go:53] new ssh client: &{IP:192.168.61.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/old-k8s-version-696770/id_rsa Username:docker}
	I0116 03:43:48.042564  507510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0116 03:43:48.066519  507510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0116 03:43:48.090127  507510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0116 03:43:48.113387  507510 provision.go:86] duration metric: configureAuth took 236.343393ms
	I0116 03:43:48.113428  507510 buildroot.go:189] setting minikube options for container-runtime
	I0116 03:43:48.113662  507510 config.go:182] Loaded profile config "old-k8s-version-696770": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0116 03:43:48.113758  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHHostname
	I0116 03:43:48.116735  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:48.117144  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:20:1a", ip: ""} in network mk-old-k8s-version-696770: {Iface:virbr3 ExpiryTime:2024-01-16 04:43:38 +0000 UTC Type:0 Mac:52:54:00:37:20:1a Iaid: IPaddr:192.168.61.167 Prefix:24 Hostname:old-k8s-version-696770 Clientid:01:52:54:00:37:20:1a}
	I0116 03:43:48.117187  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined IP address 192.168.61.167 and MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:48.117328  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHPort
	I0116 03:43:48.117529  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHKeyPath
	I0116 03:43:48.117725  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHKeyPath
	I0116 03:43:48.117892  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHUsername
	I0116 03:43:48.118118  507510 main.go:141] libmachine: Using SSH client type: native
	I0116 03:43:48.118427  507510 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.61.167 22 <nil> <nil>}
	I0116 03:43:48.118450  507510 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0116 03:43:48.458094  507510 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0116 03:43:48.458129  507510 machine.go:91] provisioned docker machine in 845.51167ms
	I0116 03:43:48.458141  507510 start.go:300] post-start starting for "old-k8s-version-696770" (driver="kvm2")
	I0116 03:43:48.458153  507510 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0116 03:43:48.458172  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .DriverName
	I0116 03:43:48.458616  507510 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0116 03:43:48.458650  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHHostname
	I0116 03:43:48.461476  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:48.461858  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:20:1a", ip: ""} in network mk-old-k8s-version-696770: {Iface:virbr3 ExpiryTime:2024-01-16 04:43:38 +0000 UTC Type:0 Mac:52:54:00:37:20:1a Iaid: IPaddr:192.168.61.167 Prefix:24 Hostname:old-k8s-version-696770 Clientid:01:52:54:00:37:20:1a}
	I0116 03:43:48.461908  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined IP address 192.168.61.167 and MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:48.462029  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHPort
	I0116 03:43:48.462272  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHKeyPath
	I0116 03:43:48.462460  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHUsername
	I0116 03:43:48.462643  507510 sshutil.go:53] new ssh client: &{IP:192.168.61.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/old-k8s-version-696770/id_rsa Username:docker}
	I0116 03:43:48.550436  507510 ssh_runner.go:195] Run: cat /etc/os-release
	I0116 03:43:48.555225  507510 info.go:137] Remote host: Buildroot 2021.02.12
	I0116 03:43:48.555261  507510 filesync.go:126] Scanning /home/jenkins/minikube-integration/17965-468241/.minikube/addons for local assets ...
	I0116 03:43:48.555349  507510 filesync.go:126] Scanning /home/jenkins/minikube-integration/17965-468241/.minikube/files for local assets ...
	I0116 03:43:48.555434  507510 filesync.go:149] local asset: /home/jenkins/minikube-integration/17965-468241/.minikube/files/etc/ssl/certs/4754782.pem -> 4754782.pem in /etc/ssl/certs
	I0116 03:43:48.555560  507510 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0116 03:43:48.565598  507510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/files/etc/ssl/certs/4754782.pem --> /etc/ssl/certs/4754782.pem (1708 bytes)
	I0116 03:43:48.588611  507510 start.go:303] post-start completed in 130.45305ms
	I0116 03:43:48.588642  507510 fix.go:56] fixHost completed within 22.411021213s
	I0116 03:43:48.588675  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHHostname
	I0116 03:43:48.591220  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:48.591582  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:20:1a", ip: ""} in network mk-old-k8s-version-696770: {Iface:virbr3 ExpiryTime:2024-01-16 04:43:38 +0000 UTC Type:0 Mac:52:54:00:37:20:1a Iaid: IPaddr:192.168.61.167 Prefix:24 Hostname:old-k8s-version-696770 Clientid:01:52:54:00:37:20:1a}
	I0116 03:43:48.591618  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined IP address 192.168.61.167 and MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:48.591779  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHPort
	I0116 03:43:48.592014  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHKeyPath
	I0116 03:43:48.592216  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHKeyPath
	I0116 03:43:48.592412  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHUsername
	I0116 03:43:48.592567  507510 main.go:141] libmachine: Using SSH client type: native
	I0116 03:43:48.592933  507510 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.61.167 22 <nil> <nil>}
	I0116 03:43:48.592950  507510 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0116 03:43:48.709079  507510 main.go:141] libmachine: SSH cmd err, output: <nil>: 1705376628.651647278
	
	I0116 03:43:48.709103  507510 fix.go:206] guest clock: 1705376628.651647278
	I0116 03:43:48.709111  507510 fix.go:219] Guest: 2024-01-16 03:43:48.651647278 +0000 UTC Remote: 2024-01-16 03:43:48.588648172 +0000 UTC m=+299.078902394 (delta=62.999106ms)
	I0116 03:43:48.709134  507510 fix.go:190] guest clock delta is within tolerance: 62.999106ms
	I0116 03:43:48.709140  507510 start.go:83] releasing machines lock for "old-k8s-version-696770", held for 22.531556099s
	I0116 03:43:48.709169  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .DriverName
	I0116 03:43:48.709519  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetIP
	I0116 03:43:48.712438  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:48.712770  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:20:1a", ip: ""} in network mk-old-k8s-version-696770: {Iface:virbr3 ExpiryTime:2024-01-16 04:43:38 +0000 UTC Type:0 Mac:52:54:00:37:20:1a Iaid: IPaddr:192.168.61.167 Prefix:24 Hostname:old-k8s-version-696770 Clientid:01:52:54:00:37:20:1a}
	I0116 03:43:48.712825  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined IP address 192.168.61.167 and MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:48.712921  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .DriverName
	I0116 03:43:48.713501  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .DriverName
	I0116 03:43:48.713677  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .DriverName
	I0116 03:43:48.713768  507510 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0116 03:43:48.713816  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHHostname
	I0116 03:43:48.713920  507510 ssh_runner.go:195] Run: cat /version.json
	I0116 03:43:48.713951  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHHostname
	I0116 03:43:48.716415  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:48.716697  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:48.716820  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:20:1a", ip: ""} in network mk-old-k8s-version-696770: {Iface:virbr3 ExpiryTime:2024-01-16 04:43:38 +0000 UTC Type:0 Mac:52:54:00:37:20:1a Iaid: IPaddr:192.168.61.167 Prefix:24 Hostname:old-k8s-version-696770 Clientid:01:52:54:00:37:20:1a}
	I0116 03:43:48.716846  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined IP address 192.168.61.167 and MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:48.716995  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHPort
	I0116 03:43:48.717093  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:20:1a", ip: ""} in network mk-old-k8s-version-696770: {Iface:virbr3 ExpiryTime:2024-01-16 04:43:38 +0000 UTC Type:0 Mac:52:54:00:37:20:1a Iaid: IPaddr:192.168.61.167 Prefix:24 Hostname:old-k8s-version-696770 Clientid:01:52:54:00:37:20:1a}
	I0116 03:43:48.717123  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined IP address 192.168.61.167 and MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:48.717394  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHPort
	I0116 03:43:48.717402  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHKeyPath
	I0116 03:43:48.717638  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHKeyPath
	I0116 03:43:48.717650  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHUsername
	I0116 03:43:48.717791  507510 sshutil.go:53] new ssh client: &{IP:192.168.61.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/old-k8s-version-696770/id_rsa Username:docker}
	I0116 03:43:48.717824  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHUsername
	I0116 03:43:48.717956  507510 sshutil.go:53] new ssh client: &{IP:192.168.61.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/old-k8s-version-696770/id_rsa Username:docker}
	I0116 03:43:48.838506  507510 ssh_runner.go:195] Run: systemctl --version
	I0116 03:43:48.845152  507510 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0116 03:43:49.001791  507510 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0116 03:43:49.008474  507510 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0116 03:43:49.008558  507510 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0116 03:43:49.024030  507510 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0116 03:43:49.024087  507510 start.go:475] detecting cgroup driver to use...
	I0116 03:43:49.024164  507510 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0116 03:43:49.038853  507510 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0116 03:43:49.056228  507510 docker.go:217] disabling cri-docker service (if available) ...
	I0116 03:43:49.056308  507510 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0116 03:43:49.071266  507510 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0116 03:43:49.085793  507510 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0116 03:43:49.211294  507510 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0116 03:43:49.338893  507510 docker.go:233] disabling docker service ...
	I0116 03:43:49.338971  507510 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0116 03:43:49.354423  507510 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0116 03:43:49.367355  507510 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0116 03:43:49.483277  507510 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0116 03:43:49.593977  507510 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0116 03:43:49.607374  507510 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0116 03:43:49.626781  507510 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I0116 03:43:49.626846  507510 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 03:43:49.637809  507510 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0116 03:43:49.637892  507510 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 03:43:49.648162  507510 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 03:43:49.658305  507510 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 03:43:49.669557  507510 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0116 03:43:49.680190  507510 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0116 03:43:49.689125  507510 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0116 03:43:49.689199  507510 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0116 03:43:49.703247  507510 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0116 03:43:49.713826  507510 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0116 03:43:49.829677  507510 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0116 03:43:50.009393  507510 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0116 03:43:50.009489  507510 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0116 03:43:50.016443  507510 start.go:543] Will wait 60s for crictl version
	I0116 03:43:50.016521  507510 ssh_runner.go:195] Run: which crictl
	I0116 03:43:50.020560  507510 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0116 03:43:50.056652  507510 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0116 03:43:50.056733  507510 ssh_runner.go:195] Run: crio --version
	I0116 03:43:50.104202  507510 ssh_runner.go:195] Run: crio --version
	I0116 03:43:50.150215  507510 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
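The entries above show the CRI-O drop-in at /etc/crio/crio.conf.d/02-crio.conf being rewritten over SSH (pause image set to registry.k8s.io/pause:3.1, cgroup_manager set to cgroupfs, conmon_cgroup set to pod) before crio is restarted. A minimal Go sketch for re-checking those two settings on the guest by hand is below; the file path and expected values are copied from the log lines above, and the program is illustrative only, not part of minikube or of this test.

package main

// Illustrative check (not part of minikube): confirm that the CRI-O drop-in
// rewritten in the log above ended up with the pause image and cgroup manager
// the sed commands were supposed to set. Run on the guest VM.
import (
	"bufio"
	"fmt"
	"log"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("/etc/crio/crio.conf.d/02-crio.conf")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	// Expected key/value pairs, taken from the log above.
	want := map[string]string{
		"pause_image":    `"registry.k8s.io/pause:3.1"`,
		"cgroup_manager": `"cgroupfs"`,
	}
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		for key, val := range want {
			if strings.HasPrefix(line, key) && strings.Contains(line, val) {
				fmt.Println("ok:", line)
				delete(want, key)
			}
		}
	}
	if err := sc.Err(); err != nil {
		log.Fatal(err)
	}
	for key := range want {
		fmt.Println("not found with expected value:", key)
	}
}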
	I0116 03:43:45.761989  507339 api_server.go:166] Checking apiserver status ...
	I0116 03:43:45.762077  507339 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:43:45.776377  507339 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:43:46.262107  507339 api_server.go:166] Checking apiserver status ...
	I0116 03:43:46.262205  507339 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:43:46.274748  507339 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:43:46.761344  507339 api_server.go:166] Checking apiserver status ...
	I0116 03:43:46.761434  507339 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:43:46.773509  507339 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:43:47.261093  507339 api_server.go:166] Checking apiserver status ...
	I0116 03:43:47.261222  507339 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:43:47.272584  507339 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:43:47.761119  507339 api_server.go:166] Checking apiserver status ...
	I0116 03:43:47.761204  507339 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:43:47.773674  507339 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:43:48.261288  507339 api_server.go:166] Checking apiserver status ...
	I0116 03:43:48.261448  507339 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:43:48.273461  507339 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:43:48.762071  507339 api_server.go:166] Checking apiserver status ...
	I0116 03:43:48.762205  507339 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:43:48.778093  507339 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:43:49.261032  507339 api_server.go:166] Checking apiserver status ...
	I0116 03:43:49.261139  507339 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:43:49.273090  507339 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:43:49.761233  507339 api_server.go:166] Checking apiserver status ...
	I0116 03:43:49.761348  507339 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:43:49.773529  507339 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:43:50.261720  507339 api_server.go:166] Checking apiserver status ...
	I0116 03:43:50.261822  507339 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:43:50.277403  507339 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
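The run of "Checking apiserver status ..." entries above is a roughly 500ms poll of sudo pgrep over SSH that keeps returning exit status 1 because no kube-apiserver process is running yet. A standalone Go sketch of that style of poll follows; the interval and pgrep pattern are taken from the log, while the function name and the two-minute timeout are illustrative, and this is not minikube's own implementation.

package main

// Illustrative poll loop modelled on the "Checking apiserver status" entries
// above: retry pgrep every 500ms until kube-apiserver shows up or a timeout
// (chosen here arbitrarily) expires.
import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func waitForAPIServerPID(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err == nil && len(out) > 0 {
			return strings.TrimSpace(string(out)), nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return "", fmt.Errorf("kube-apiserver did not appear within %s", timeout)
}

func main() {
	pid, err := waitForAPIServerPID(2 * time.Minute)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("kube-apiserver pid:", pid)
}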
	I0116 03:43:48.735627  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .Start
	I0116 03:43:48.735865  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Ensuring networks are active...
	I0116 03:43:48.736708  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Ensuring network default is active
	I0116 03:43:48.737105  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Ensuring network mk-default-k8s-diff-port-434445 is active
	I0116 03:43:48.737445  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Getting domain xml...
	I0116 03:43:48.738086  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Creating domain...
	I0116 03:43:49.085479  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Waiting to get IP...
	I0116 03:43:49.086513  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:43:49.086907  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | unable to find current IP address of domain default-k8s-diff-port-434445 in network mk-default-k8s-diff-port-434445
	I0116 03:43:49.086993  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | I0116 03:43:49.086879  508579 retry.go:31] will retry after 251.682416ms: waiting for machine to come up
	I0116 03:43:49.340560  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:43:49.341196  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | unable to find current IP address of domain default-k8s-diff-port-434445 in network mk-default-k8s-diff-port-434445
	I0116 03:43:49.341235  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | I0116 03:43:49.341140  508579 retry.go:31] will retry after 288.322607ms: waiting for machine to come up
	I0116 03:43:49.630920  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:43:49.631449  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | unable to find current IP address of domain default-k8s-diff-port-434445 in network mk-default-k8s-diff-port-434445
	I0116 03:43:49.631478  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | I0116 03:43:49.631404  508579 retry.go:31] will retry after 305.730946ms: waiting for machine to come up
	I0116 03:43:49.938846  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:43:49.939357  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | unable to find current IP address of domain default-k8s-diff-port-434445 in network mk-default-k8s-diff-port-434445
	I0116 03:43:49.939381  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | I0116 03:43:49.939307  508579 retry.go:31] will retry after 431.952943ms: waiting for machine to come up
	I0116 03:43:50.372921  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:43:50.373426  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | unable to find current IP address of domain default-k8s-diff-port-434445 in network mk-default-k8s-diff-port-434445
	I0116 03:43:50.373453  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | I0116 03:43:50.373368  508579 retry.go:31] will retry after 557.336026ms: waiting for machine to come up
	I0116 03:43:50.932300  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:43:50.932902  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | unable to find current IP address of domain default-k8s-diff-port-434445 in network mk-default-k8s-diff-port-434445
	I0116 03:43:50.932933  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | I0116 03:43:50.932837  508579 retry.go:31] will retry after 652.034162ms: waiting for machine to come up
	I0116 03:43:51.586765  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:43:51.587332  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | unable to find current IP address of domain default-k8s-diff-port-434445 in network mk-default-k8s-diff-port-434445
	I0116 03:43:51.587365  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | I0116 03:43:51.587290  508579 retry.go:31] will retry after 1.078418867s: waiting for machine to come up
	I0116 03:43:50.151763  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetIP
	I0116 03:43:50.154861  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:50.155283  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:20:1a", ip: ""} in network mk-old-k8s-version-696770: {Iface:virbr3 ExpiryTime:2024-01-16 04:43:38 +0000 UTC Type:0 Mac:52:54:00:37:20:1a Iaid: IPaddr:192.168.61.167 Prefix:24 Hostname:old-k8s-version-696770 Clientid:01:52:54:00:37:20:1a}
	I0116 03:43:50.155331  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined IP address 192.168.61.167 and MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:50.155536  507510 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0116 03:43:50.160159  507510 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 03:43:50.173354  507510 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0116 03:43:50.173416  507510 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 03:43:50.227220  507510 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0116 03:43:50.227308  507510 ssh_runner.go:195] Run: which lz4
	I0116 03:43:50.231565  507510 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0116 03:43:50.236133  507510 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0116 03:43:50.236169  507510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I0116 03:43:52.243584  507510 crio.go:444] Took 2.012049 seconds to copy over tarball
	I0116 03:43:52.243686  507510 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0116 03:43:50.761232  507339 api_server.go:166] Checking apiserver status ...
	I0116 03:43:50.761323  507339 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:43:50.777877  507339 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:43:51.261357  507339 api_server.go:166] Checking apiserver status ...
	I0116 03:43:51.261444  507339 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:43:51.280624  507339 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:43:51.761117  507339 api_server.go:166] Checking apiserver status ...
	I0116 03:43:51.761225  507339 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:43:51.775076  507339 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:43:52.261857  507339 api_server.go:166] Checking apiserver status ...
	I0116 03:43:52.261948  507339 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:43:52.279844  507339 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:43:52.761400  507339 api_server.go:166] Checking apiserver status ...
	I0116 03:43:52.761493  507339 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:43:52.773869  507339 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:43:53.261155  507339 api_server.go:166] Checking apiserver status ...
	I0116 03:43:53.261263  507339 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:43:53.273774  507339 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:43:53.761370  507339 api_server.go:166] Checking apiserver status ...
	I0116 03:43:53.761500  507339 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:43:53.773900  507339 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:43:54.262012  507339 api_server.go:166] Checking apiserver status ...
	I0116 03:43:54.262134  507339 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:43:54.277928  507339 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:43:54.761492  507339 api_server.go:166] Checking apiserver status ...
	I0116 03:43:54.761642  507339 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:43:54.774531  507339 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:43:55.261302  507339 api_server.go:166] Checking apiserver status ...
	I0116 03:43:55.261395  507339 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:43:55.274178  507339 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:43:55.274226  507339 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0116 03:43:55.274272  507339 kubeadm.go:1135] stopping kube-system containers ...
	I0116 03:43:55.274293  507339 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0116 03:43:55.274360  507339 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0116 03:43:55.321847  507339 cri.go:89] found id: ""
	I0116 03:43:55.321943  507339 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0116 03:43:55.339190  507339 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0116 03:43:55.348548  507339 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0116 03:43:55.348637  507339 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0116 03:43:55.358316  507339 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0116 03:43:55.358345  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:43:55.492932  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:43:52.667882  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:43:52.668380  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | unable to find current IP address of domain default-k8s-diff-port-434445 in network mk-default-k8s-diff-port-434445
	I0116 03:43:52.668415  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | I0116 03:43:52.668311  508579 retry.go:31] will retry after 1.052441827s: waiting for machine to come up
	I0116 03:43:53.722859  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:43:53.723473  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | unable to find current IP address of domain default-k8s-diff-port-434445 in network mk-default-k8s-diff-port-434445
	I0116 03:43:53.723503  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | I0116 03:43:53.723429  508579 retry.go:31] will retry after 1.233090848s: waiting for machine to come up
	I0116 03:43:54.958519  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:43:54.958990  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | unable to find current IP address of domain default-k8s-diff-port-434445 in network mk-default-k8s-diff-port-434445
	I0116 03:43:54.959014  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | I0116 03:43:54.958934  508579 retry.go:31] will retry after 2.038449182s: waiting for machine to come up
	I0116 03:43:55.109598  507510 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.865872133s)
	I0116 03:43:55.109637  507510 crio.go:451] Took 2.866019 seconds to extract the tarball
	I0116 03:43:55.109652  507510 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0116 03:43:55.150763  507510 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 03:43:55.206497  507510 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0116 03:43:55.206525  507510 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0116 03:43:55.206597  507510 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 03:43:55.206619  507510 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0116 03:43:55.206660  507510 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0116 03:43:55.206682  507510 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0116 03:43:55.206601  507510 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0116 03:43:55.206622  507510 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0116 03:43:55.206790  507510 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0116 03:43:55.206801  507510 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0116 03:43:55.208228  507510 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 03:43:55.208246  507510 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0116 03:43:55.208245  507510 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0116 03:43:55.208247  507510 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0116 03:43:55.208291  507510 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0116 03:43:55.208295  507510 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0116 03:43:55.208291  507510 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0116 03:43:55.208610  507510 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0116 03:43:55.364082  507510 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0116 03:43:55.364096  507510 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0116 03:43:55.367820  507510 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0116 03:43:55.371639  507510 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0116 03:43:55.379423  507510 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0116 03:43:55.383569  507510 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0116 03:43:55.385854  507510 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0116 03:43:55.522241  507510 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 03:43:55.539971  507510 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0116 03:43:55.540031  507510 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0116 03:43:55.540113  507510 ssh_runner.go:195] Run: which crictl
	I0116 03:43:55.542332  507510 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0116 03:43:55.542389  507510 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0116 03:43:55.542441  507510 ssh_runner.go:195] Run: which crictl
	I0116 03:43:55.565552  507510 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0116 03:43:55.565679  507510 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0116 03:43:55.565761  507510 ssh_runner.go:195] Run: which crictl
	I0116 03:43:55.583839  507510 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0116 03:43:55.583890  507510 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0116 03:43:55.583942  507510 ssh_runner.go:195] Run: which crictl
	I0116 03:43:55.583847  507510 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0116 03:43:55.584023  507510 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0116 03:43:55.584073  507510 ssh_runner.go:195] Run: which crictl
	I0116 03:43:55.596487  507510 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0116 03:43:55.596555  507510 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I0116 03:43:55.596619  507510 ssh_runner.go:195] Run: which crictl
	I0116 03:43:55.605042  507510 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0116 03:43:55.605105  507510 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I0116 03:43:55.605162  507510 ssh_runner.go:195] Run: which crictl
	I0116 03:43:55.740186  507510 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0116 03:43:55.740225  507510 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I0116 03:43:55.740283  507510 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0116 03:43:55.740334  507510 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0116 03:43:55.740384  507510 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I0116 03:43:55.740441  507510 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I0116 03:43:55.740450  507510 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I0116 03:43:55.900542  507510 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17965-468241/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0116 03:43:55.906506  507510 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17965-468241/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0116 03:43:55.914158  507510 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17965-468241/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0116 03:43:55.914171  507510 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17965-468241/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0116 03:43:55.926953  507510 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17965-468241/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0116 03:43:55.927034  507510 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17965-468241/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0116 03:43:55.927137  507510 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17965-468241/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0116 03:43:55.927186  507510 cache_images.go:92] LoadImages completed in 720.646435ms
	W0116 03:43:55.927280  507510 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17965-468241/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0: no such file or directory
	I0116 03:43:55.927362  507510 ssh_runner.go:195] Run: crio config
	I0116 03:43:55.989408  507510 cni.go:84] Creating CNI manager for ""
	I0116 03:43:55.989440  507510 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 03:43:55.989468  507510 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0116 03:43:55.989495  507510 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.167 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-696770 NodeName:old-k8s-version-696770 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.167"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.167 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0116 03:43:55.989657  507510 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.167
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-696770"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.167
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.167"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-696770
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.61.167:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
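The kubeadm.yaml dumped above is rendered from the option struct logged at kubeadm.go:176 before being copied to /var/tmp/minikube/kubeadm.yaml.new. As a rough, hypothetical illustration of that templating step only (this is not minikube's actual template or field set; the values are simply the ones visible in the log), a Go text/template sketch might look like:

// kubeadm_template_sketch.go: hypothetical rendering of a kubeadm InitConfiguration
// from a small option struct, mirroring the shape of the config dump above.
package main

import (
	"os"
	"text/template"
)

const initCfg = `apiVersion: kubeadm.k8s.io/v1beta1
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
  taints: []
`

func main() {
	// Placeholder values copied from the "kubeadm options" line above.
	opts := struct {
		AdvertiseAddress, CRISocket, NodeName, NodeIP string
		APIServerPort                                 int
	}{"192.168.61.167", "/var/run/crio/crio.sock", "old-k8s-version-696770", "192.168.61.167", 8443}

	tmpl := template.Must(template.New("kubeadm").Parse(initCfg))
	if err := tmpl.Execute(os.Stdout, opts); err != nil {
		panic(err)
	}
}
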
	I0116 03:43:55.989757  507510 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-696770 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.167
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-696770 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0116 03:43:55.989819  507510 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0116 03:43:55.999676  507510 binaries.go:44] Found k8s binaries, skipping transfer
	I0116 03:43:55.999766  507510 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0116 03:43:56.009179  507510 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0116 03:43:56.028479  507510 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0116 03:43:56.045979  507510 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2181 bytes)
	I0116 03:43:56.067179  507510 ssh_runner.go:195] Run: grep 192.168.61.167	control-plane.minikube.internal$ /etc/hosts
	I0116 03:43:56.071532  507510 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.167	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 03:43:56.085960  507510 certs.go:56] Setting up /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/old-k8s-version-696770 for IP: 192.168.61.167
	I0116 03:43:56.086006  507510 certs.go:190] acquiring lock for shared ca certs: {Name:mkab5aeea3e0ed69f3592dc8f418e07c6075b130 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:43:56.086216  507510 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17965-468241/.minikube/ca.key
	I0116 03:43:56.086293  507510 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17965-468241/.minikube/proxy-client-ca.key
	I0116 03:43:56.086385  507510 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/old-k8s-version-696770/client.key
	I0116 03:43:56.086447  507510 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/old-k8s-version-696770/apiserver.key.1a2d2382
	I0116 03:43:56.086480  507510 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/old-k8s-version-696770/proxy-client.key
	I0116 03:43:56.086668  507510 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/475478.pem (1338 bytes)
	W0116 03:43:56.086711  507510 certs.go:433] ignoring /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/475478_empty.pem, impossibly tiny 0 bytes
	I0116 03:43:56.086721  507510 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca-key.pem (1679 bytes)
	I0116 03:43:56.086746  507510 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem (1078 bytes)
	I0116 03:43:56.086772  507510 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/cert.pem (1123 bytes)
	I0116 03:43:56.086795  507510 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/key.pem (1679 bytes)
	I0116 03:43:56.086833  507510 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17965-468241/.minikube/files/etc/ssl/certs/4754782.pem (1708 bytes)
	I0116 03:43:56.087557  507510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/old-k8s-version-696770/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0116 03:43:56.118148  507510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/old-k8s-version-696770/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0116 03:43:56.146632  507510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/old-k8s-version-696770/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0116 03:43:56.177146  507510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/old-k8s-version-696770/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0116 03:43:56.208800  507510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0116 03:43:56.237097  507510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0116 03:43:56.264559  507510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0116 03:43:56.294383  507510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0116 03:43:56.323966  507510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/certs/475478.pem --> /usr/share/ca-certificates/475478.pem (1338 bytes)
	I0116 03:43:56.350120  507510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/files/etc/ssl/certs/4754782.pem --> /usr/share/ca-certificates/4754782.pem (1708 bytes)
	I0116 03:43:56.379523  507510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0116 03:43:56.406312  507510 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0116 03:43:56.426149  507510 ssh_runner.go:195] Run: openssl version
	I0116 03:43:56.432150  507510 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0116 03:43:56.443200  507510 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0116 03:43:56.448268  507510 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 16 02:35 /usr/share/ca-certificates/minikubeCA.pem
	I0116 03:43:56.448343  507510 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0116 03:43:56.454227  507510 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0116 03:43:56.464467  507510 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/475478.pem && ln -fs /usr/share/ca-certificates/475478.pem /etc/ssl/certs/475478.pem"
	I0116 03:43:56.474769  507510 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/475478.pem
	I0116 03:43:56.480143  507510 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 16 02:44 /usr/share/ca-certificates/475478.pem
	I0116 03:43:56.480228  507510 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/475478.pem
	I0116 03:43:56.487996  507510 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/475478.pem /etc/ssl/certs/51391683.0"
	I0116 03:43:56.501097  507510 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4754782.pem && ln -fs /usr/share/ca-certificates/4754782.pem /etc/ssl/certs/4754782.pem"
	I0116 03:43:56.513266  507510 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4754782.pem
	I0116 03:43:56.518806  507510 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 16 02:44 /usr/share/ca-certificates/4754782.pem
	I0116 03:43:56.518891  507510 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4754782.pem
	I0116 03:43:56.527891  507510 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4754782.pem /etc/ssl/certs/3ec20f2e.0"
	I0116 03:43:56.538719  507510 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0116 03:43:56.544298  507510 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0116 03:43:56.551048  507510 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0116 03:43:56.557847  507510 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0116 03:43:56.567757  507510 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0116 03:43:56.575977  507510 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0116 03:43:56.584514  507510 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
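The `openssl x509 -checkend 86400` runs above verify that each existing control-plane certificate will still be valid 24 hours from now before the restart reuses it. A self-contained sketch of the same check in Go (the file path is a placeholder; this is not minikube's certs code) could be:

// cert_checkend_sketch.go: rough equivalent of `openssl x509 -noout -in <cert> -checkend 86400`.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func validFor(path string, d time.Duration) (bool, error) {
	raw, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// Same semantics as -checkend: will the certificate still be valid after d has elapsed?
	return time.Now().Add(d).Before(cert.NotAfter), nil
}

func main() {
	ok, err := validFor(os.Args[1], 24*time.Hour)
	if err != nil {
		panic(err)
	}
	fmt.Println("valid for the next 24h:", ok)
}
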
	I0116 03:43:56.593191  507510 kubeadm.go:404] StartCluster: {Name:old-k8s-version-696770 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-696770 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.167 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 03:43:56.593333  507510 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0116 03:43:56.593408  507510 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0116 03:43:56.653791  507510 cri.go:89] found id: ""
	I0116 03:43:56.653899  507510 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0116 03:43:56.667037  507510 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0116 03:43:56.667078  507510 kubeadm.go:636] restartCluster start
	I0116 03:43:56.667164  507510 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0116 03:43:56.679734  507510 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:43:56.681241  507510 kubeconfig.go:92] found "old-k8s-version-696770" server: "https://192.168.61.167:8443"
	I0116 03:43:56.683942  507510 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0116 03:43:56.696409  507510 api_server.go:166] Checking apiserver status ...
	I0116 03:43:56.696507  507510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:43:56.713120  507510 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:43:57.196652  507510 api_server.go:166] Checking apiserver status ...
	I0116 03:43:57.196826  507510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:43:57.213992  507510 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:43:57.697096  507510 api_server.go:166] Checking apiserver status ...
	I0116 03:43:57.697197  507510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:43:57.709671  507510 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:43:58.197291  507510 api_server.go:166] Checking apiserver status ...
	I0116 03:43:58.197401  507510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:43:58.214351  507510 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:43:58.696893  507510 api_server.go:166] Checking apiserver status ...
	I0116 03:43:58.697036  507510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:43:58.714549  507510 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:43:59.197173  507510 api_server.go:166] Checking apiserver status ...
	I0116 03:43:59.197304  507510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:43:59.213885  507510 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:43:56.773238  507339 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.280261968s)
	I0116 03:43:56.773267  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:43:57.046716  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:43:57.123831  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:43:57.221179  507339 api_server.go:52] waiting for apiserver process to appear ...
	I0116 03:43:57.221300  507339 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:43:57.721940  507339 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:43:58.222437  507339 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:43:58.722256  507339 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:43:59.222191  507339 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:43:59.721451  507339 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:43:59.753520  507339 api_server.go:72] duration metric: took 2.532341035s to wait for apiserver process to appear ...
	I0116 03:43:59.753556  507339 api_server.go:88] waiting for apiserver healthz status ...
	I0116 03:43:59.753601  507339 api_server.go:253] Checking apiserver healthz at https://192.168.39.103:8443/healthz ...
	I0116 03:43:59.754176  507339 api_server.go:269] stopped: https://192.168.39.103:8443/healthz: Get "https://192.168.39.103:8443/healthz": dial tcp 192.168.39.103:8443: connect: connection refused
	I0116 03:44:00.253773  507339 api_server.go:253] Checking apiserver healthz at https://192.168.39.103:8443/healthz ...
	I0116 03:43:57.000501  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:43:57.070966  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | unable to find current IP address of domain default-k8s-diff-port-434445 in network mk-default-k8s-diff-port-434445
	I0116 03:43:57.071015  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | I0116 03:43:57.000987  508579 retry.go:31] will retry after 1.963105502s: waiting for machine to come up
	I0116 03:43:58.966528  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:43:58.967131  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | unable to find current IP address of domain default-k8s-diff-port-434445 in network mk-default-k8s-diff-port-434445
	I0116 03:43:58.967173  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | I0116 03:43:58.967069  508579 retry.go:31] will retry after 2.871455928s: waiting for machine to come up
	I0116 03:43:59.697215  507510 api_server.go:166] Checking apiserver status ...
	I0116 03:43:59.697303  507510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:43:59.713992  507510 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:00.196535  507510 api_server.go:166] Checking apiserver status ...
	I0116 03:44:00.196649  507510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:00.212663  507510 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:00.697276  507510 api_server.go:166] Checking apiserver status ...
	I0116 03:44:00.697390  507510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:00.714622  507510 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:01.197125  507510 api_server.go:166] Checking apiserver status ...
	I0116 03:44:01.197242  507510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:01.214976  507510 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:01.696506  507510 api_server.go:166] Checking apiserver status ...
	I0116 03:44:01.696612  507510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:01.708204  507510 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:02.197402  507510 api_server.go:166] Checking apiserver status ...
	I0116 03:44:02.197519  507510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:02.211062  507510 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:02.697230  507510 api_server.go:166] Checking apiserver status ...
	I0116 03:44:02.697358  507510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:02.710340  507510 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:03.196949  507510 api_server.go:166] Checking apiserver status ...
	I0116 03:44:03.197047  507510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:03.213169  507510 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:03.696657  507510 api_server.go:166] Checking apiserver status ...
	I0116 03:44:03.696793  507510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:03.709422  507510 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:04.196970  507510 api_server.go:166] Checking apiserver status ...
	I0116 03:44:04.197083  507510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:04.209280  507510 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:03.473725  507339 api_server.go:279] https://192.168.39.103:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0116 03:44:03.473764  507339 api_server.go:103] status: https://192.168.39.103:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0116 03:44:03.473784  507339 api_server.go:253] Checking apiserver healthz at https://192.168.39.103:8443/healthz ...
	I0116 03:44:03.531825  507339 api_server.go:279] https://192.168.39.103:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0116 03:44:03.531873  507339 api_server.go:103] status: https://192.168.39.103:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0116 03:44:03.754148  507339 api_server.go:253] Checking apiserver healthz at https://192.168.39.103:8443/healthz ...
	I0116 03:44:03.759138  507339 api_server.go:279] https://192.168.39.103:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0116 03:44:03.759171  507339 api_server.go:103] status: https://192.168.39.103:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0116 03:44:04.254321  507339 api_server.go:253] Checking apiserver healthz at https://192.168.39.103:8443/healthz ...
	I0116 03:44:04.259317  507339 api_server.go:279] https://192.168.39.103:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0116 03:44:04.259350  507339 api_server.go:103] status: https://192.168.39.103:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0116 03:44:04.753890  507339 api_server.go:253] Checking apiserver healthz at https://192.168.39.103:8443/healthz ...
	I0116 03:44:04.759714  507339 api_server.go:279] https://192.168.39.103:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0116 03:44:04.759747  507339 api_server.go:103] status: https://192.168.39.103:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0116 03:44:05.254582  507339 api_server.go:253] Checking apiserver healthz at https://192.168.39.103:8443/healthz ...
	I0116 03:44:05.264904  507339 api_server.go:279] https://192.168.39.103:8443/healthz returned 200:
	ok
	I0116 03:44:05.283700  507339 api_server.go:141] control plane version: v1.29.0-rc.2
	I0116 03:44:05.283737  507339 api_server.go:131] duration metric: took 5.53017208s to wait for apiserver health ...
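The health wait above keeps re-requesting https://192.168.39.103:8443/healthz until it returns 200, tolerating the intermediate 403 and 500 responses while the apiserver's post-start hooks finish. A minimal sketch of that polling pattern (plain net/http with certificate verification skipped for the probe; the endpoint is a placeholder taken from the log, and this is not minikube's api_server.go) is:

// healthz_poll_sketch.go: poll an HTTPS healthz endpoint until it returns 200 or a timeout expires.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitHealthy(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver's serving certificate is not trusted on the probing host,
		// so skip verification for this health probe only.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // "returned 200: ok"
			}
			// 403 / 500 while post-start hooks complete: keep polling.
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s not healthy after %s", url, timeout)
}

func main() {
	if err := waitHealthy("https://192.168.39.103:8443/healthz", 2*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("apiserver healthy")
}
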
	I0116 03:44:05.283749  507339 cni.go:84] Creating CNI manager for ""
	I0116 03:44:05.283757  507339 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 03:44:05.285715  507339 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0116 03:44:05.287393  507339 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0116 03:44:05.327883  507339 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
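The 457-byte file copied to /etc/cni/net.d/1-k8s.conflist above is the bridge CNI configuration announced two lines earlier. The sketch below writes a hypothetical minimal bridge + host-local conflist for the 10.244.0.0/16 pod CIDR from the log; it is not the exact file minikube generates, only an illustration of the shape of such a configuration:

// cni_conflist_sketch.go: write a minimal, hypothetical bridge CNI conflist.
package main

import "os"

const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}
`

func main() {
	// Writes to the current directory; a real node would place it under /etc/cni/net.d/.
	if err := os.WriteFile("1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		panic(err)
	}
}
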
	I0116 03:44:05.371856  507339 system_pods.go:43] waiting for kube-system pods to appear ...
	I0116 03:44:05.382614  507339 system_pods.go:59] 8 kube-system pods found
	I0116 03:44:05.382656  507339 system_pods.go:61] "coredns-76f75df574-lr95b" [15dc0b11-f7ec-4729-bbfa-79b9649fbad6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0116 03:44:05.382666  507339 system_pods.go:61] "etcd-no-preload-666547" [f98dcc57-5869-4a35-b443-2e6c9beddd88] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0116 03:44:05.382682  507339 system_pods.go:61] "kube-apiserver-no-preload-666547" [3aceae9d-224a-4e8c-a02d-ad4199d8d558] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0116 03:44:05.382699  507339 system_pods.go:61] "kube-controller-manager-no-preload-666547" [af39c89c-3b41-4d6f-b075-16de04a3ecc0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0116 03:44:05.382706  507339 system_pods.go:61] "kube-proxy-dcmrn" [1e91c96f-cbc5-424d-a09e-06e34bf7a2e2] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0116 03:44:05.382714  507339 system_pods.go:61] "kube-scheduler-no-preload-666547" [0f2aa713-aebc-4858-a348-32169021235e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0116 03:44:05.382723  507339 system_pods.go:61] "metrics-server-57f55c9bc5-78vfj" [dbd2d3b2-ec0f-4253-8549-7c4299522c37] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:44:05.382735  507339 system_pods.go:61] "storage-provisioner" [f4e1ba45-217d-41d5-b583-2f60044879bc] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0116 03:44:05.382749  507339 system_pods.go:74] duration metric: took 10.858851ms to wait for pod list to return data ...
	I0116 03:44:05.382760  507339 node_conditions.go:102] verifying NodePressure condition ...
	I0116 03:44:05.391050  507339 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 03:44:05.391112  507339 node_conditions.go:123] node cpu capacity is 2
	I0116 03:44:05.391128  507339 node_conditions.go:105] duration metric: took 8.361426ms to run NodePressure ...
	I0116 03:44:05.391152  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:44:01.840907  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:01.841317  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | unable to find current IP address of domain default-k8s-diff-port-434445 in network mk-default-k8s-diff-port-434445
	I0116 03:44:01.841361  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | I0116 03:44:01.841259  508579 retry.go:31] will retry after 3.769759015s: waiting for machine to come up
	I0116 03:44:05.613594  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:05.614119  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | unable to find current IP address of domain default-k8s-diff-port-434445 in network mk-default-k8s-diff-port-434445
	I0116 03:44:05.614149  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | I0116 03:44:05.614062  508579 retry.go:31] will retry after 3.5833584s: waiting for machine to come up
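The libmachine lines above ("will retry after 1.96s … 2.87s … 3.77s …") come from a jittered retry loop that waits for the default-k8s-diff-port VM to obtain an IP address. A generic sketch of that retry-with-growing-randomized-delay pattern (not minikube's retry.go) is:

// retry_sketch.go: retry an operation with a growing, jittered delay between attempts.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func retry(attempts int, base time.Duration, op func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = op(); err == nil {
			return nil
		}
		// Delay grows with the attempt number and gets up to 50% random jitter,
		// roughly matching the increasing "will retry after ..." intervals in the log.
		d := base * time.Duration(i+1)
		d += time.Duration(rand.Int63n(int64(d) / 2))
		fmt.Printf("will retry after %s: %v\n", d, err)
		time.Sleep(d)
	}
	return err
}

func main() {
	calls := 0
	err := retry(5, time.Second, func() error {
		calls++
		if calls < 3 {
			return errors.New("waiting for machine to come up")
		}
		return nil
	})
	fmt.Println("done:", err)
}
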
	I0116 03:44:05.740205  507339 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0116 03:44:05.745269  507339 kubeadm.go:787] kubelet initialised
	I0116 03:44:05.745297  507339 kubeadm.go:788] duration metric: took 5.059802ms waiting for restarted kubelet to initialise ...
	I0116 03:44:05.745306  507339 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 03:44:05.751403  507339 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-lr95b" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:05.761740  507339 pod_ready.go:97] node "no-preload-666547" hosting pod "coredns-76f75df574-lr95b" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-666547" has status "Ready":"False"
	I0116 03:44:05.761784  507339 pod_ready.go:81] duration metric: took 10.344994ms waiting for pod "coredns-76f75df574-lr95b" in "kube-system" namespace to be "Ready" ...
	E0116 03:44:05.761796  507339 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-666547" hosting pod "coredns-76f75df574-lr95b" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-666547" has status "Ready":"False"
	I0116 03:44:05.761812  507339 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-666547" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:05.767627  507339 pod_ready.go:97] node "no-preload-666547" hosting pod "etcd-no-preload-666547" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-666547" has status "Ready":"False"
	I0116 03:44:05.767657  507339 pod_ready.go:81] duration metric: took 5.831478ms waiting for pod "etcd-no-preload-666547" in "kube-system" namespace to be "Ready" ...
	E0116 03:44:05.767669  507339 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-666547" hosting pod "etcd-no-preload-666547" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-666547" has status "Ready":"False"
	I0116 03:44:05.767677  507339 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-666547" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:05.772833  507339 pod_ready.go:97] node "no-preload-666547" hosting pod "kube-apiserver-no-preload-666547" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-666547" has status "Ready":"False"
	I0116 03:44:05.772863  507339 pod_ready.go:81] duration metric: took 5.17797ms waiting for pod "kube-apiserver-no-preload-666547" in "kube-system" namespace to be "Ready" ...
	E0116 03:44:05.772876  507339 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-666547" hosting pod "kube-apiserver-no-preload-666547" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-666547" has status "Ready":"False"
	I0116 03:44:05.772884  507339 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-666547" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:05.779234  507339 pod_ready.go:97] node "no-preload-666547" hosting pod "kube-controller-manager-no-preload-666547" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-666547" has status "Ready":"False"
	I0116 03:44:05.779259  507339 pod_ready.go:81] duration metric: took 6.362264ms waiting for pod "kube-controller-manager-no-preload-666547" in "kube-system" namespace to be "Ready" ...
	E0116 03:44:05.779270  507339 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-666547" hosting pod "kube-controller-manager-no-preload-666547" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-666547" has status "Ready":"False"
	I0116 03:44:05.779277  507339 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-dcmrn" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:06.175807  507339 pod_ready.go:97] node "no-preload-666547" hosting pod "kube-proxy-dcmrn" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-666547" has status "Ready":"False"
	I0116 03:44:06.175846  507339 pod_ready.go:81] duration metric: took 396.551923ms waiting for pod "kube-proxy-dcmrn" in "kube-system" namespace to be "Ready" ...
	E0116 03:44:06.175859  507339 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-666547" hosting pod "kube-proxy-dcmrn" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-666547" has status "Ready":"False"
	I0116 03:44:06.175867  507339 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-666547" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:06.580068  507339 pod_ready.go:97] node "no-preload-666547" hosting pod "kube-scheduler-no-preload-666547" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-666547" has status "Ready":"False"
	I0116 03:44:06.580102  507339 pod_ready.go:81] duration metric: took 404.226447ms waiting for pod "kube-scheduler-no-preload-666547" in "kube-system" namespace to be "Ready" ...
	E0116 03:44:06.580119  507339 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-666547" hosting pod "kube-scheduler-no-preload-666547" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-666547" has status "Ready":"False"
	I0116 03:44:06.580128  507339 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:06.976542  507339 pod_ready.go:97] node "no-preload-666547" hosting pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-666547" has status "Ready":"False"
	I0116 03:44:06.976573  507339 pod_ready.go:81] duration metric: took 396.432925ms waiting for pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace to be "Ready" ...
	E0116 03:44:06.976590  507339 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-666547" hosting pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-666547" has status "Ready":"False"
	I0116 03:44:06.976596  507339 pod_ready.go:38] duration metric: took 1.231281598s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
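The pod_ready.go lines above wait, pod by pod, for the Ready condition and bail out early while the node itself still reports "Ready":"False". As a hedged client-go sketch of the basic "wait until a pod is Ready" loop (kubeconfig path, namespace, and pod name are placeholders taken from the log; this is not minikube's implementation and assumes the k8s.io/client-go dependency):

// wait_pod_ready_sketch.go: poll a pod until its Ready condition is True.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	// Poll every 2s for up to 4m, mirroring the "waiting up to 4m0s" lines above.
	err = wait.PollImmediate(2*time.Second, 4*time.Minute, func() (bool, error) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-no-preload-666547", metav1.GetOptions{})
		if err != nil {
			return false, nil // transient errors: keep polling
		}
		return podReady(pod), nil
	})
	fmt.Println("ready:", err == nil)
}
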
	I0116 03:44:06.976621  507339 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0116 03:44:06.988884  507339 ops.go:34] apiserver oom_adj: -16
	I0116 03:44:06.988916  507339 kubeadm.go:640] restartCluster took 21.755069193s
	I0116 03:44:06.988940  507339 kubeadm.go:406] StartCluster complete in 21.811388098s
	I0116 03:44:06.988970  507339 settings.go:142] acquiring lock: {Name:mk3ded99c06d285b174e76622bb0741c8dd2d8fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:44:06.989066  507339 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17965-468241/kubeconfig
	I0116 03:44:06.990912  507339 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17965-468241/kubeconfig: {Name:mkac3c63bbcba9b4fa1bea3480797757d703fb9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:44:06.991191  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0116 03:44:06.991241  507339 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0116 03:44:06.991341  507339 addons.go:69] Setting storage-provisioner=true in profile "no-preload-666547"
	I0116 03:44:06.991362  507339 addons.go:234] Setting addon storage-provisioner=true in "no-preload-666547"
	I0116 03:44:06.991364  507339 addons.go:69] Setting default-storageclass=true in profile "no-preload-666547"
	W0116 03:44:06.991370  507339 addons.go:243] addon storage-provisioner should already be in state true
	I0116 03:44:06.991388  507339 addons.go:69] Setting metrics-server=true in profile "no-preload-666547"
	I0116 03:44:06.991397  507339 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-666547"
	I0116 03:44:06.991404  507339 addons.go:234] Setting addon metrics-server=true in "no-preload-666547"
	W0116 03:44:06.991412  507339 addons.go:243] addon metrics-server should already be in state true
	I0116 03:44:06.991438  507339 host.go:66] Checking if "no-preload-666547" exists ...
	I0116 03:44:06.991451  507339 host.go:66] Checking if "no-preload-666547" exists ...
	I0116 03:44:06.991460  507339 config.go:182] Loaded profile config "no-preload-666547": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0116 03:44:06.991855  507339 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:44:06.991855  507339 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:44:06.991893  507339 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:44:06.991858  507339 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:44:06.991940  507339 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:44:06.991976  507339 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:44:06.998037  507339 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-666547" context rescaled to 1 replicas
	I0116 03:44:06.998086  507339 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.103 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0116 03:44:07.000312  507339 out.go:177] * Verifying Kubernetes components...
	I0116 03:44:07.001889  507339 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 03:44:07.009057  507339 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41459
	I0116 03:44:07.009097  507339 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38693
	I0116 03:44:07.009596  507339 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:44:07.009735  507339 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:44:07.010178  507339 main.go:141] libmachine: Using API Version  1
	I0116 03:44:07.010195  507339 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:44:07.010368  507339 main.go:141] libmachine: Using API Version  1
	I0116 03:44:07.010392  507339 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:44:07.010412  507339 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33905
	I0116 03:44:07.010763  507339 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:44:07.010822  507339 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:44:07.010829  507339 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:44:07.010945  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetState
	I0116 03:44:07.011314  507339 main.go:141] libmachine: Using API Version  1
	I0116 03:44:07.011346  507339 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:44:07.011955  507339 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:44:07.011956  507339 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:44:07.012052  507339 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:44:07.012511  507339 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:44:07.012547  507339 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:44:07.015214  507339 addons.go:234] Setting addon default-storageclass=true in "no-preload-666547"
	W0116 03:44:07.015237  507339 addons.go:243] addon default-storageclass should already be in state true
	I0116 03:44:07.015269  507339 host.go:66] Checking if "no-preload-666547" exists ...
	I0116 03:44:07.015718  507339 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:44:07.015772  507339 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:44:07.029747  507339 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32807
	I0116 03:44:07.029990  507339 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35757
	I0116 03:44:07.030392  507339 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:44:07.030448  507339 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:44:07.030948  507339 main.go:141] libmachine: Using API Version  1
	I0116 03:44:07.030970  507339 main.go:141] libmachine: Using API Version  1
	I0116 03:44:07.030986  507339 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:44:07.031046  507339 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:44:07.031393  507339 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:44:07.031443  507339 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:44:07.031603  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetState
	I0116 03:44:07.031660  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetState
	I0116 03:44:07.033898  507339 main.go:141] libmachine: (no-preload-666547) Calling .DriverName
	I0116 03:44:07.033990  507339 main.go:141] libmachine: (no-preload-666547) Calling .DriverName
	I0116 03:44:07.036581  507339 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0116 03:44:07.034407  507339 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38687
	I0116 03:44:07.038382  507339 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0116 03:44:07.038420  507339 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0116 03:44:07.038444  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHHostname
	I0116 03:44:07.038499  507339 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 03:44:07.040190  507339 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0116 03:44:07.040211  507339 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0116 03:44:07.040232  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHHostname
	I0116 03:44:07.039010  507339 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:44:07.040908  507339 main.go:141] libmachine: Using API Version  1
	I0116 03:44:07.040931  507339 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:44:07.041538  507339 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:44:07.042268  507339 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:44:07.042319  507339 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:44:07.043270  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:44:07.043665  507339 main.go:141] libmachine: (no-preload-666547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:5f:03", ip: ""} in network mk-no-preload-666547: {Iface:virbr2 ExpiryTime:2024-01-16 04:43:15 +0000 UTC Type:0 Mac:52:54:00:4e:5f:03 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:no-preload-666547 Clientid:01:52:54:00:4e:5f:03}
	I0116 03:44:07.043697  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined IP address 192.168.39.103 and MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:44:07.043730  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:44:07.043966  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHPort
	I0116 03:44:07.044196  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHKeyPath
	I0116 03:44:07.044381  507339 main.go:141] libmachine: (no-preload-666547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:5f:03", ip: ""} in network mk-no-preload-666547: {Iface:virbr2 ExpiryTime:2024-01-16 04:43:15 +0000 UTC Type:0 Mac:52:54:00:4e:5f:03 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:no-preload-666547 Clientid:01:52:54:00:4e:5f:03}
	I0116 03:44:07.044422  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined IP address 192.168.39.103 and MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:44:07.044456  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHUsername
	I0116 03:44:07.044566  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHPort
	I0116 03:44:07.044691  507339 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/no-preload-666547/id_rsa Username:docker}
	I0116 03:44:07.044716  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHKeyPath
	I0116 03:44:07.044878  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHUsername
	I0116 03:44:07.045028  507339 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/no-preload-666547/id_rsa Username:docker}
	I0116 03:44:07.084507  507339 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46191
	I0116 03:44:07.085014  507339 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:44:07.085601  507339 main.go:141] libmachine: Using API Version  1
	I0116 03:44:07.085636  507339 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:44:07.086005  507339 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:44:07.086202  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetState
	I0116 03:44:07.088199  507339 main.go:141] libmachine: (no-preload-666547) Calling .DriverName
	I0116 03:44:07.088513  507339 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0116 03:44:07.088532  507339 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0116 03:44:07.088555  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHHostname
	I0116 03:44:07.092194  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:44:07.092719  507339 main.go:141] libmachine: (no-preload-666547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:5f:03", ip: ""} in network mk-no-preload-666547: {Iface:virbr2 ExpiryTime:2024-01-16 04:43:15 +0000 UTC Type:0 Mac:52:54:00:4e:5f:03 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:no-preload-666547 Clientid:01:52:54:00:4e:5f:03}
	I0116 03:44:07.092745  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined IP address 192.168.39.103 and MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:44:07.092953  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHPort
	I0116 03:44:07.093219  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHKeyPath
	I0116 03:44:07.093384  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHUsername
	I0116 03:44:07.093590  507339 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/no-preload-666547/id_rsa Username:docker}
	I0116 03:44:07.196191  507339 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0116 03:44:07.196219  507339 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0116 03:44:07.201036  507339 node_ready.go:35] waiting up to 6m0s for node "no-preload-666547" to be "Ready" ...
	I0116 03:44:07.201055  507339 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0116 03:44:07.222924  507339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0116 03:44:07.224548  507339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0116 03:44:07.237091  507339 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0116 03:44:07.237119  507339 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0116 03:44:07.289312  507339 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0116 03:44:07.289342  507339 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0116 03:44:07.334708  507339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0116 03:44:07.583740  507339 main.go:141] libmachine: Making call to close driver server
	I0116 03:44:07.583773  507339 main.go:141] libmachine: (no-preload-666547) Calling .Close
	I0116 03:44:07.584079  507339 main.go:141] libmachine: (no-preload-666547) DBG | Closing plugin on server side
	I0116 03:44:07.584135  507339 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:44:07.584146  507339 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:44:07.584155  507339 main.go:141] libmachine: Making call to close driver server
	I0116 03:44:07.584170  507339 main.go:141] libmachine: (no-preload-666547) Calling .Close
	I0116 03:44:07.584405  507339 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:44:07.584423  507339 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:44:07.592304  507339 main.go:141] libmachine: Making call to close driver server
	I0116 03:44:07.592332  507339 main.go:141] libmachine: (no-preload-666547) Calling .Close
	I0116 03:44:07.592608  507339 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:44:07.592656  507339 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:44:07.592663  507339 main.go:141] libmachine: (no-preload-666547) DBG | Closing plugin on server side
	I0116 03:44:08.290558  507339 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.065965685s)
	I0116 03:44:08.290643  507339 main.go:141] libmachine: Making call to close driver server
	I0116 03:44:08.290665  507339 main.go:141] libmachine: (no-preload-666547) Calling .Close
	I0116 03:44:08.291042  507339 main.go:141] libmachine: (no-preload-666547) DBG | Closing plugin on server side
	I0116 03:44:08.291103  507339 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:44:08.291121  507339 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:44:08.291136  507339 main.go:141] libmachine: Making call to close driver server
	I0116 03:44:08.291147  507339 main.go:141] libmachine: (no-preload-666547) Calling .Close
	I0116 03:44:08.291380  507339 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:44:08.291396  507339 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:44:08.291416  507339 main.go:141] libmachine: (no-preload-666547) DBG | Closing plugin on server side
	I0116 03:44:08.468146  507339 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.133348135s)
	I0116 03:44:08.468223  507339 main.go:141] libmachine: Making call to close driver server
	I0116 03:44:08.468248  507339 main.go:141] libmachine: (no-preload-666547) Calling .Close
	I0116 03:44:08.470360  507339 main.go:141] libmachine: (no-preload-666547) DBG | Closing plugin on server side
	I0116 03:44:08.470367  507339 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:44:08.470397  507339 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:44:08.470412  507339 main.go:141] libmachine: Making call to close driver server
	I0116 03:44:08.470423  507339 main.go:141] libmachine: (no-preload-666547) Calling .Close
	I0116 03:44:08.470734  507339 main.go:141] libmachine: (no-preload-666547) DBG | Closing plugin on server side
	I0116 03:44:08.470749  507339 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:44:08.470764  507339 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:44:08.470776  507339 addons.go:470] Verifying addon metrics-server=true in "no-preload-666547"
	I0116 03:44:08.473092  507339 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0116 03:44:04.697359  507510 api_server.go:166] Checking apiserver status ...
	I0116 03:44:04.697510  507510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:04.714690  507510 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:05.197225  507510 api_server.go:166] Checking apiserver status ...
	I0116 03:44:05.197333  507510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:05.213923  507510 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:05.696541  507510 api_server.go:166] Checking apiserver status ...
	I0116 03:44:05.696632  507510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:05.713744  507510 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:06.197249  507510 api_server.go:166] Checking apiserver status ...
	I0116 03:44:06.197369  507510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:06.209148  507510 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:06.696967  507510 api_server.go:166] Checking apiserver status ...
	I0116 03:44:06.697083  507510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:06.709624  507510 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:06.709656  507510 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0116 03:44:06.709665  507510 kubeadm.go:1135] stopping kube-system containers ...
	I0116 03:44:06.709676  507510 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0116 03:44:06.709736  507510 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0116 03:44:06.753286  507510 cri.go:89] found id: ""
	I0116 03:44:06.753370  507510 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0116 03:44:06.769990  507510 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0116 03:44:06.781090  507510 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0116 03:44:06.781168  507510 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0116 03:44:06.790936  507510 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0116 03:44:06.790971  507510 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:44:06.915790  507510 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:44:08.112494  507510 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.196668404s)
	I0116 03:44:08.112528  507510 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:44:08.328365  507510 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:44:08.435410  507510 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:44:08.576950  507510 api_server.go:52] waiting for apiserver process to appear ...
	I0116 03:44:08.577077  507510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:44:09.077263  507510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:44:08.474544  507339 addons.go:505] enable addons completed in 1.483307386s: enabled=[default-storageclass storage-provisioner metrics-server]
	I0116 03:44:09.206584  507339 node_ready.go:58] node "no-preload-666547" has status "Ready":"False"
	I0116 03:44:10.997580  507257 start.go:369] acquired machines lock for "embed-certs-615980" in 1m2.194717115s
	I0116 03:44:10.997669  507257 start.go:96] Skipping create...Using existing machine configuration
	I0116 03:44:10.997681  507257 fix.go:54] fixHost starting: 
	I0116 03:44:10.998101  507257 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:44:10.998135  507257 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:44:11.017060  507257 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38343
	I0116 03:44:11.017687  507257 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:44:11.018295  507257 main.go:141] libmachine: Using API Version  1
	I0116 03:44:11.018326  507257 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:44:11.018673  507257 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:44:11.018879  507257 main.go:141] libmachine: (embed-certs-615980) Calling .DriverName
	I0116 03:44:11.019056  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetState
	I0116 03:44:11.021360  507257 fix.go:102] recreateIfNeeded on embed-certs-615980: state=Stopped err=<nil>
	I0116 03:44:11.021396  507257 main.go:141] libmachine: (embed-certs-615980) Calling .DriverName
	W0116 03:44:11.021577  507257 fix.go:128] unexpected machine state, will restart: <nil>
	I0116 03:44:11.023462  507257 out.go:177] * Restarting existing kvm2 VM for "embed-certs-615980" ...
	I0116 03:44:11.025158  507257 main.go:141] libmachine: (embed-certs-615980) Calling .Start
	I0116 03:44:11.025397  507257 main.go:141] libmachine: (embed-certs-615980) Ensuring networks are active...
	I0116 03:44:11.026354  507257 main.go:141] libmachine: (embed-certs-615980) Ensuring network default is active
	I0116 03:44:11.026830  507257 main.go:141] libmachine: (embed-certs-615980) Ensuring network mk-embed-certs-615980 is active
	I0116 03:44:11.027263  507257 main.go:141] libmachine: (embed-certs-615980) Getting domain xml...
	I0116 03:44:11.028182  507257 main.go:141] libmachine: (embed-certs-615980) Creating domain...
	I0116 03:44:09.198824  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:09.199284  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Found IP for machine: 192.168.50.236
	I0116 03:44:09.199318  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Reserving static IP address...
	I0116 03:44:09.199348  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has current primary IP address 192.168.50.236 and MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:09.199756  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-434445", mac: "52:54:00:78:ea:d5", ip: "192.168.50.236"} in network mk-default-k8s-diff-port-434445: {Iface:virbr1 ExpiryTime:2024-01-16 04:44:01 +0000 UTC Type:0 Mac:52:54:00:78:ea:d5 Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:default-k8s-diff-port-434445 Clientid:01:52:54:00:78:ea:d5}
	I0116 03:44:09.199781  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | skip adding static IP to network mk-default-k8s-diff-port-434445 - found existing host DHCP lease matching {name: "default-k8s-diff-port-434445", mac: "52:54:00:78:ea:d5", ip: "192.168.50.236"}
	I0116 03:44:09.199794  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Reserved static IP address: 192.168.50.236
	I0116 03:44:09.199808  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Waiting for SSH to be available...
	I0116 03:44:09.199832  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | Getting to WaitForSSH function...
	I0116 03:44:09.202093  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:09.202494  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ea:d5", ip: ""} in network mk-default-k8s-diff-port-434445: {Iface:virbr1 ExpiryTime:2024-01-16 04:44:01 +0000 UTC Type:0 Mac:52:54:00:78:ea:d5 Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:default-k8s-diff-port-434445 Clientid:01:52:54:00:78:ea:d5}
	I0116 03:44:09.202529  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined IP address 192.168.50.236 and MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:09.202664  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | Using SSH client type: external
	I0116 03:44:09.202690  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | Using SSH private key: /home/jenkins/minikube-integration/17965-468241/.minikube/machines/default-k8s-diff-port-434445/id_rsa (-rw-------)
	I0116 03:44:09.202723  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.236 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17965-468241/.minikube/machines/default-k8s-diff-port-434445/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0116 03:44:09.202746  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | About to run SSH command:
	I0116 03:44:09.202763  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | exit 0
	I0116 03:44:09.302425  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | SSH cmd err, output: <nil>: 
	I0116 03:44:09.302867  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetConfigRaw
	I0116 03:44:09.303666  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetIP
	I0116 03:44:09.306482  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:09.306884  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ea:d5", ip: ""} in network mk-default-k8s-diff-port-434445: {Iface:virbr1 ExpiryTime:2024-01-16 04:44:01 +0000 UTC Type:0 Mac:52:54:00:78:ea:d5 Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:default-k8s-diff-port-434445 Clientid:01:52:54:00:78:ea:d5}
	I0116 03:44:09.306920  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined IP address 192.168.50.236 and MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:09.307189  507889 profile.go:148] Saving config to /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/default-k8s-diff-port-434445/config.json ...
	I0116 03:44:09.307418  507889 machine.go:88] provisioning docker machine ...
	I0116 03:44:09.307437  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .DriverName
	I0116 03:44:09.307673  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetMachineName
	I0116 03:44:09.307865  507889 buildroot.go:166] provisioning hostname "default-k8s-diff-port-434445"
	I0116 03:44:09.307886  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetMachineName
	I0116 03:44:09.308073  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHHostname
	I0116 03:44:09.310375  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:09.310726  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ea:d5", ip: ""} in network mk-default-k8s-diff-port-434445: {Iface:virbr1 ExpiryTime:2024-01-16 04:44:01 +0000 UTC Type:0 Mac:52:54:00:78:ea:d5 Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:default-k8s-diff-port-434445 Clientid:01:52:54:00:78:ea:d5}
	I0116 03:44:09.310765  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined IP address 192.168.50.236 and MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:09.310920  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHPort
	I0116 03:44:09.311111  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHKeyPath
	I0116 03:44:09.311231  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHKeyPath
	I0116 03:44:09.311384  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHUsername
	I0116 03:44:09.311528  507889 main.go:141] libmachine: Using SSH client type: native
	I0116 03:44:09.311932  507889 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.50.236 22 <nil> <nil>}
	I0116 03:44:09.311949  507889 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-434445 && echo "default-k8s-diff-port-434445" | sudo tee /etc/hostname
	I0116 03:44:09.469340  507889 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-434445
	
	I0116 03:44:09.469384  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHHostname
	I0116 03:44:09.472788  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:09.473108  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ea:d5", ip: ""} in network mk-default-k8s-diff-port-434445: {Iface:virbr1 ExpiryTime:2024-01-16 04:44:01 +0000 UTC Type:0 Mac:52:54:00:78:ea:d5 Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:default-k8s-diff-port-434445 Clientid:01:52:54:00:78:ea:d5}
	I0116 03:44:09.473166  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined IP address 192.168.50.236 and MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:09.473353  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHPort
	I0116 03:44:09.473571  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHKeyPath
	I0116 03:44:09.473768  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHKeyPath
	I0116 03:44:09.473963  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHUsername
	I0116 03:44:09.474171  507889 main.go:141] libmachine: Using SSH client type: native
	I0116 03:44:09.474626  507889 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.50.236 22 <nil> <nil>}
	I0116 03:44:09.474657  507889 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-434445' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-434445/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-434445' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0116 03:44:09.622177  507889 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0116 03:44:09.622223  507889 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17965-468241/.minikube CaCertPath:/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17965-468241/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17965-468241/.minikube}
	I0116 03:44:09.622253  507889 buildroot.go:174] setting up certificates
	I0116 03:44:09.622267  507889 provision.go:83] configureAuth start
	I0116 03:44:09.622280  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetMachineName
	I0116 03:44:09.622649  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetIP
	I0116 03:44:09.625970  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:09.626394  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ea:d5", ip: ""} in network mk-default-k8s-diff-port-434445: {Iface:virbr1 ExpiryTime:2024-01-16 04:44:01 +0000 UTC Type:0 Mac:52:54:00:78:ea:d5 Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:default-k8s-diff-port-434445 Clientid:01:52:54:00:78:ea:d5}
	I0116 03:44:09.626429  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined IP address 192.168.50.236 and MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:09.626603  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHHostname
	I0116 03:44:09.629623  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:09.630022  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ea:d5", ip: ""} in network mk-default-k8s-diff-port-434445: {Iface:virbr1 ExpiryTime:2024-01-16 04:44:01 +0000 UTC Type:0 Mac:52:54:00:78:ea:d5 Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:default-k8s-diff-port-434445 Clientid:01:52:54:00:78:ea:d5}
	I0116 03:44:09.630052  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined IP address 192.168.50.236 and MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:09.630263  507889 provision.go:138] copyHostCerts
	I0116 03:44:09.630354  507889 exec_runner.go:144] found /home/jenkins/minikube-integration/17965-468241/.minikube/ca.pem, removing ...
	I0116 03:44:09.630370  507889 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17965-468241/.minikube/ca.pem
	I0116 03:44:09.630449  507889 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17965-468241/.minikube/ca.pem (1078 bytes)
	I0116 03:44:09.630603  507889 exec_runner.go:144] found /home/jenkins/minikube-integration/17965-468241/.minikube/cert.pem, removing ...
	I0116 03:44:09.630626  507889 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17965-468241/.minikube/cert.pem
	I0116 03:44:09.630661  507889 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17965-468241/.minikube/cert.pem (1123 bytes)
	I0116 03:44:09.630760  507889 exec_runner.go:144] found /home/jenkins/minikube-integration/17965-468241/.minikube/key.pem, removing ...
	I0116 03:44:09.630775  507889 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17965-468241/.minikube/key.pem
	I0116 03:44:09.630805  507889 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17965-468241/.minikube/key.pem (1679 bytes)
	I0116 03:44:09.630891  507889 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17965-468241/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-434445 san=[192.168.50.236 192.168.50.236 localhost 127.0.0.1 minikube default-k8s-diff-port-434445]
	I0116 03:44:10.127058  507889 provision.go:172] copyRemoteCerts
	I0116 03:44:10.127138  507889 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0116 03:44:10.127175  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHHostname
	I0116 03:44:10.130572  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:10.131095  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ea:d5", ip: ""} in network mk-default-k8s-diff-port-434445: {Iface:virbr1 ExpiryTime:2024-01-16 04:44:01 +0000 UTC Type:0 Mac:52:54:00:78:ea:d5 Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:default-k8s-diff-port-434445 Clientid:01:52:54:00:78:ea:d5}
	I0116 03:44:10.131133  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined IP address 192.168.50.236 and MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:10.131313  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHPort
	I0116 03:44:10.131590  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHKeyPath
	I0116 03:44:10.131825  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHUsername
	I0116 03:44:10.132001  507889 sshutil.go:53] new ssh client: &{IP:192.168.50.236 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/default-k8s-diff-port-434445/id_rsa Username:docker}
	I0116 03:44:10.238263  507889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0116 03:44:10.269567  507889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0116 03:44:10.295065  507889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0116 03:44:10.323347  507889 provision.go:86] duration metric: configureAuth took 701.062063ms
	I0116 03:44:10.323391  507889 buildroot.go:189] setting minikube options for container-runtime
	I0116 03:44:10.323667  507889 config.go:182] Loaded profile config "default-k8s-diff-port-434445": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 03:44:10.323774  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHHostname
	I0116 03:44:10.326825  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:10.327222  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ea:d5", ip: ""} in network mk-default-k8s-diff-port-434445: {Iface:virbr1 ExpiryTime:2024-01-16 04:44:01 +0000 UTC Type:0 Mac:52:54:00:78:ea:d5 Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:default-k8s-diff-port-434445 Clientid:01:52:54:00:78:ea:d5}
	I0116 03:44:10.327266  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined IP address 192.168.50.236 and MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:10.327423  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHPort
	I0116 03:44:10.327682  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHKeyPath
	I0116 03:44:10.327883  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHKeyPath
	I0116 03:44:10.328077  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHUsername
	I0116 03:44:10.328269  507889 main.go:141] libmachine: Using SSH client type: native
	I0116 03:44:10.328743  507889 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.50.236 22 <nil> <nil>}
	I0116 03:44:10.328778  507889 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0116 03:44:10.700188  507889 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0116 03:44:10.700221  507889 machine.go:91] provisioned docker machine in 1.392790129s
	I0116 03:44:10.700232  507889 start.go:300] post-start starting for "default-k8s-diff-port-434445" (driver="kvm2")
	I0116 03:44:10.700244  507889 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0116 03:44:10.700261  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .DriverName
	I0116 03:44:10.700745  507889 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0116 03:44:10.700786  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHHostname
	I0116 03:44:10.704466  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:10.705001  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ea:d5", ip: ""} in network mk-default-k8s-diff-port-434445: {Iface:virbr1 ExpiryTime:2024-01-16 04:44:01 +0000 UTC Type:0 Mac:52:54:00:78:ea:d5 Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:default-k8s-diff-port-434445 Clientid:01:52:54:00:78:ea:d5}
	I0116 03:44:10.705045  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined IP address 192.168.50.236 and MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:10.705278  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHPort
	I0116 03:44:10.705503  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHKeyPath
	I0116 03:44:10.705735  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHUsername
	I0116 03:44:10.705912  507889 sshutil.go:53] new ssh client: &{IP:192.168.50.236 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/default-k8s-diff-port-434445/id_rsa Username:docker}
	I0116 03:44:10.807625  507889 ssh_runner.go:195] Run: cat /etc/os-release
	I0116 03:44:10.813392  507889 info.go:137] Remote host: Buildroot 2021.02.12
	I0116 03:44:10.813428  507889 filesync.go:126] Scanning /home/jenkins/minikube-integration/17965-468241/.minikube/addons for local assets ...
	I0116 03:44:10.813519  507889 filesync.go:126] Scanning /home/jenkins/minikube-integration/17965-468241/.minikube/files for local assets ...
	I0116 03:44:10.813596  507889 filesync.go:149] local asset: /home/jenkins/minikube-integration/17965-468241/.minikube/files/etc/ssl/certs/4754782.pem -> 4754782.pem in /etc/ssl/certs
	I0116 03:44:10.813687  507889 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0116 03:44:10.824028  507889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/files/etc/ssl/certs/4754782.pem --> /etc/ssl/certs/4754782.pem (1708 bytes)
	I0116 03:44:10.853453  507889 start.go:303] post-start completed in 153.201453ms
	I0116 03:44:10.853493  507889 fix.go:56] fixHost completed within 22.144172966s
	I0116 03:44:10.853543  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHHostname
	I0116 03:44:10.856529  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:10.856907  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ea:d5", ip: ""} in network mk-default-k8s-diff-port-434445: {Iface:virbr1 ExpiryTime:2024-01-16 04:44:01 +0000 UTC Type:0 Mac:52:54:00:78:ea:d5 Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:default-k8s-diff-port-434445 Clientid:01:52:54:00:78:ea:d5}
	I0116 03:44:10.856967  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined IP address 192.168.50.236 and MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:10.857185  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHPort
	I0116 03:44:10.857438  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHKeyPath
	I0116 03:44:10.857636  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHKeyPath
	I0116 03:44:10.857790  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHUsername
	I0116 03:44:10.857974  507889 main.go:141] libmachine: Using SSH client type: native
	I0116 03:44:10.858502  507889 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.50.236 22 <nil> <nil>}
	I0116 03:44:10.858528  507889 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0116 03:44:10.997398  507889 main.go:141] libmachine: SSH cmd err, output: <nil>: 1705376650.933903671
	
	I0116 03:44:10.997426  507889 fix.go:206] guest clock: 1705376650.933903671
	I0116 03:44:10.997436  507889 fix.go:219] Guest: 2024-01-16 03:44:10.933903671 +0000 UTC Remote: 2024-01-16 03:44:10.853498317 +0000 UTC m=+234.302480786 (delta=80.405354ms)
	I0116 03:44:10.997464  507889 fix.go:190] guest clock delta is within tolerance: 80.405354ms
	I0116 03:44:10.997471  507889 start.go:83] releasing machines lock for "default-k8s-diff-port-434445", held for 22.288188395s
	I0116 03:44:10.997517  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .DriverName
	I0116 03:44:10.997857  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetIP
	I0116 03:44:11.001410  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:11.001814  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ea:d5", ip: ""} in network mk-default-k8s-diff-port-434445: {Iface:virbr1 ExpiryTime:2024-01-16 04:44:01 +0000 UTC Type:0 Mac:52:54:00:78:ea:d5 Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:default-k8s-diff-port-434445 Clientid:01:52:54:00:78:ea:d5}
	I0116 03:44:11.001864  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined IP address 192.168.50.236 and MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:11.002016  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .DriverName
	I0116 03:44:11.002649  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .DriverName
	I0116 03:44:11.002923  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .DriverName
	I0116 03:44:11.003015  507889 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0116 03:44:11.003068  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHHostname
	I0116 03:44:11.003258  507889 ssh_runner.go:195] Run: cat /version.json
	I0116 03:44:11.003294  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHHostname
	I0116 03:44:11.006383  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:11.006699  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:11.006800  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ea:d5", ip: ""} in network mk-default-k8s-diff-port-434445: {Iface:virbr1 ExpiryTime:2024-01-16 04:44:01 +0000 UTC Type:0 Mac:52:54:00:78:ea:d5 Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:default-k8s-diff-port-434445 Clientid:01:52:54:00:78:ea:d5}
	I0116 03:44:11.006850  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined IP address 192.168.50.236 and MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:11.007123  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHPort
	I0116 03:44:11.007230  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ea:d5", ip: ""} in network mk-default-k8s-diff-port-434445: {Iface:virbr1 ExpiryTime:2024-01-16 04:44:01 +0000 UTC Type:0 Mac:52:54:00:78:ea:d5 Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:default-k8s-diff-port-434445 Clientid:01:52:54:00:78:ea:d5}
	I0116 03:44:11.007330  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined IP address 192.168.50.236 and MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:11.007353  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHKeyPath
	I0116 03:44:11.007378  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHPort
	I0116 03:44:11.007585  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHUsername
	I0116 03:44:11.007597  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHKeyPath
	I0116 03:44:11.007737  507889 sshutil.go:53] new ssh client: &{IP:192.168.50.236 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/default-k8s-diff-port-434445/id_rsa Username:docker}
	I0116 03:44:11.007795  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHUsername
	I0116 03:44:11.007980  507889 sshutil.go:53] new ssh client: &{IP:192.168.50.236 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/default-k8s-diff-port-434445/id_rsa Username:docker}
	I0116 03:44:11.139882  507889 ssh_runner.go:195] Run: systemctl --version
	I0116 03:44:11.147082  507889 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0116 03:44:11.317582  507889 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0116 03:44:11.324567  507889 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0116 03:44:11.324656  507889 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0116 03:44:11.348193  507889 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0116 03:44:11.348225  507889 start.go:475] detecting cgroup driver to use...
	I0116 03:44:11.348319  507889 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0116 03:44:11.367049  507889 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0116 03:44:11.386632  507889 docker.go:217] disabling cri-docker service (if available) ...
	I0116 03:44:11.386713  507889 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0116 03:44:11.409551  507889 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0116 03:44:11.424599  507889 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0116 03:44:11.586480  507889 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0116 03:44:11.733770  507889 docker.go:233] disabling docker service ...
	I0116 03:44:11.733855  507889 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0116 03:44:11.751184  507889 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0116 03:44:11.766970  507889 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0116 03:44:11.903645  507889 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0116 03:44:12.017100  507889 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0116 03:44:12.031725  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0116 03:44:12.052091  507889 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0116 03:44:12.052179  507889 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 03:44:12.063115  507889 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0116 03:44:12.063219  507889 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 03:44:12.073109  507889 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 03:44:12.083438  507889 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 03:44:12.095783  507889 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0116 03:44:12.107816  507889 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0116 03:44:12.117997  507889 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0116 03:44:12.118077  507889 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0116 03:44:12.132997  507889 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0116 03:44:12.145200  507889 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0116 03:44:12.266786  507889 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0116 03:44:12.460779  507889 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0116 03:44:12.460892  507889 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0116 03:44:12.469200  507889 start.go:543] Will wait 60s for crictl version
	I0116 03:44:12.469305  507889 ssh_runner.go:195] Run: which crictl
	I0116 03:44:12.473761  507889 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0116 03:44:12.536262  507889 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0116 03:44:12.536382  507889 ssh_runner.go:195] Run: crio --version
	I0116 03:44:12.593212  507889 ssh_runner.go:195] Run: crio --version
	I0116 03:44:12.650197  507889 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0116 03:44:09.577389  507510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:44:10.077774  507510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:44:10.578076  507510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:44:10.613091  507510 api_server.go:72] duration metric: took 2.036140794s to wait for apiserver process to appear ...
	I0116 03:44:10.613124  507510 api_server.go:88] waiting for apiserver healthz status ...
	I0116 03:44:10.613148  507510 api_server.go:253] Checking apiserver healthz at https://192.168.61.167:8443/healthz ...
	I0116 03:44:11.706731  507339 node_ready.go:58] node "no-preload-666547" has status "Ready":"False"
	I0116 03:44:13.713926  507339 node_ready.go:49] node "no-preload-666547" has status "Ready":"True"
	I0116 03:44:13.713958  507339 node_ready.go:38] duration metric: took 6.512893933s waiting for node "no-preload-666547" to be "Ready" ...
	I0116 03:44:13.713972  507339 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 03:44:13.727930  507339 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-lr95b" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:14.740352  507339 pod_ready.go:92] pod "coredns-76f75df574-lr95b" in "kube-system" namespace has status "Ready":"True"
	I0116 03:44:14.740392  507339 pod_ready.go:81] duration metric: took 1.012371035s waiting for pod "coredns-76f75df574-lr95b" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:14.740408  507339 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-666547" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:11.442223  507257 main.go:141] libmachine: (embed-certs-615980) Waiting to get IP...
	I0116 03:44:11.443346  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:11.443787  507257 main.go:141] libmachine: (embed-certs-615980) DBG | unable to find current IP address of domain embed-certs-615980 in network mk-embed-certs-615980
	I0116 03:44:11.443851  507257 main.go:141] libmachine: (embed-certs-615980) DBG | I0116 03:44:11.443761  508731 retry.go:31] will retry after 306.7144ms: waiting for machine to come up
	I0116 03:44:11.752574  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:11.753186  507257 main.go:141] libmachine: (embed-certs-615980) DBG | unable to find current IP address of domain embed-certs-615980 in network mk-embed-certs-615980
	I0116 03:44:11.753217  507257 main.go:141] libmachine: (embed-certs-615980) DBG | I0116 03:44:11.753126  508731 retry.go:31] will retry after 270.011585ms: waiting for machine to come up
	I0116 03:44:12.024942  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:12.025507  507257 main.go:141] libmachine: (embed-certs-615980) DBG | unable to find current IP address of domain embed-certs-615980 in network mk-embed-certs-615980
	I0116 03:44:12.025548  507257 main.go:141] libmachine: (embed-certs-615980) DBG | I0116 03:44:12.025433  508731 retry.go:31] will retry after 328.680313ms: waiting for machine to come up
	I0116 03:44:12.355989  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:12.356557  507257 main.go:141] libmachine: (embed-certs-615980) DBG | unable to find current IP address of domain embed-certs-615980 in network mk-embed-certs-615980
	I0116 03:44:12.356582  507257 main.go:141] libmachine: (embed-certs-615980) DBG | I0116 03:44:12.356493  508731 retry.go:31] will retry after 598.194786ms: waiting for machine to come up
	I0116 03:44:12.956170  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:12.956754  507257 main.go:141] libmachine: (embed-certs-615980) DBG | unable to find current IP address of domain embed-certs-615980 in network mk-embed-certs-615980
	I0116 03:44:12.956782  507257 main.go:141] libmachine: (embed-certs-615980) DBG | I0116 03:44:12.956673  508731 retry.go:31] will retry after 713.891978ms: waiting for machine to come up
	I0116 03:44:13.672728  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:13.673741  507257 main.go:141] libmachine: (embed-certs-615980) DBG | unable to find current IP address of domain embed-certs-615980 in network mk-embed-certs-615980
	I0116 03:44:13.673772  507257 main.go:141] libmachine: (embed-certs-615980) DBG | I0116 03:44:13.673636  508731 retry.go:31] will retry after 789.579297ms: waiting for machine to come up
	I0116 03:44:14.464913  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:14.465532  507257 main.go:141] libmachine: (embed-certs-615980) DBG | unable to find current IP address of domain embed-certs-615980 in network mk-embed-certs-615980
	I0116 03:44:14.465567  507257 main.go:141] libmachine: (embed-certs-615980) DBG | I0116 03:44:14.465446  508731 retry.go:31] will retry after 744.319122ms: waiting for machine to come up
	I0116 03:44:15.211748  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:15.212356  507257 main.go:141] libmachine: (embed-certs-615980) DBG | unable to find current IP address of domain embed-certs-615980 in network mk-embed-certs-615980
	I0116 03:44:15.212389  507257 main.go:141] libmachine: (embed-certs-615980) DBG | I0116 03:44:15.212282  508731 retry.go:31] will retry after 1.231175582s: waiting for machine to come up
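(The retry.go lines above poll libvirt for the machine's DHCP lease with a growing delay. Below is a minimal sketch of that retry-with-backoff pattern, not minikube's actual implementation; lookupIP is a hypothetical stand-in for the lease lookup.)

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP is a hypothetical placeholder for "ask libvirt for the domain's
// current DHCP lease"; it fails until the lease appears.
func lookupIP(mac string) (string, error) {
	return "", errors.New("unable to find current IP address")
}

// waitForIP retries lookupIP with a jittered, roughly doubling delay until it
// succeeds or the overall deadline passes.
func waitForIP(mac string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 250 * time.Millisecond
	for {
		ip, err := lookupIP(mac)
		if err == nil {
			return ip, nil
		}
		if time.Now().After(deadline) {
			return "", fmt.Errorf("waiting for machine to come up: %w", err)
		}
		// Add up to 50% jitter so concurrent waiters do not retry in lockstep.
		sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		if delay < 5*time.Second {
			delay *= 2
		}
	}
}

func main() {
	if ip, err := waitForIP("52:54:00:d4:a6:40", 3*time.Second); err != nil {
		fmt.Println(err)
	} else {
		fmt.Println("got IP", ip)
	}
}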
	I0116 03:44:12.652092  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetIP
	I0116 03:44:12.655815  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:12.656308  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ea:d5", ip: ""} in network mk-default-k8s-diff-port-434445: {Iface:virbr1 ExpiryTime:2024-01-16 04:44:01 +0000 UTC Type:0 Mac:52:54:00:78:ea:d5 Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:default-k8s-diff-port-434445 Clientid:01:52:54:00:78:ea:d5}
	I0116 03:44:12.656383  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined IP address 192.168.50.236 and MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:12.656790  507889 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0116 03:44:12.661880  507889 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 03:44:12.677695  507889 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0116 03:44:12.677794  507889 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 03:44:12.731676  507889 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0116 03:44:12.731794  507889 ssh_runner.go:195] Run: which lz4
	I0116 03:44:12.736614  507889 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0116 03:44:12.741554  507889 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0116 03:44:12.741595  507889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0116 03:44:15.047223  507889 crio.go:444] Took 2.310653 seconds to copy over tarball
	I0116 03:44:15.047386  507889 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
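(For reference, the preload step above copies a cached image tarball onto the VM and unpacks it into /var with lz4. A hedged sketch of running the same tar invocation locally with os/exec follows; the paths and the use of sudo are simply carried over from the log, not a recommendation.)

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Mirrors the logged command: extract the lz4-compressed preload tarball
	// into /var, preserving security.capability xattrs.
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Fprintln(os.Stderr, "extracting preload tarball:", err)
		os.Exit(1)
	}
}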
	I0116 03:44:15.614559  507510 api_server.go:269] stopped: https://192.168.61.167:8443/healthz: Get "https://192.168.61.167:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0116 03:44:15.614617  507510 api_server.go:253] Checking apiserver healthz at https://192.168.61.167:8443/healthz ...
	I0116 03:44:16.992197  507510 api_server.go:279] https://192.168.61.167:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0116 03:44:16.992236  507510 api_server.go:103] status: https://192.168.61.167:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0116 03:44:16.992255  507510 api_server.go:253] Checking apiserver healthz at https://192.168.61.167:8443/healthz ...
	I0116 03:44:17.098327  507510 api_server.go:279] https://192.168.61.167:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0116 03:44:17.098365  507510 api_server.go:103] status: https://192.168.61.167:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0116 03:44:17.113518  507510 api_server.go:253] Checking apiserver healthz at https://192.168.61.167:8443/healthz ...
	I0116 03:44:17.133276  507510 api_server.go:279] https://192.168.61.167:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	W0116 03:44:17.133308  507510 api_server.go:103] status: https://192.168.61.167:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	I0116 03:44:17.613843  507510 api_server.go:253] Checking apiserver healthz at https://192.168.61.167:8443/healthz ...
	I0116 03:44:17.621074  507510 api_server.go:279] https://192.168.61.167:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0116 03:44:17.621131  507510 api_server.go:103] status: https://192.168.61.167:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0116 03:44:18.113648  507510 api_server.go:253] Checking apiserver healthz at https://192.168.61.167:8443/healthz ...
	I0116 03:44:18.936452  507510 api_server.go:279] https://192.168.61.167:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0116 03:44:18.936492  507510 api_server.go:103] status: https://192.168.61.167:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0116 03:44:18.936521  507510 api_server.go:253] Checking apiserver healthz at https://192.168.61.167:8443/healthz ...
	I0116 03:44:19.466220  507510 api_server.go:279] https://192.168.61.167:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0116 03:44:19.466259  507510 api_server.go:103] status: https://192.168.61.167:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0116 03:44:19.466278  507510 api_server.go:253] Checking apiserver healthz at https://192.168.61.167:8443/healthz ...
	I0116 03:44:16.750170  507339 pod_ready.go:102] pod "etcd-no-preload-666547" in "kube-system" namespace has status "Ready":"False"
	I0116 03:44:19.438168  507339 pod_ready.go:92] pod "etcd-no-preload-666547" in "kube-system" namespace has status "Ready":"True"
	I0116 03:44:19.438207  507339 pod_ready.go:81] duration metric: took 4.697789344s waiting for pod "etcd-no-preload-666547" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:19.438224  507339 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-666547" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:19.445845  507339 pod_ready.go:92] pod "kube-apiserver-no-preload-666547" in "kube-system" namespace has status "Ready":"True"
	I0116 03:44:19.445875  507339 pod_ready.go:81] duration metric: took 7.641191ms waiting for pod "kube-apiserver-no-preload-666547" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:19.445889  507339 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-666547" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:19.452468  507339 pod_ready.go:92] pod "kube-controller-manager-no-preload-666547" in "kube-system" namespace has status "Ready":"True"
	I0116 03:44:19.452491  507339 pod_ready.go:81] duration metric: took 6.593311ms waiting for pod "kube-controller-manager-no-preload-666547" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:19.452500  507339 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-dcmrn" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:19.459542  507339 pod_ready.go:92] pod "kube-proxy-dcmrn" in "kube-system" namespace has status "Ready":"True"
	I0116 03:44:19.459576  507339 pod_ready.go:81] duration metric: took 7.067817ms waiting for pod "kube-proxy-dcmrn" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:19.459591  507339 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-666547" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:19.966827  507339 pod_ready.go:92] pod "kube-scheduler-no-preload-666547" in "kube-system" namespace has status "Ready":"True"
	I0116 03:44:19.966867  507339 pod_ready.go:81] duration metric: took 507.26823ms waiting for pod "kube-scheduler-no-preload-666547" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:19.966878  507339 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace to be "Ready" ...
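(The pod_ready.go waits above repeatedly fetch each system-critical pod and test its Ready condition. A minimal client-go sketch of that check is below; the kubeconfig path and pod name are assumptions for illustration, not minikube's own helper.)

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls the pod until its Ready condition is True or the timeout expires.
func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // keep retrying on transient errors
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitPodReady(context.Background(), cs, "kube-system", "etcd-no-preload-666547", 6*time.Minute); err != nil {
		fmt.Println("pod not ready:", err)
	}
}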
	I0116 03:44:19.946145  507510 api_server.go:279] https://192.168.61.167:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0116 03:44:19.946209  507510 api_server.go:103] status: https://192.168.61.167:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0116 03:44:19.946230  507510 api_server.go:253] Checking apiserver healthz at https://192.168.61.167:8443/healthz ...
	I0116 03:44:20.259035  507510 api_server.go:279] https://192.168.61.167:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0116 03:44:20.259091  507510 api_server.go:103] status: https://192.168.61.167:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0116 03:44:20.259142  507510 api_server.go:253] Checking apiserver healthz at https://192.168.61.167:8443/healthz ...
	I0116 03:44:20.330196  507510 api_server.go:279] https://192.168.61.167:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0116 03:44:20.330237  507510 api_server.go:103] status: https://192.168.61.167:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0116 03:44:20.613624  507510 api_server.go:253] Checking apiserver healthz at https://192.168.61.167:8443/healthz ...
	I0116 03:44:20.621956  507510 api_server.go:279] https://192.168.61.167:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0116 03:44:20.622008  507510 api_server.go:103] status: https://192.168.61.167:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0116 03:44:21.113536  507510 api_server.go:253] Checking apiserver healthz at https://192.168.61.167:8443/healthz ...
	I0116 03:44:21.125326  507510 api_server.go:279] https://192.168.61.167:8443/healthz returned 200:
	ok
	I0116 03:44:21.137555  507510 api_server.go:141] control plane version: v1.16.0
	I0116 03:44:21.137602  507510 api_server.go:131] duration metric: took 10.524468396s to wait for apiserver health ...
	I0116 03:44:21.137616  507510 cni.go:84] Creating CNI manager for ""
	I0116 03:44:21.137625  507510 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 03:44:21.139682  507510 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
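(The healthz loop above keeps GETting /healthz over TLS, anonymously, which is why the early responses are 403 and 500 until RBAC bootstrap finishes, and stops once the apiserver returns 200. A minimal sketch of such a poll follows; skipping certificate verification mirrors the anonymous probe and is an assumption of the sketch, not a recommendation.)

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// Anonymous probe: server certificate verification is skipped in this sketch.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	url := "https://192.168.61.167:8443/healthz"
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("%s returned 200:\n%s\n", url, body)
				return
			}
			fmt.Printf("%s returned %d:\n%s\n", url, resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("healthz never became ready")
}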
	I0116 03:44:16.445685  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:16.446216  507257 main.go:141] libmachine: (embed-certs-615980) DBG | unable to find current IP address of domain embed-certs-615980 in network mk-embed-certs-615980
	I0116 03:44:16.446246  507257 main.go:141] libmachine: (embed-certs-615980) DBG | I0116 03:44:16.446137  508731 retry.go:31] will retry after 1.400972s: waiting for machine to come up
	I0116 03:44:17.848447  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:17.848964  507257 main.go:141] libmachine: (embed-certs-615980) DBG | unable to find current IP address of domain embed-certs-615980 in network mk-embed-certs-615980
	I0116 03:44:17.848991  507257 main.go:141] libmachine: (embed-certs-615980) DBG | I0116 03:44:17.848916  508731 retry.go:31] will retry after 2.293115324s: waiting for machine to come up
	I0116 03:44:20.145242  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:20.145899  507257 main.go:141] libmachine: (embed-certs-615980) DBG | unable to find current IP address of domain embed-certs-615980 in network mk-embed-certs-615980
	I0116 03:44:20.145933  507257 main.go:141] libmachine: (embed-certs-615980) DBG | I0116 03:44:20.145842  508731 retry.go:31] will retry after 2.158183619s: waiting for machine to come up
	I0116 03:44:18.744370  507889 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.696918616s)
	I0116 03:44:18.744426  507889 crio.go:451] Took 3.697118 seconds to extract the tarball
	I0116 03:44:18.744440  507889 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0116 03:44:18.792685  507889 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 03:44:18.868262  507889 crio.go:496] all images are preloaded for cri-o runtime.
	I0116 03:44:18.868291  507889 cache_images.go:84] Images are preloaded, skipping loading
	I0116 03:44:18.868382  507889 ssh_runner.go:195] Run: crio config
	I0116 03:44:18.954026  507889 cni.go:84] Creating CNI manager for ""
	I0116 03:44:18.954060  507889 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 03:44:18.954085  507889 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0116 03:44:18.954138  507889 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.236 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-434445 NodeName:default-k8s-diff-port-434445 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.236"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.236 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cer
ts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0116 03:44:18.954362  507889 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.236
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-434445"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.236
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.236"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0116 03:44:18.954483  507889 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-434445 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.236
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-434445 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0116 03:44:18.954557  507889 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0116 03:44:18.966046  507889 binaries.go:44] Found k8s binaries, skipping transfer
	I0116 03:44:18.966143  507889 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0116 03:44:18.977441  507889 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (388 bytes)
	I0116 03:44:18.997304  507889 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0116 03:44:19.016597  507889 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2115 bytes)
	I0116 03:44:19.035635  507889 ssh_runner.go:195] Run: grep 192.168.50.236	control-plane.minikube.internal$ /etc/hosts
	I0116 03:44:19.039882  507889 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.236	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
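(The bash one-liner above rewrites /etc/hosts so exactly one control-plane.minikube.internal entry remains. A small Go sketch of the same idempotent update applied to a local file follows; the IP and hostname come from the log, the file path is a stand-in since writing the real /etc/hosts needs root.)

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry drops any existing lines for the host and appends one fresh
// "ip<TAB>host" entry, mirroring the grep -v / echo pipeline above.
func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil && !os.IsNotExist(err) {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if line == "" || strings.HasSuffix(line, "\t"+host) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := ensureHostsEntry("hosts.test", "192.168.50.236", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}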
	I0116 03:44:19.053342  507889 certs.go:56] Setting up /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/default-k8s-diff-port-434445 for IP: 192.168.50.236
	I0116 03:44:19.053383  507889 certs.go:190] acquiring lock for shared ca certs: {Name:mkab5aeea3e0ed69f3592dc8f418e07c6075b130 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:44:19.053580  507889 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17965-468241/.minikube/ca.key
	I0116 03:44:19.053655  507889 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17965-468241/.minikube/proxy-client-ca.key
	I0116 03:44:19.053773  507889 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/default-k8s-diff-port-434445/client.key
	I0116 03:44:19.053920  507889 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/default-k8s-diff-port-434445/apiserver.key.4e4dee8d
	I0116 03:44:19.053994  507889 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/default-k8s-diff-port-434445/proxy-client.key
	I0116 03:44:19.054154  507889 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/475478.pem (1338 bytes)
	W0116 03:44:19.054198  507889 certs.go:433] ignoring /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/475478_empty.pem, impossibly tiny 0 bytes
	I0116 03:44:19.054215  507889 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca-key.pem (1679 bytes)
	I0116 03:44:19.054249  507889 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem (1078 bytes)
	I0116 03:44:19.054286  507889 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/cert.pem (1123 bytes)
	I0116 03:44:19.054318  507889 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/key.pem (1679 bytes)
	I0116 03:44:19.054373  507889 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17965-468241/.minikube/files/etc/ssl/certs/4754782.pem (1708 bytes)
	I0116 03:44:19.055259  507889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/default-k8s-diff-port-434445/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0116 03:44:19.086636  507889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/default-k8s-diff-port-434445/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0116 03:44:19.117759  507889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/default-k8s-diff-port-434445/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0116 03:44:19.144530  507889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/default-k8s-diff-port-434445/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0116 03:44:19.170423  507889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0116 03:44:19.198224  507889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0116 03:44:19.223514  507889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0116 03:44:19.250858  507889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0116 03:44:19.276922  507889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/files/etc/ssl/certs/4754782.pem --> /usr/share/ca-certificates/4754782.pem (1708 bytes)
	I0116 03:44:19.302621  507889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0116 03:44:19.330021  507889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/certs/475478.pem --> /usr/share/ca-certificates/475478.pem (1338 bytes)
	I0116 03:44:19.358108  507889 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0116 03:44:19.379126  507889 ssh_runner.go:195] Run: openssl version
	I0116 03:44:19.386675  507889 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0116 03:44:19.398759  507889 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0116 03:44:19.404201  507889 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 16 02:35 /usr/share/ca-certificates/minikubeCA.pem
	I0116 03:44:19.404283  507889 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0116 03:44:19.411067  507889 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0116 03:44:19.422608  507889 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/475478.pem && ln -fs /usr/share/ca-certificates/475478.pem /etc/ssl/certs/475478.pem"
	I0116 03:44:19.434422  507889 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/475478.pem
	I0116 03:44:19.440018  507889 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 16 02:44 /usr/share/ca-certificates/475478.pem
	I0116 03:44:19.440103  507889 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/475478.pem
	I0116 03:44:19.446469  507889 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/475478.pem /etc/ssl/certs/51391683.0"
	I0116 03:44:19.460130  507889 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4754782.pem && ln -fs /usr/share/ca-certificates/4754782.pem /etc/ssl/certs/4754782.pem"
	I0116 03:44:19.473886  507889 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4754782.pem
	I0116 03:44:19.478781  507889 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 16 02:44 /usr/share/ca-certificates/4754782.pem
	I0116 03:44:19.478858  507889 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4754782.pem
	I0116 03:44:19.484826  507889 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4754782.pem /etc/ssl/certs/3ec20f2e.0"
	I0116 03:44:19.495710  507889 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0116 03:44:19.500842  507889 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0116 03:44:19.507646  507889 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0116 03:44:19.515247  507889 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0116 03:44:19.523964  507889 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0116 03:44:19.532379  507889 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0116 03:44:19.540067  507889 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
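(Each `openssl x509 -noout -checkend 86400` call above asks whether a certificate expires within the next 24 hours. The equivalent check with Go's crypto/x509 is sketched below; the certificate filename is only an assumed example.)

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// which is the condition `openssl x509 -checkend` tests.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("apiserver-etcd-client.crt", 86400*time.Second)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}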
	I0116 03:44:19.548614  507889 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-434445 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.28.4 ClusterName:default-k8s-diff-port-434445 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.50.236 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false Extr
aDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 03:44:19.548812  507889 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0116 03:44:19.548900  507889 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0116 03:44:19.595803  507889 cri.go:89] found id: ""
	I0116 03:44:19.595910  507889 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0116 03:44:19.610615  507889 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0116 03:44:19.610647  507889 kubeadm.go:636] restartCluster start
	I0116 03:44:19.610726  507889 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0116 03:44:19.624175  507889 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:19.625683  507889 kubeconfig.go:92] found "default-k8s-diff-port-434445" server: "https://192.168.50.236:8444"
	I0116 03:44:19.628685  507889 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0116 03:44:19.640309  507889 api_server.go:166] Checking apiserver status ...
	I0116 03:44:19.640390  507889 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:19.653938  507889 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:20.141193  507889 api_server.go:166] Checking apiserver status ...
	I0116 03:44:20.141285  507889 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:20.154331  507889 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:20.640562  507889 api_server.go:166] Checking apiserver status ...
	I0116 03:44:20.640691  507889 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:20.657774  507889 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:21.141268  507889 api_server.go:166] Checking apiserver status ...
	I0116 03:44:21.141371  507889 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:21.158792  507889 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:21.141315  507510 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0116 03:44:21.168450  507510 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0116 03:44:21.206907  507510 system_pods.go:43] waiting for kube-system pods to appear ...
	I0116 03:44:21.222998  507510 system_pods.go:59] 7 kube-system pods found
	I0116 03:44:21.223072  507510 system_pods.go:61] "coredns-5644d7b6d9-7q4wc" [003ba660-e3c5-4a98-be67-75e43dc32b37] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0116 03:44:21.223084  507510 system_pods.go:61] "etcd-old-k8s-version-696770" [b029f446-15b1-4720-af3a-b651b778fc0d] Running
	I0116 03:44:21.223094  507510 system_pods.go:61] "kube-apiserver-old-k8s-version-696770" [a9597e33-db8c-48e5-b119-d6d97d8d8e3f] Running
	I0116 03:44:21.223114  507510 system_pods.go:61] "kube-controller-manager-old-k8s-version-696770" [901fd518-04a1-4de0-baa2-08c7d57a587d] Running
	I0116 03:44:21.223123  507510 system_pods.go:61] "kube-proxy-9pfdj" [ac00ed93-abe8-4f53-8e63-fa63589fbf5c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0116 03:44:21.223134  507510 system_pods.go:61] "kube-scheduler-old-k8s-version-696770" [a8d74e76-6c22-4d82-b954-4025dff18279] Running
	I0116 03:44:21.223146  507510 system_pods.go:61] "storage-provisioner" [b04dacf9-8137-4f22-ae36-147d08fd9b60] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0116 03:44:21.223158  507510 system_pods.go:74] duration metric: took 16.220665ms to wait for pod list to return data ...
	I0116 03:44:21.223173  507510 node_conditions.go:102] verifying NodePressure condition ...
	I0116 03:44:21.228670  507510 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 03:44:21.228715  507510 node_conditions.go:123] node cpu capacity is 2
	I0116 03:44:21.228734  507510 node_conditions.go:105] duration metric: took 5.552228ms to run NodePressure ...
	I0116 03:44:21.228760  507510 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:44:21.576565  507510 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0116 03:44:21.581017  507510 retry.go:31] will retry after 323.975879ms: kubelet not initialised
	I0116 03:44:21.914790  507510 retry.go:31] will retry after 258.393503ms: kubelet not initialised
	I0116 03:44:22.180592  507510 retry.go:31] will retry after 582.791922ms: kubelet not initialised
	I0116 03:44:22.769880  507510 retry.go:31] will retry after 961.779974ms: kubelet not initialised
	I0116 03:44:23.739015  507510 retry.go:31] will retry after 686.353156ms: kubelet not initialised
	I0116 03:44:24.431951  507510 retry.go:31] will retry after 2.073440094s: kubelet not initialised
	I0116 03:44:21.976301  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:44:23.977710  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:44:22.305212  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:22.305701  507257 main.go:141] libmachine: (embed-certs-615980) DBG | unable to find current IP address of domain embed-certs-615980 in network mk-embed-certs-615980
	I0116 03:44:22.305732  507257 main.go:141] libmachine: (embed-certs-615980) DBG | I0116 03:44:22.305662  508731 retry.go:31] will retry after 3.080436267s: waiting for machine to come up
	I0116 03:44:25.389414  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:25.389850  507257 main.go:141] libmachine: (embed-certs-615980) DBG | unable to find current IP address of domain embed-certs-615980 in network mk-embed-certs-615980
	I0116 03:44:25.389875  507257 main.go:141] libmachine: (embed-certs-615980) DBG | I0116 03:44:25.389828  508731 retry.go:31] will retry after 2.730339967s: waiting for machine to come up
	I0116 03:44:21.640823  507889 api_server.go:166] Checking apiserver status ...
	I0116 03:44:21.641083  507889 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:21.656391  507889 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:22.141134  507889 api_server.go:166] Checking apiserver status ...
	I0116 03:44:22.141242  507889 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:22.157848  507889 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:22.641247  507889 api_server.go:166] Checking apiserver status ...
	I0116 03:44:22.641371  507889 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:22.654425  507889 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:23.140719  507889 api_server.go:166] Checking apiserver status ...
	I0116 03:44:23.140827  507889 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:23.153823  507889 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:23.641193  507889 api_server.go:166] Checking apiserver status ...
	I0116 03:44:23.641298  507889 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:23.654061  507889 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:24.141196  507889 api_server.go:166] Checking apiserver status ...
	I0116 03:44:24.141290  507889 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:24.161415  507889 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:24.640416  507889 api_server.go:166] Checking apiserver status ...
	I0116 03:44:24.640514  507889 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:24.670258  507889 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:25.140571  507889 api_server.go:166] Checking apiserver status ...
	I0116 03:44:25.140673  507889 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:25.157823  507889 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:25.641188  507889 api_server.go:166] Checking apiserver status ...
	I0116 03:44:25.641284  507889 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:25.655917  507889 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:26.141241  507889 api_server.go:166] Checking apiserver status ...
	I0116 03:44:26.141357  507889 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:26.157447  507889 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:26.511961  507510 retry.go:31] will retry after 4.006598367s: kubelet not initialised
	I0116 03:44:26.473653  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:44:28.474914  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:44:28.122340  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:28.122704  507257 main.go:141] libmachine: (embed-certs-615980) DBG | unable to find current IP address of domain embed-certs-615980 in network mk-embed-certs-615980
	I0116 03:44:28.122735  507257 main.go:141] libmachine: (embed-certs-615980) DBG | I0116 03:44:28.122676  508731 retry.go:31] will retry after 4.170800657s: waiting for machine to come up
	I0116 03:44:26.641408  507889 api_server.go:166] Checking apiserver status ...
	I0116 03:44:26.641510  507889 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:26.654505  507889 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:27.141033  507889 api_server.go:166] Checking apiserver status ...
	I0116 03:44:27.141129  507889 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:27.154208  507889 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:27.640701  507889 api_server.go:166] Checking apiserver status ...
	I0116 03:44:27.640785  507889 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:27.653964  507889 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:28.141330  507889 api_server.go:166] Checking apiserver status ...
	I0116 03:44:28.141406  507889 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:28.153419  507889 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:28.640986  507889 api_server.go:166] Checking apiserver status ...
	I0116 03:44:28.641076  507889 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:28.654357  507889 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:29.141250  507889 api_server.go:166] Checking apiserver status ...
	I0116 03:44:29.141335  507889 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:29.154899  507889 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:29.640619  507889 api_server.go:166] Checking apiserver status ...
	I0116 03:44:29.640717  507889 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:29.654653  507889 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:29.654692  507889 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0116 03:44:29.654701  507889 kubeadm.go:1135] stopping kube-system containers ...
	I0116 03:44:29.654713  507889 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0116 03:44:29.654769  507889 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0116 03:44:29.697617  507889 cri.go:89] found id: ""
	I0116 03:44:29.697719  507889 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0116 03:44:29.719069  507889 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0116 03:44:29.735791  507889 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0116 03:44:29.735872  507889 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0116 03:44:29.748788  507889 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0116 03:44:29.748823  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:44:29.874894  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:44:30.787232  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:44:31.009234  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:44:31.136220  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:44:31.215330  507889 api_server.go:52] waiting for apiserver process to appear ...
	I0116 03:44:31.215416  507889 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
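
The lines above show the post-init wait: the runner keeps probing for a kube-apiserver process with pgrep until it appears. A minimal standalone sketch of that polling pattern, run locally rather than over the test's SSH runner (the 500ms interval and 2-minute deadline are assumptions for illustration, not minikube's exact values):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForProcess polls `pgrep -xnf <pattern>` until it reports a PID or the
// deadline expires, mirroring the retry loop in the log above.
func waitForProcess(pattern string, interval, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for {
		out, err := exec.Command("sudo", "pgrep", "-xnf", pattern).Output()
		if err == nil && len(out) > 0 {
			return string(out), nil // pgrep exits 0 and prints the PID once the process exists
		}
		if time.Now().After(deadline) {
			return "", fmt.Errorf("timed out waiting for process matching %q", pattern)
		}
		time.Sleep(interval)
	}
}

func main() {
	pid, err := waitForProcess("kube-apiserver.*minikube.*", 500*time.Millisecond, 2*time.Minute)
	if err != nil {
		fmt.Println("apiserver process did not appear:", err)
		return
	}
	fmt.Println("apiserver pid:", pid)
}
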
	I0116 03:44:30.526372  507510 retry.go:31] will retry after 4.363756335s: kubelet not initialised
	I0116 03:44:32.295936  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:32.296442  507257 main.go:141] libmachine: (embed-certs-615980) Found IP for machine: 192.168.72.159
	I0116 03:44:32.296483  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has current primary IP address 192.168.72.159 and MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:32.296499  507257 main.go:141] libmachine: (embed-certs-615980) Reserving static IP address...
	I0116 03:44:32.297078  507257 main.go:141] libmachine: (embed-certs-615980) DBG | found host DHCP lease matching {name: "embed-certs-615980", mac: "52:54:00:d4:a6:40", ip: "192.168.72.159"} in network mk-embed-certs-615980: {Iface:virbr4 ExpiryTime:2024-01-16 04:44:24 +0000 UTC Type:0 Mac:52:54:00:d4:a6:40 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-615980 Clientid:01:52:54:00:d4:a6:40}
	I0116 03:44:32.297121  507257 main.go:141] libmachine: (embed-certs-615980) Reserved static IP address: 192.168.72.159
	I0116 03:44:32.297140  507257 main.go:141] libmachine: (embed-certs-615980) DBG | skip adding static IP to network mk-embed-certs-615980 - found existing host DHCP lease matching {name: "embed-certs-615980", mac: "52:54:00:d4:a6:40", ip: "192.168.72.159"}
	I0116 03:44:32.297160  507257 main.go:141] libmachine: (embed-certs-615980) DBG | Getting to WaitForSSH function...
	I0116 03:44:32.297179  507257 main.go:141] libmachine: (embed-certs-615980) Waiting for SSH to be available...
	I0116 03:44:32.299440  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:32.299839  507257 main.go:141] libmachine: (embed-certs-615980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:a6:40", ip: ""} in network mk-embed-certs-615980: {Iface:virbr4 ExpiryTime:2024-01-16 04:44:24 +0000 UTC Type:0 Mac:52:54:00:d4:a6:40 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-615980 Clientid:01:52:54:00:d4:a6:40}
	I0116 03:44:32.299870  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined IP address 192.168.72.159 and MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:32.300064  507257 main.go:141] libmachine: (embed-certs-615980) DBG | Using SSH client type: external
	I0116 03:44:32.300098  507257 main.go:141] libmachine: (embed-certs-615980) DBG | Using SSH private key: /home/jenkins/minikube-integration/17965-468241/.minikube/machines/embed-certs-615980/id_rsa (-rw-------)
	I0116 03:44:32.300133  507257 main.go:141] libmachine: (embed-certs-615980) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.159 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17965-468241/.minikube/machines/embed-certs-615980/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0116 03:44:32.300153  507257 main.go:141] libmachine: (embed-certs-615980) DBG | About to run SSH command:
	I0116 03:44:32.300172  507257 main.go:141] libmachine: (embed-certs-615980) DBG | exit 0
	I0116 03:44:32.396718  507257 main.go:141] libmachine: (embed-certs-615980) DBG | SSH cmd err, output: <nil>: 
	I0116 03:44:32.397111  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetConfigRaw
	I0116 03:44:32.397901  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetIP
	I0116 03:44:32.400997  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:32.401502  507257 main.go:141] libmachine: (embed-certs-615980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:a6:40", ip: ""} in network mk-embed-certs-615980: {Iface:virbr4 ExpiryTime:2024-01-16 04:44:24 +0000 UTC Type:0 Mac:52:54:00:d4:a6:40 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-615980 Clientid:01:52:54:00:d4:a6:40}
	I0116 03:44:32.401540  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined IP address 192.168.72.159 and MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:32.402036  507257 profile.go:148] Saving config to /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/embed-certs-615980/config.json ...
	I0116 03:44:32.402259  507257 machine.go:88] provisioning docker machine ...
	I0116 03:44:32.402281  507257 main.go:141] libmachine: (embed-certs-615980) Calling .DriverName
	I0116 03:44:32.402539  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetMachineName
	I0116 03:44:32.402759  507257 buildroot.go:166] provisioning hostname "embed-certs-615980"
	I0116 03:44:32.402786  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetMachineName
	I0116 03:44:32.402966  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHHostname
	I0116 03:44:32.405935  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:32.406344  507257 main.go:141] libmachine: (embed-certs-615980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:a6:40", ip: ""} in network mk-embed-certs-615980: {Iface:virbr4 ExpiryTime:2024-01-16 04:44:24 +0000 UTC Type:0 Mac:52:54:00:d4:a6:40 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-615980 Clientid:01:52:54:00:d4:a6:40}
	I0116 03:44:32.406384  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined IP address 192.168.72.159 and MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:32.406585  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHPort
	I0116 03:44:32.406821  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHKeyPath
	I0116 03:44:32.407054  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHKeyPath
	I0116 03:44:32.407219  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHUsername
	I0116 03:44:32.407399  507257 main.go:141] libmachine: Using SSH client type: native
	I0116 03:44:32.407754  507257 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.72.159 22 <nil> <nil>}
	I0116 03:44:32.407768  507257 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-615980 && echo "embed-certs-615980" | sudo tee /etc/hostname
	I0116 03:44:32.561584  507257 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-615980
	
	I0116 03:44:32.561618  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHHostname
	I0116 03:44:32.564566  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:32.565004  507257 main.go:141] libmachine: (embed-certs-615980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:a6:40", ip: ""} in network mk-embed-certs-615980: {Iface:virbr4 ExpiryTime:2024-01-16 04:44:24 +0000 UTC Type:0 Mac:52:54:00:d4:a6:40 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-615980 Clientid:01:52:54:00:d4:a6:40}
	I0116 03:44:32.565033  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined IP address 192.168.72.159 and MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:32.565232  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHPort
	I0116 03:44:32.565481  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHKeyPath
	I0116 03:44:32.565672  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHKeyPath
	I0116 03:44:32.565843  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHUsername
	I0116 03:44:32.566045  507257 main.go:141] libmachine: Using SSH client type: native
	I0116 03:44:32.566521  507257 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.72.159 22 <nil> <nil>}
	I0116 03:44:32.566549  507257 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-615980' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-615980/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-615980' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0116 03:44:32.718945  507257 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0116 03:44:32.719005  507257 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17965-468241/.minikube CaCertPath:/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17965-468241/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17965-468241/.minikube}
	I0116 03:44:32.719037  507257 buildroot.go:174] setting up certificates
	I0116 03:44:32.719051  507257 provision.go:83] configureAuth start
	I0116 03:44:32.719081  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetMachineName
	I0116 03:44:32.719397  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetIP
	I0116 03:44:32.722474  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:32.722938  507257 main.go:141] libmachine: (embed-certs-615980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:a6:40", ip: ""} in network mk-embed-certs-615980: {Iface:virbr4 ExpiryTime:2024-01-16 04:44:24 +0000 UTC Type:0 Mac:52:54:00:d4:a6:40 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-615980 Clientid:01:52:54:00:d4:a6:40}
	I0116 03:44:32.722972  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined IP address 192.168.72.159 and MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:32.723136  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHHostname
	I0116 03:44:32.725821  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:32.726246  507257 main.go:141] libmachine: (embed-certs-615980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:a6:40", ip: ""} in network mk-embed-certs-615980: {Iface:virbr4 ExpiryTime:2024-01-16 04:44:24 +0000 UTC Type:0 Mac:52:54:00:d4:a6:40 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-615980 Clientid:01:52:54:00:d4:a6:40}
	I0116 03:44:32.726277  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined IP address 192.168.72.159 and MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:32.726448  507257 provision.go:138] copyHostCerts
	I0116 03:44:32.726535  507257 exec_runner.go:144] found /home/jenkins/minikube-integration/17965-468241/.minikube/ca.pem, removing ...
	I0116 03:44:32.726622  507257 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17965-468241/.minikube/ca.pem
	I0116 03:44:32.726769  507257 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17965-468241/.minikube/ca.pem (1078 bytes)
	I0116 03:44:32.726971  507257 exec_runner.go:144] found /home/jenkins/minikube-integration/17965-468241/.minikube/cert.pem, removing ...
	I0116 03:44:32.726983  507257 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17965-468241/.minikube/cert.pem
	I0116 03:44:32.727015  507257 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17965-468241/.minikube/cert.pem (1123 bytes)
	I0116 03:44:32.727099  507257 exec_runner.go:144] found /home/jenkins/minikube-integration/17965-468241/.minikube/key.pem, removing ...
	I0116 03:44:32.727116  507257 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17965-468241/.minikube/key.pem
	I0116 03:44:32.727144  507257 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17965-468241/.minikube/key.pem (1679 bytes)
	I0116 03:44:32.727212  507257 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17965-468241/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca-key.pem org=jenkins.embed-certs-615980 san=[192.168.72.159 192.168.72.159 localhost 127.0.0.1 minikube embed-certs-615980]
	I0116 03:44:32.921694  507257 provision.go:172] copyRemoteCerts
	I0116 03:44:32.921764  507257 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0116 03:44:32.921798  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHHostname
	I0116 03:44:32.924951  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:32.925329  507257 main.go:141] libmachine: (embed-certs-615980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:a6:40", ip: ""} in network mk-embed-certs-615980: {Iface:virbr4 ExpiryTime:2024-01-16 04:44:24 +0000 UTC Type:0 Mac:52:54:00:d4:a6:40 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-615980 Clientid:01:52:54:00:d4:a6:40}
	I0116 03:44:32.925362  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined IP address 192.168.72.159 and MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:32.925534  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHPort
	I0116 03:44:32.925855  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHKeyPath
	I0116 03:44:32.926135  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHUsername
	I0116 03:44:32.926390  507257 sshutil.go:53] new ssh client: &{IP:192.168.72.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/embed-certs-615980/id_rsa Username:docker}
	I0116 03:44:33.025856  507257 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0116 03:44:33.055403  507257 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0116 03:44:33.087908  507257 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0116 03:44:33.116847  507257 provision.go:86] duration metric: configureAuth took 397.777297ms
	I0116 03:44:33.116886  507257 buildroot.go:189] setting minikube options for container-runtime
	I0116 03:44:33.117136  507257 config.go:182] Loaded profile config "embed-certs-615980": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 03:44:33.117267  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHHostname
	I0116 03:44:33.120452  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:33.120915  507257 main.go:141] libmachine: (embed-certs-615980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:a6:40", ip: ""} in network mk-embed-certs-615980: {Iface:virbr4 ExpiryTime:2024-01-16 04:44:24 +0000 UTC Type:0 Mac:52:54:00:d4:a6:40 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-615980 Clientid:01:52:54:00:d4:a6:40}
	I0116 03:44:33.120949  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined IP address 192.168.72.159 and MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:33.121189  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHPort
	I0116 03:44:33.121442  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHKeyPath
	I0116 03:44:33.121636  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHKeyPath
	I0116 03:44:33.121778  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHUsername
	I0116 03:44:33.121966  507257 main.go:141] libmachine: Using SSH client type: native
	I0116 03:44:33.122333  507257 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.72.159 22 <nil> <nil>}
	I0116 03:44:33.122359  507257 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0116 03:44:33.486009  507257 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0116 03:44:33.486147  507257 machine.go:91] provisioned docker machine in 1.083869863s
	I0116 03:44:33.486202  507257 start.go:300] post-start starting for "embed-certs-615980" (driver="kvm2")
	I0116 03:44:33.486239  507257 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0116 03:44:33.486282  507257 main.go:141] libmachine: (embed-certs-615980) Calling .DriverName
	I0116 03:44:33.486719  507257 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0116 03:44:33.486755  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHHostname
	I0116 03:44:33.490226  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:33.490676  507257 main.go:141] libmachine: (embed-certs-615980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:a6:40", ip: ""} in network mk-embed-certs-615980: {Iface:virbr4 ExpiryTime:2024-01-16 04:44:24 +0000 UTC Type:0 Mac:52:54:00:d4:a6:40 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-615980 Clientid:01:52:54:00:d4:a6:40}
	I0116 03:44:33.490743  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined IP address 192.168.72.159 and MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:33.490863  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHPort
	I0116 03:44:33.491117  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHKeyPath
	I0116 03:44:33.491299  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHUsername
	I0116 03:44:33.491478  507257 sshutil.go:53] new ssh client: &{IP:192.168.72.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/embed-certs-615980/id_rsa Username:docker}
	I0116 03:44:33.590039  507257 ssh_runner.go:195] Run: cat /etc/os-release
	I0116 03:44:33.596095  507257 info.go:137] Remote host: Buildroot 2021.02.12
	I0116 03:44:33.596124  507257 filesync.go:126] Scanning /home/jenkins/minikube-integration/17965-468241/.minikube/addons for local assets ...
	I0116 03:44:33.596206  507257 filesync.go:126] Scanning /home/jenkins/minikube-integration/17965-468241/.minikube/files for local assets ...
	I0116 03:44:33.596295  507257 filesync.go:149] local asset: /home/jenkins/minikube-integration/17965-468241/.minikube/files/etc/ssl/certs/4754782.pem -> 4754782.pem in /etc/ssl/certs
	I0116 03:44:33.596437  507257 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0116 03:44:33.609260  507257 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/files/etc/ssl/certs/4754782.pem --> /etc/ssl/certs/4754782.pem (1708 bytes)
	I0116 03:44:33.642578  507257 start.go:303] post-start completed in 156.336318ms
	I0116 03:44:33.642651  507257 fix.go:56] fixHost completed within 22.644969219s
	I0116 03:44:33.642703  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHHostname
	I0116 03:44:33.645616  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:33.645988  507257 main.go:141] libmachine: (embed-certs-615980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:a6:40", ip: ""} in network mk-embed-certs-615980: {Iface:virbr4 ExpiryTime:2024-01-16 04:44:24 +0000 UTC Type:0 Mac:52:54:00:d4:a6:40 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-615980 Clientid:01:52:54:00:d4:a6:40}
	I0116 03:44:33.646017  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined IP address 192.168.72.159 and MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:33.646277  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHPort
	I0116 03:44:33.646514  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHKeyPath
	I0116 03:44:33.646720  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHKeyPath
	I0116 03:44:33.646910  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHUsername
	I0116 03:44:33.647179  507257 main.go:141] libmachine: Using SSH client type: native
	I0116 03:44:33.647655  507257 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.72.159 22 <nil> <nil>}
	I0116 03:44:33.647682  507257 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0116 03:44:33.781805  507257 main.go:141] libmachine: SSH cmd err, output: <nil>: 1705376673.706960834
	
	I0116 03:44:33.781839  507257 fix.go:206] guest clock: 1705376673.706960834
	I0116 03:44:33.781850  507257 fix.go:219] Guest: 2024-01-16 03:44:33.706960834 +0000 UTC Remote: 2024-01-16 03:44:33.642657737 +0000 UTC m=+367.429386706 (delta=64.303097ms)
	I0116 03:44:33.781879  507257 fix.go:190] guest clock delta is within tolerance: 64.303097ms
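
The fix.go lines above read the guest's clock with `date +%s.%N` and accept the machine when the delta against the local clock stays inside a tolerance. A rough local sketch of that comparison (the 1-second tolerance and the use of a local `date` call instead of the SSH session are assumptions so the example is self-contained):

package main

import (
	"fmt"
	"os/exec"
	"strconv"
	"strings"
	"time"
)

// clockDelta runs `date +%s.%N`, parses the result as seconds, and returns the
// absolute difference from the local clock, like the guest-clock check above.
func clockDelta() (time.Duration, error) {
	out, err := exec.Command("date", "+%s.%N").Output()
	if err != nil {
		return 0, err
	}
	secs, err := strconv.ParseFloat(strings.TrimSpace(string(out)), 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	return delta, nil
}

func main() {
	const tolerance = time.Second // assumed tolerance, not minikube's exact value
	delta, err := clockDelta()
	if err != nil {
		fmt.Println("clock check failed:", err)
		return
	}
	if delta <= tolerance {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance\n", delta)
	}
}
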
	I0116 03:44:33.781890  507257 start.go:83] releasing machines lock for "embed-certs-615980", held for 22.784266536s
	I0116 03:44:33.781917  507257 main.go:141] libmachine: (embed-certs-615980) Calling .DriverName
	I0116 03:44:33.782225  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetIP
	I0116 03:44:33.785113  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:33.785495  507257 main.go:141] libmachine: (embed-certs-615980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:a6:40", ip: ""} in network mk-embed-certs-615980: {Iface:virbr4 ExpiryTime:2024-01-16 04:44:24 +0000 UTC Type:0 Mac:52:54:00:d4:a6:40 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-615980 Clientid:01:52:54:00:d4:a6:40}
	I0116 03:44:33.785530  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined IP address 192.168.72.159 and MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:33.785718  507257 main.go:141] libmachine: (embed-certs-615980) Calling .DriverName
	I0116 03:44:33.786427  507257 main.go:141] libmachine: (embed-certs-615980) Calling .DriverName
	I0116 03:44:33.786655  507257 main.go:141] libmachine: (embed-certs-615980) Calling .DriverName
	I0116 03:44:33.786751  507257 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0116 03:44:33.786799  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHHostname
	I0116 03:44:33.786938  507257 ssh_runner.go:195] Run: cat /version.json
	I0116 03:44:33.786967  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHHostname
	I0116 03:44:33.790084  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:33.790288  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:33.790454  507257 main.go:141] libmachine: (embed-certs-615980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:a6:40", ip: ""} in network mk-embed-certs-615980: {Iface:virbr4 ExpiryTime:2024-01-16 04:44:24 +0000 UTC Type:0 Mac:52:54:00:d4:a6:40 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-615980 Clientid:01:52:54:00:d4:a6:40}
	I0116 03:44:33.790485  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined IP address 192.168.72.159 and MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:33.790655  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHPort
	I0116 03:44:33.790787  507257 main.go:141] libmachine: (embed-certs-615980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:a6:40", ip: ""} in network mk-embed-certs-615980: {Iface:virbr4 ExpiryTime:2024-01-16 04:44:24 +0000 UTC Type:0 Mac:52:54:00:d4:a6:40 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-615980 Clientid:01:52:54:00:d4:a6:40}
	I0116 03:44:33.790831  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined IP address 192.168.72.159 and MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:33.790899  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHKeyPath
	I0116 03:44:33.791007  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHPort
	I0116 03:44:33.791091  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHUsername
	I0116 03:44:33.791193  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHKeyPath
	I0116 03:44:33.791269  507257 sshutil.go:53] new ssh client: &{IP:192.168.72.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/embed-certs-615980/id_rsa Username:docker}
	I0116 03:44:33.791363  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHUsername
	I0116 03:44:33.791515  507257 sshutil.go:53] new ssh client: &{IP:192.168.72.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/embed-certs-615980/id_rsa Username:docker}
	I0116 03:44:33.907036  507257 ssh_runner.go:195] Run: systemctl --version
	I0116 03:44:33.913776  507257 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0116 03:44:34.062888  507257 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0116 03:44:34.070435  507257 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0116 03:44:34.070539  507257 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0116 03:44:34.091957  507257 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0116 03:44:34.091993  507257 start.go:475] detecting cgroup driver to use...
	I0116 03:44:34.092099  507257 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0116 03:44:34.108007  507257 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0116 03:44:34.123223  507257 docker.go:217] disabling cri-docker service (if available) ...
	I0116 03:44:34.123314  507257 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0116 03:44:34.141242  507257 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0116 03:44:34.157053  507257 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0116 03:44:34.274186  507257 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0116 03:44:34.427694  507257 docker.go:233] disabling docker service ...
	I0116 03:44:34.427785  507257 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0116 03:44:34.442789  507257 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0116 03:44:34.459761  507257 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0116 03:44:34.592453  507257 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0116 03:44:34.715991  507257 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0116 03:44:34.732175  507257 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0116 03:44:34.751885  507257 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0116 03:44:34.751989  507257 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 03:44:34.763769  507257 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0116 03:44:34.763853  507257 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 03:44:34.774444  507257 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 03:44:34.784975  507257 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 03:44:34.797634  507257 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
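
The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf to pin the pause image and switch the cgroup manager before cri-o is restarted. A small sketch of the same rewrites done in Go over an in-memory stand-in for that file (the starting values in the constant are assumptions):

package main

import (
	"fmt"
	"regexp"
)

// Stand-in for /etc/crio/crio.conf.d/02-crio.conf; only the two keys the sed
// commands above touch are shown.
const crioConf = `pause_image = "registry.k8s.io/pause:3.6"
cgroup_manager = "systemd"
`

func main() {
	// Reproduce the effect of the sed edits in the log: point cri-o at the
	// desired pause image and switch the cgroup manager to cgroupfs.
	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(crioConf, `pause_image = "registry.k8s.io/pause:3.9"`)
	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(out, `cgroup_manager = "cgroupfs"`)
	fmt.Print(out)
}
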
	I0116 03:44:34.810962  507257 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0116 03:44:34.822224  507257 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0116 03:44:34.822314  507257 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0116 03:44:34.840500  507257 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
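
The failed sysctl probe, the modprobe, and the echo above make sure bridged traffic hits iptables and IPv4 forwarding is on before the runtime restart. A compressed sketch of that preparation (paths and the modprobe fallback are taken from the log; running as root is assumed):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// If the bridge netfilter sysctl is missing, load br_netfilter first,
	// as the log above does after the failed `sysctl` probe.
	if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
			fmt.Printf("modprobe br_netfilter failed: %v (%s)\n", err, out)
			return
		}
	}
	// Equivalent of `echo 1 > /proc/sys/net/ipv4/ip_forward`.
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644); err != nil {
		fmt.Println("enabling ip_forward failed:", err)
		return
	}
	fmt.Println("bridge netfilter and ip_forward prepared")
}
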
	I0116 03:44:34.852285  507257 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0116 03:44:34.970828  507257 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0116 03:44:35.163097  507257 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0116 03:44:35.163242  507257 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0116 03:44:35.169041  507257 start.go:543] Will wait 60s for crictl version
	I0116 03:44:35.169150  507257 ssh_runner.go:195] Run: which crictl
	I0116 03:44:35.173367  507257 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0116 03:44:35.224951  507257 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0116 03:44:35.225043  507257 ssh_runner.go:195] Run: crio --version
	I0116 03:44:35.275230  507257 ssh_runner.go:195] Run: crio --version
	I0116 03:44:35.329852  507257 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0116 03:44:30.981714  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:44:33.476735  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:44:35.480715  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:44:35.331327  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetIP
	I0116 03:44:35.334148  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:35.334618  507257 main.go:141] libmachine: (embed-certs-615980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:a6:40", ip: ""} in network mk-embed-certs-615980: {Iface:virbr4 ExpiryTime:2024-01-16 04:44:24 +0000 UTC Type:0 Mac:52:54:00:d4:a6:40 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-615980 Clientid:01:52:54:00:d4:a6:40}
	I0116 03:44:35.334674  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined IP address 192.168.72.159 and MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:35.335166  507257 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0116 03:44:35.341389  507257 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 03:44:35.358757  507257 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0116 03:44:35.358866  507257 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 03:44:35.407869  507257 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0116 03:44:35.407983  507257 ssh_runner.go:195] Run: which lz4
	I0116 03:44:35.412533  507257 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0116 03:44:35.417266  507257 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0116 03:44:35.417303  507257 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0116 03:44:31.715897  507889 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:44:32.215978  507889 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:44:32.716439  507889 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:44:33.215609  507889 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:44:33.715785  507889 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:44:33.738611  507889 api_server.go:72] duration metric: took 2.523281585s to wait for apiserver process to appear ...
	I0116 03:44:33.738642  507889 api_server.go:88] waiting for apiserver healthz status ...
	I0116 03:44:33.738663  507889 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8444/healthz ...
	I0116 03:44:37.601011  507889 api_server.go:279] https://192.168.50.236:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0116 03:44:37.601052  507889 api_server.go:103] status: https://192.168.50.236:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0116 03:44:37.601072  507889 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8444/healthz ...
	I0116 03:44:37.678390  507889 api_server.go:279] https://192.168.50.236:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0116 03:44:37.678428  507889 api_server.go:103] status: https://192.168.50.236:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0116 03:44:37.739725  507889 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8444/healthz ...
	I0116 03:44:37.767384  507889 api_server.go:279] https://192.168.50.236:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0116 03:44:37.767425  507889 api_server.go:103] status: https://192.168.50.236:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0116 03:44:38.238992  507889 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8444/healthz ...
	I0116 03:44:38.253946  507889 api_server.go:279] https://192.168.50.236:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0116 03:44:38.253991  507889 api_server.go:103] status: https://192.168.50.236:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0116 03:44:38.738786  507889 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8444/healthz ...
	I0116 03:44:38.749091  507889 api_server.go:279] https://192.168.50.236:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0116 03:44:38.749135  507889 api_server.go:103] status: https://192.168.50.236:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0116 03:44:39.239814  507889 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8444/healthz ...
	I0116 03:44:39.245859  507889 api_server.go:279] https://192.168.50.236:8444/healthz returned 200:
	ok
	I0116 03:44:39.259198  507889 api_server.go:141] control plane version: v1.28.4
	I0116 03:44:39.259250  507889 api_server.go:131] duration metric: took 5.520598732s to wait for apiserver health ...
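
The healthz wait above tolerates the early 403 (anonymous access) and 500 (poststarthooks still failing) responses and only stops once /healthz returns 200. A minimal polling sketch against the same endpoint (the insecure TLS client and the 500ms interval are assumptions made so the example is self-contained; minikube's real client authenticates):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver's /healthz endpoint until it answers 200
// or the deadline passes, logging non-200 bodies the way the output above does.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("%s returned %d:\n%s\n", url, resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	if err := waitForHealthz("https://192.168.50.236:8444/healthz", 5*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("apiserver healthz ok")
}
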
	I0116 03:44:39.259265  507889 cni.go:84] Creating CNI manager for ""
	I0116 03:44:39.259277  507889 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 03:44:39.261389  507889 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0116 03:44:34.897727  507510 retry.go:31] will retry after 6.879493351s: kubelet not initialised
	I0116 03:44:37.975671  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:44:39.979781  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:44:37.524763  507257 crio.go:444] Took 2.112278 seconds to copy over tarball
	I0116 03:44:37.524843  507257 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0116 03:44:40.706515  507257 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.181629969s)
	I0116 03:44:40.706559  507257 crio.go:451] Took 3.181765 seconds to extract the tarball
	I0116 03:44:40.706574  507257 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0116 03:44:40.751207  507257 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 03:44:40.905548  507257 crio.go:496] all images are preloaded for cri-o runtime.
	I0116 03:44:40.905578  507257 cache_images.go:84] Images are preloaded, skipping loading
	I0116 03:44:40.905659  507257 ssh_runner.go:195] Run: crio config
	I0116 03:44:40.965159  507257 cni.go:84] Creating CNI manager for ""
	I0116 03:44:40.965194  507257 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 03:44:40.965220  507257 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0116 03:44:40.965263  507257 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.159 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-615980 NodeName:embed-certs-615980 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.159"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.159 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0116 03:44:40.965474  507257 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.159
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-615980"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.159
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.159"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0116 03:44:40.965578  507257 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-615980 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.159
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-615980 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0116 03:44:40.965634  507257 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0116 03:44:40.976015  507257 binaries.go:44] Found k8s binaries, skipping transfer
	I0116 03:44:40.976153  507257 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0116 03:44:40.986610  507257 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0116 03:44:41.005297  507257 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0116 03:44:41.026383  507257 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
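Annotation: the three scp lines above write the kubelet drop-in (10-kubeadm.conf), the base kubelet.service unit, and the rendered kubeadm.yaml.new onto the node. The effective kubelet unit can be inspected with standard systemd tooling; this is a general-purpose check, assuming the profile name doubles as the minikube ssh target, not something the test itself runs:

    # Print the kubelet unit together with the 10-kubeadm.conf drop-in written above.
    minikube ssh -p embed-certs-615980 "systemctl cat kubelet"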
	I0116 03:44:41.046554  507257 ssh_runner.go:195] Run: grep 192.168.72.159	control-plane.minikube.internal$ /etc/hosts
	I0116 03:44:41.050940  507257 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.159	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
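Annotation: the one-liner just above keeps /etc/hosts idempotent: any existing control-plane.minikube.internal entry is dropped, the current IP is appended, and the result is copied back with a single sudo cp. The same commands, reformatted for readability:

    # Rebuild /etc/hosts with exactly one control-plane.minikube.internal entry.
    {
      grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts   # drop any stale mapping
      echo "192.168.72.159	control-plane.minikube.internal"     # append the current one
    } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts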
	I0116 03:44:41.064516  507257 certs.go:56] Setting up /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/embed-certs-615980 for IP: 192.168.72.159
	I0116 03:44:41.064568  507257 certs.go:190] acquiring lock for shared ca certs: {Name:mkab5aeea3e0ed69f3592dc8f418e07c6075b130 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:44:41.064749  507257 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17965-468241/.minikube/ca.key
	I0116 03:44:41.064813  507257 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17965-468241/.minikube/proxy-client-ca.key
	I0116 03:44:41.064917  507257 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/embed-certs-615980/client.key
	I0116 03:44:41.064989  507257 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/embed-certs-615980/apiserver.key.fc98a751
	I0116 03:44:41.065044  507257 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/embed-certs-615980/proxy-client.key
	I0116 03:44:41.065202  507257 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/475478.pem (1338 bytes)
	W0116 03:44:41.065241  507257 certs.go:433] ignoring /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/475478_empty.pem, impossibly tiny 0 bytes
	I0116 03:44:41.065257  507257 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca-key.pem (1679 bytes)
	I0116 03:44:41.065294  507257 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem (1078 bytes)
	I0116 03:44:41.065331  507257 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/cert.pem (1123 bytes)
	I0116 03:44:41.065374  507257 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/key.pem (1679 bytes)
	I0116 03:44:41.065432  507257 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17965-468241/.minikube/files/etc/ssl/certs/4754782.pem (1708 bytes)
	I0116 03:44:41.066147  507257 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/embed-certs-615980/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0116 03:44:41.092714  507257 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/embed-certs-615980/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0116 03:44:41.119109  507257 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/embed-certs-615980/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0116 03:44:41.147059  507257 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/embed-certs-615980/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0116 03:44:41.176357  507257 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0116 03:44:41.202082  507257 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0116 03:44:41.228263  507257 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0116 03:44:41.252892  507257 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0116 03:44:39.263119  507889 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0116 03:44:39.290175  507889 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0116 03:44:39.319009  507889 system_pods.go:43] waiting for kube-system pods to appear ...
	I0116 03:44:39.341195  507889 system_pods.go:59] 9 kube-system pods found
	I0116 03:44:39.341251  507889 system_pods.go:61] "coredns-5dd5756b68-f8shl" [18bddcd6-4305-4856-b590-e16c362768e3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0116 03:44:39.341264  507889 system_pods.go:61] "coredns-5dd5756b68-pmx8n" [9d3c9941-8938-4d4a-b6d8-9b542ca6f1ca] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0116 03:44:39.341280  507889 system_pods.go:61] "etcd-default-k8s-diff-port-434445" [a2327159-d034-4f62-a8f5-14062a41c75d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0116 03:44:39.341293  507889 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-434445" [75c5fc5e-86b1-42cf-bb5c-8a1f8b4e13db] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0116 03:44:39.341310  507889 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-434445" [2568dd4f-124d-4c4f-8651-3b89b2b54983] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0116 03:44:39.341323  507889 system_pods.go:61] "kube-proxy-dcbqg" [eba1f9bf-6aa7-40cd-b57c-745c2d0cc414] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0116 03:44:39.341335  507889 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-434445" [479952a1-8c3d-49b4-bd22-fef988e6b83d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0116 03:44:39.341353  507889 system_pods.go:61] "metrics-server-57f55c9bc5-894n2" [46e4892a-d026-4a9d-88bc-128e92848808] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:44:39.341369  507889 system_pods.go:61] "storage-provisioner" [16fd4585-3d75-40c3-a28d-4134375f4e3d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0116 03:44:39.341391  507889 system_pods.go:74] duration metric: took 22.354405ms to wait for pod list to return data ...
	I0116 03:44:39.341403  507889 node_conditions.go:102] verifying NodePressure condition ...
	I0116 03:44:39.349904  507889 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 03:44:39.349954  507889 node_conditions.go:123] node cpu capacity is 2
	I0116 03:44:39.349972  507889 node_conditions.go:105] duration metric: took 8.557095ms to run NodePressure ...
	I0116 03:44:39.350000  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
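Annotation: the kubeadm invocation above runs only the addon phase against the already-initialised cluster; it re-applies the CoreDNS and kube-proxy addons from the supplied config and leaves certificates, etcd, and the control-plane static pods alone. For reference, the equivalent direct invocations look like this (the config path is the one this log uses; kubeadm is assumed to be on PATH rather than under /var/lib/minikube/binaries as in the log):

    # Re-apply both built-in addons from the rendered config, as the log does:
    sudo kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml
    # Or each addon on its own:
    sudo kubeadm init phase addon coredns --config /var/tmp/minikube/kubeadm.yaml
    sudo kubeadm init phase addon kube-proxy --config /var/tmp/minikube/kubeadm.yaml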
	I0116 03:44:39.798882  507889 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0116 03:44:39.816480  507889 kubeadm.go:787] kubelet initialised
	I0116 03:44:39.816514  507889 kubeadm.go:788] duration metric: took 17.598017ms waiting for restarted kubelet to initialise ...
	I0116 03:44:39.816527  507889 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 03:44:39.834946  507889 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-f8shl" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:39.854785  507889 pod_ready.go:97] node "default-k8s-diff-port-434445" hosting pod "coredns-5dd5756b68-f8shl" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-434445" has status "Ready":"False"
	I0116 03:44:39.854832  507889 pod_ready.go:81] duration metric: took 19.846427ms waiting for pod "coredns-5dd5756b68-f8shl" in "kube-system" namespace to be "Ready" ...
	E0116 03:44:39.854846  507889 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-434445" hosting pod "coredns-5dd5756b68-f8shl" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-434445" has status "Ready":"False"
	I0116 03:44:39.854864  507889 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-pmx8n" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:39.888659  507889 pod_ready.go:97] node "default-k8s-diff-port-434445" hosting pod "coredns-5dd5756b68-pmx8n" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-434445" has status "Ready":"False"
	I0116 03:44:39.888703  507889 pod_ready.go:81] duration metric: took 33.827201ms waiting for pod "coredns-5dd5756b68-pmx8n" in "kube-system" namespace to be "Ready" ...
	E0116 03:44:39.888718  507889 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-434445" hosting pod "coredns-5dd5756b68-pmx8n" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-434445" has status "Ready":"False"
	I0116 03:44:39.888728  507889 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-434445" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:39.897638  507889 pod_ready.go:97] node "default-k8s-diff-port-434445" hosting pod "etcd-default-k8s-diff-port-434445" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-434445" has status "Ready":"False"
	I0116 03:44:39.897674  507889 pod_ready.go:81] duration metric: took 8.927103ms waiting for pod "etcd-default-k8s-diff-port-434445" in "kube-system" namespace to be "Ready" ...
	E0116 03:44:39.897693  507889 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-434445" hosting pod "etcd-default-k8s-diff-port-434445" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-434445" has status "Ready":"False"
	I0116 03:44:39.897701  507889 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-434445" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:39.919418  507889 pod_ready.go:97] node "default-k8s-diff-port-434445" hosting pod "kube-apiserver-default-k8s-diff-port-434445" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-434445" has status "Ready":"False"
	I0116 03:44:39.919465  507889 pod_ready.go:81] duration metric: took 21.753159ms waiting for pod "kube-apiserver-default-k8s-diff-port-434445" in "kube-system" namespace to be "Ready" ...
	E0116 03:44:39.919495  507889 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-434445" hosting pod "kube-apiserver-default-k8s-diff-port-434445" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-434445" has status "Ready":"False"
	I0116 03:44:39.919505  507889 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-434445" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:40.203370  507889 pod_ready.go:97] node "default-k8s-diff-port-434445" hosting pod "kube-controller-manager-default-k8s-diff-port-434445" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-434445" has status "Ready":"False"
	I0116 03:44:40.203411  507889 pod_ready.go:81] duration metric: took 283.893646ms waiting for pod "kube-controller-manager-default-k8s-diff-port-434445" in "kube-system" namespace to be "Ready" ...
	E0116 03:44:40.203428  507889 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-434445" hosting pod "kube-controller-manager-default-k8s-diff-port-434445" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-434445" has status "Ready":"False"
	I0116 03:44:40.203440  507889 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-dcbqg" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:41.417889  507889 pod_ready.go:97] node "default-k8s-diff-port-434445" hosting pod "kube-proxy-dcbqg" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-434445" has status "Ready":"False"
	I0116 03:44:41.418011  507889 pod_ready.go:81] duration metric: took 1.214559235s waiting for pod "kube-proxy-dcbqg" in "kube-system" namespace to be "Ready" ...
	E0116 03:44:41.418033  507889 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-434445" hosting pod "kube-proxy-dcbqg" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-434445" has status "Ready":"False"
	I0116 03:44:41.418043  507889 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-434445" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:41.425177  507889 pod_ready.go:97] node "default-k8s-diff-port-434445" hosting pod "kube-scheduler-default-k8s-diff-port-434445" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-434445" has status "Ready":"False"
	I0116 03:44:41.425208  507889 pod_ready.go:81] duration metric: took 7.15251ms waiting for pod "kube-scheduler-default-k8s-diff-port-434445" in "kube-system" namespace to be "Ready" ...
	E0116 03:44:41.425220  507889 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-434445" hosting pod "kube-scheduler-default-k8s-diff-port-434445" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-434445" has status "Ready":"False"
	I0116 03:44:41.425226  507889 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:41.431059  507889 pod_ready.go:97] node "default-k8s-diff-port-434445" hosting pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-434445" has status "Ready":"False"
	I0116 03:44:41.431103  507889 pod_ready.go:81] duration metric: took 5.869165ms waiting for pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace to be "Ready" ...
	E0116 03:44:41.431115  507889 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-434445" hosting pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-434445" has status "Ready":"False"
	I0116 03:44:41.431122  507889 pod_ready.go:38] duration metric: took 1.614582832s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 03:44:41.431139  507889 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0116 03:44:41.445099  507889 ops.go:34] apiserver oom_adj: -16
	I0116 03:44:41.445129  507889 kubeadm.go:640] restartCluster took 21.83447374s
	I0116 03:44:41.445141  507889 kubeadm.go:406] StartCluster complete in 21.896543184s
	I0116 03:44:41.445168  507889 settings.go:142] acquiring lock: {Name:mk3ded99c06d285b174e76622bb0741c8dd2d8fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:44:41.445265  507889 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17965-468241/kubeconfig
	I0116 03:44:41.447590  507889 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17965-468241/kubeconfig: {Name:mkac3c63bbcba9b4fa1bea3480797757d703fb9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:44:41.544520  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0116 03:44:41.544743  507889 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0116 03:44:41.544842  507889 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-434445"
	I0116 03:44:41.544858  507889 config.go:182] Loaded profile config "default-k8s-diff-port-434445": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 03:44:41.544875  507889 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-434445"
	I0116 03:44:41.544891  507889 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-434445"
	W0116 03:44:41.544899  507889 addons.go:243] addon metrics-server should already be in state true
	I0116 03:44:41.544865  507889 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-434445"
	W0116 03:44:41.544915  507889 addons.go:243] addon storage-provisioner should already be in state true
	I0116 03:44:41.544971  507889 host.go:66] Checking if "default-k8s-diff-port-434445" exists ...
	I0116 03:44:41.544973  507889 host.go:66] Checking if "default-k8s-diff-port-434445" exists ...
	I0116 03:44:41.544862  507889 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-434445"
	I0116 03:44:41.545107  507889 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-434445"
	I0116 03:44:41.545473  507889 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:44:41.545479  507889 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:44:41.545505  507889 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:44:41.545673  507889 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:44:41.562983  507889 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44387
	I0116 03:44:41.562984  507889 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35715
	I0116 03:44:41.563677  507889 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:44:41.563684  507889 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:44:41.564352  507889 main.go:141] libmachine: Using API Version  1
	I0116 03:44:41.564382  507889 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:44:41.564540  507889 main.go:141] libmachine: Using API Version  1
	I0116 03:44:41.564569  507889 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:44:41.564753  507889 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:44:41.564937  507889 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:44:41.565113  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetState
	I0116 03:44:41.565350  507889 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:44:41.565418  507889 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:44:41.569050  507889 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-434445"
	W0116 03:44:41.569091  507889 addons.go:243] addon default-storageclass should already be in state true
	I0116 03:44:41.569125  507889 host.go:66] Checking if "default-k8s-diff-port-434445" exists ...
	I0116 03:44:41.569554  507889 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:44:41.569613  507889 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:44:41.584107  507889 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33349
	I0116 03:44:41.584756  507889 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:44:41.585422  507889 main.go:141] libmachine: Using API Version  1
	I0116 03:44:41.585457  507889 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:44:41.585634  507889 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42645
	I0116 03:44:41.585856  507889 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:44:41.586123  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetState
	I0116 03:44:41.586162  507889 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:44:41.586636  507889 main.go:141] libmachine: Using API Version  1
	I0116 03:44:41.586663  507889 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:44:41.587080  507889 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:44:41.587688  507889 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:44:41.587743  507889 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:44:41.588214  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .DriverName
	I0116 03:44:41.606456  507889 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40289
	I0116 03:44:41.644090  507889 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:44:41.819945  507889 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 03:44:41.929214  507889 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:44:41.929680  507889 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:44:42.246642  507889 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0116 03:44:42.246665  507889 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0116 03:44:42.246696  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHHostname
	I0116 03:44:42.247294  507889 main.go:141] libmachine: Using API Version  1
	I0116 03:44:42.247332  507889 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:44:42.247740  507889 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:44:42.247987  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetState
	I0116 03:44:42.250254  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .DriverName
	I0116 03:44:42.250570  507889 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0116 03:44:42.250588  507889 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0116 03:44:42.250609  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHHostname
	I0116 03:44:42.251130  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:42.251863  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ea:d5", ip: ""} in network mk-default-k8s-diff-port-434445: {Iface:virbr1 ExpiryTime:2024-01-16 04:44:01 +0000 UTC Type:0 Mac:52:54:00:78:ea:d5 Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:default-k8s-diff-port-434445 Clientid:01:52:54:00:78:ea:d5}
	I0116 03:44:42.251896  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined IP address 192.168.50.236 and MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:42.252245  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHPort
	I0116 03:44:42.252473  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHKeyPath
	I0116 03:44:42.252680  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHUsername
	I0116 03:44:42.252842  507889 sshutil.go:53] new ssh client: &{IP:192.168.50.236 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/default-k8s-diff-port-434445/id_rsa Username:docker}
	I0116 03:44:42.254224  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:42.254837  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ea:d5", ip: ""} in network mk-default-k8s-diff-port-434445: {Iface:virbr1 ExpiryTime:2024-01-16 04:44:01 +0000 UTC Type:0 Mac:52:54:00:78:ea:d5 Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:default-k8s-diff-port-434445 Clientid:01:52:54:00:78:ea:d5}
	I0116 03:44:42.254872  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined IP address 192.168.50.236 and MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:42.255050  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHPort
	I0116 03:44:42.255240  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHKeyPath
	I0116 03:44:42.255422  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHUsername
	I0116 03:44:42.255585  507889 sshutil.go:53] new ssh client: &{IP:192.168.50.236 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/default-k8s-diff-port-434445/id_rsa Username:docker}
	I0116 03:44:42.264367  507889 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36555
	I0116 03:44:42.264832  507889 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:44:42.265322  507889 main.go:141] libmachine: Using API Version  1
	I0116 03:44:42.265352  507889 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:44:42.265700  507889 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:44:42.266266  507889 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:44:42.266306  507889 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:44:42.281852  507889 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32929
	I0116 03:44:42.282351  507889 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:44:42.282914  507889 main.go:141] libmachine: Using API Version  1
	I0116 03:44:42.282944  507889 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:44:42.283363  507889 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:44:42.283599  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetState
	I0116 03:44:42.285584  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .DriverName
	I0116 03:44:42.395709  507889 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0116 03:44:42.398672  507889 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0116 03:44:42.493544  507889 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0116 03:44:42.531626  507889 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0116 03:44:42.531683  507889 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0116 03:44:42.531717  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHHostname
	I0116 03:44:42.535980  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:42.536575  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ea:d5", ip: ""} in network mk-default-k8s-diff-port-434445: {Iface:virbr1 ExpiryTime:2024-01-16 04:44:01 +0000 UTC Type:0 Mac:52:54:00:78:ea:d5 Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:default-k8s-diff-port-434445 Clientid:01:52:54:00:78:ea:d5}
	I0116 03:44:42.536604  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined IP address 192.168.50.236 and MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:42.537018  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHPort
	I0116 03:44:42.537286  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHKeyPath
	I0116 03:44:42.537510  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHUsername
	I0116 03:44:42.537850  507889 sshutil.go:53] new ssh client: &{IP:192.168.50.236 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/default-k8s-diff-port-434445/id_rsa Username:docker}
	I0116 03:44:42.545910  507889 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.001352094s)
	I0116 03:44:42.545983  507889 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0116 03:44:42.713693  507889 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0116 03:44:42.713718  507889 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0116 03:44:42.752674  507889 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0116 03:44:42.752717  507889 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0116 03:44:42.790178  507889 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0116 03:44:42.790214  507889 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0116 03:44:42.825256  507889 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0116 03:44:43.010741  507889 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-434445" context rescaled to 1 replicas
	I0116 03:44:43.010801  507889 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.236 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0116 03:44:43.014031  507889 out.go:177] * Verifying Kubernetes components...
	I0116 03:44:43.016143  507889 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 03:44:44.415462  507889 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.921726194s)
	I0116 03:44:44.415532  507889 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.921908068s)
	I0116 03:44:44.415547  507889 main.go:141] libmachine: Making call to close driver server
	I0116 03:44:44.415631  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .Close
	I0116 03:44:44.415579  507889 main.go:141] libmachine: Making call to close driver server
	I0116 03:44:44.415854  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .Close
	I0116 03:44:44.416266  507889 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:44:44.416376  507889 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:44:44.416393  507889 main.go:141] libmachine: Making call to close driver server
	I0116 03:44:44.416424  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .Close
	I0116 03:44:44.416310  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | Closing plugin on server side
	I0116 03:44:44.416310  507889 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:44:44.416595  507889 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:44:44.416658  507889 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:44:44.416671  507889 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:44:44.416977  507889 main.go:141] libmachine: Making call to close driver server
	I0116 03:44:44.417014  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .Close
	I0116 03:44:44.416332  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | Closing plugin on server side
	I0116 03:44:44.417305  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | Closing plugin on server side
	I0116 03:44:44.417358  507889 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:44:44.417375  507889 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:44:44.450870  507889 main.go:141] libmachine: Making call to close driver server
	I0116 03:44:44.450908  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .Close
	I0116 03:44:44.451327  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | Closing plugin on server side
	I0116 03:44:44.451367  507889 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:44:44.451378  507889 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:44:44.496654  507889 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.671338305s)
	I0116 03:44:44.496732  507889 main.go:141] libmachine: Making call to close driver server
	I0116 03:44:44.496744  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .Close
	I0116 03:44:44.496678  507889 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.480503621s)
	I0116 03:44:44.496845  507889 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-434445" to be "Ready" ...
	I0116 03:44:44.497092  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | Closing plugin on server side
	I0116 03:44:44.497088  507889 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:44:44.497166  507889 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:44:44.497188  507889 main.go:141] libmachine: Making call to close driver server
	I0116 03:44:44.497198  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .Close
	I0116 03:44:44.497445  507889 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:44:44.497489  507889 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:44:44.497499  507889 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-434445"
	I0116 03:44:44.497502  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | Closing plugin on server side
	I0116 03:44:44.500234  507889 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
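Annotation: with the metrics-server manifests applied above, the usual follow-up is to confirm that its APIService registered and the deployment is progressing. A quick sketch; the context name mirrors the profile, and v1beta1.metrics.k8s.io is the standard metrics-server APIService name rather than something this log prints:

    # Check that the metrics API is registered and the deployment exists.
    kubectl --context default-k8s-diff-port-434445 get apiservice v1beta1.metrics.k8s.io
    kubectl --context default-k8s-diff-port-434445 -n kube-system get deployment metrics-server
    # Once the APIService reports Available=True, resource metrics should flow:
    kubectl --context default-k8s-diff-port-434445 top nodes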
	I0116 03:44:42.355473  507510 retry.go:31] will retry after 6.423018357s: kubelet not initialised
	I0116 03:44:42.543045  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:44:44.974520  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:44:41.280410  507257 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/files/etc/ssl/certs/4754782.pem --> /usr/share/ca-certificates/4754782.pem (1708 bytes)
	I0116 03:44:41.488388  507257 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0116 03:44:41.515741  507257 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/certs/475478.pem --> /usr/share/ca-certificates/475478.pem (1338 bytes)
	I0116 03:44:41.541744  507257 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0116 03:44:41.564056  507257 ssh_runner.go:195] Run: openssl version
	I0116 03:44:41.571197  507257 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0116 03:44:41.586430  507257 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0116 03:44:41.592334  507257 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 16 02:35 /usr/share/ca-certificates/minikubeCA.pem
	I0116 03:44:41.592405  507257 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0116 03:44:41.599013  507257 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0116 03:44:41.612793  507257 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/475478.pem && ln -fs /usr/share/ca-certificates/475478.pem /etc/ssl/certs/475478.pem"
	I0116 03:44:41.624554  507257 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/475478.pem
	I0116 03:44:41.629558  507257 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 16 02:44 /usr/share/ca-certificates/475478.pem
	I0116 03:44:41.629643  507257 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/475478.pem
	I0116 03:44:41.635518  507257 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/475478.pem /etc/ssl/certs/51391683.0"
	I0116 03:44:41.649567  507257 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4754782.pem && ln -fs /usr/share/ca-certificates/4754782.pem /etc/ssl/certs/4754782.pem"
	I0116 03:44:41.662276  507257 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4754782.pem
	I0116 03:44:41.667618  507257 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 16 02:44 /usr/share/ca-certificates/4754782.pem
	I0116 03:44:41.667699  507257 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4754782.pem
	I0116 03:44:41.678158  507257 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4754782.pem /etc/ssl/certs/3ec20f2e.0"
	I0116 03:44:41.692147  507257 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0116 03:44:41.698226  507257 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0116 03:44:41.706563  507257 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0116 03:44:41.713387  507257 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0116 03:44:41.721243  507257 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0116 03:44:41.728346  507257 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0116 03:44:41.735446  507257 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
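Annotation: the -checkend 86400 flag in the batch of openssl runs above asks whether each certificate stays valid for at least the next 86400 seconds (24 hours): exit status 0 means it will not expire inside that window, non-zero means it will. For example, against one of the certs copied earlier in this log:

    # Exit 0 if the apiserver cert is good for at least another 24h, 1 otherwise.
    openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 \
      && echo "valid for >= 24h" || echo "expires within 24h"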
	I0116 03:44:41.743670  507257 kubeadm.go:404] StartCluster: {Name:embed-certs-615980 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-615980 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.159 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 03:44:41.743786  507257 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0116 03:44:41.743860  507257 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0116 03:44:41.799605  507257 cri.go:89] found id: ""
	I0116 03:44:41.799700  507257 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0116 03:44:41.812356  507257 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0116 03:44:41.812388  507257 kubeadm.go:636] restartCluster start
	I0116 03:44:41.812457  507257 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0116 03:44:41.823906  507257 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:41.825131  507257 kubeconfig.go:92] found "embed-certs-615980" server: "https://192.168.72.159:8443"
	I0116 03:44:41.827484  507257 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
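Annotation: the diff just above compares the kubeadm config minikube has just rendered (kubeadm.yaml.new, copied to the node earlier in this log) against the copy already in use; an unchanged config appears to be one of the signals that lets restartCluster reuse the existing control plane instead of re-initialising it. Reproduced by hand, with the minikube ssh wrapper as an assumption and the diff taken verbatim from the log:

    # Empty output means the rendered config matches what is already on the node.
    minikube ssh -p embed-certs-615980 "sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new"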
	I0116 03:44:41.838289  507257 api_server.go:166] Checking apiserver status ...
	I0116 03:44:41.838386  507257 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:41.852927  507257 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:42.338430  507257 api_server.go:166] Checking apiserver status ...
	I0116 03:44:42.338548  507257 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:42.353029  507257 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:42.838419  507257 api_server.go:166] Checking apiserver status ...
	I0116 03:44:42.838526  507257 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:42.854254  507257 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:43.338802  507257 api_server.go:166] Checking apiserver status ...
	I0116 03:44:43.338934  507257 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:43.356427  507257 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:43.839009  507257 api_server.go:166] Checking apiserver status ...
	I0116 03:44:43.839103  507257 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:43.853265  507257 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:44.338711  507257 api_server.go:166] Checking apiserver status ...
	I0116 03:44:44.338803  507257 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:44.353364  507257 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:44.838956  507257 api_server.go:166] Checking apiserver status ...
	I0116 03:44:44.839070  507257 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:44.851711  507257 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:45.339282  507257 api_server.go:166] Checking apiserver status ...
	I0116 03:44:45.339397  507257 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:45.354275  507257 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:45.838803  507257 api_server.go:166] Checking apiserver status ...
	I0116 03:44:45.838899  507257 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:45.853557  507257 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:44.501958  507889 addons.go:505] enable addons completed in 2.957229306s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0116 03:44:46.502807  507889 node_ready.go:58] node "default-k8s-diff-port-434445" has status "Ready":"False"
	I0116 03:44:48.786485  507510 retry.go:31] will retry after 18.441149821s: kubelet not initialised
	I0116 03:44:46.975660  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:44:48.981964  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:44:46.339198  507257 api_server.go:166] Checking apiserver status ...
	I0116 03:44:46.339328  507257 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:46.356092  507257 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:46.839356  507257 api_server.go:166] Checking apiserver status ...
	I0116 03:44:46.839461  507257 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:46.857070  507257 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:47.338405  507257 api_server.go:166] Checking apiserver status ...
	I0116 03:44:47.338546  507257 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:47.354976  507257 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:47.839369  507257 api_server.go:166] Checking apiserver status ...
	I0116 03:44:47.839468  507257 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:47.854465  507257 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:48.339102  507257 api_server.go:166] Checking apiserver status ...
	I0116 03:44:48.339217  507257 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:48.352361  507257 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:48.838853  507257 api_server.go:166] Checking apiserver status ...
	I0116 03:44:48.838968  507257 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:48.853271  507257 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:49.338643  507257 api_server.go:166] Checking apiserver status ...
	I0116 03:44:49.338751  507257 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:49.353674  507257 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:49.839214  507257 api_server.go:166] Checking apiserver status ...
	I0116 03:44:49.839309  507257 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:49.852699  507257 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:50.339060  507257 api_server.go:166] Checking apiserver status ...
	I0116 03:44:50.339186  507257 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:50.353143  507257 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:50.838646  507257 api_server.go:166] Checking apiserver status ...
	I0116 03:44:50.838782  507257 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:50.852767  507257 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:48.005726  507889 node_ready.go:49] node "default-k8s-diff-port-434445" has status "Ready":"True"
	I0116 03:44:48.005760  507889 node_ready.go:38] duration metric: took 3.508890685s waiting for node "default-k8s-diff-port-434445" to be "Ready" ...
	I0116 03:44:48.005775  507889 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 03:44:48.015385  507889 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-f8shl" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:48.027358  507889 pod_ready.go:92] pod "coredns-5dd5756b68-f8shl" in "kube-system" namespace has status "Ready":"True"
	I0116 03:44:48.027383  507889 pod_ready.go:81] duration metric: took 11.966322ms waiting for pod "coredns-5dd5756b68-f8shl" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:48.027397  507889 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-pmx8n" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:48.034156  507889 pod_ready.go:92] pod "coredns-5dd5756b68-pmx8n" in "kube-system" namespace has status "Ready":"True"
	I0116 03:44:48.034179  507889 pod_ready.go:81] duration metric: took 6.775784ms waiting for pod "coredns-5dd5756b68-pmx8n" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:48.034188  507889 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-434445" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:48.039933  507889 pod_ready.go:92] pod "etcd-default-k8s-diff-port-434445" in "kube-system" namespace has status "Ready":"True"
	I0116 03:44:48.039954  507889 pod_ready.go:81] duration metric: took 5.758946ms waiting for pod "etcd-default-k8s-diff-port-434445" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:48.039964  507889 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-434445" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:48.045351  507889 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-434445" in "kube-system" namespace has status "Ready":"True"
	I0116 03:44:48.045376  507889 pod_ready.go:81] duration metric: took 5.405684ms waiting for pod "kube-apiserver-default-k8s-diff-port-434445" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:48.045386  507889 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-434445" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:48.413479  507889 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-434445" in "kube-system" namespace has status "Ready":"True"
	I0116 03:44:48.413508  507889 pod_ready.go:81] duration metric: took 368.114361ms waiting for pod "kube-controller-manager-default-k8s-diff-port-434445" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:48.413522  507889 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-dcbqg" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:48.808095  507889 pod_ready.go:92] pod "kube-proxy-dcbqg" in "kube-system" namespace has status "Ready":"True"
	I0116 03:44:48.808132  507889 pod_ready.go:81] duration metric: took 394.600854ms waiting for pod "kube-proxy-dcbqg" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:48.808147  507889 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-434445" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:50.817248  507889 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-434445" in "kube-system" namespace has status "Ready":"False"
	I0116 03:44:51.474904  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:44:53.475529  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:44:55.475807  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:44:51.339105  507257 api_server.go:166] Checking apiserver status ...
	I0116 03:44:51.339225  507257 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:51.352821  507257 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:51.838856  507257 api_server.go:166] Checking apiserver status ...
	I0116 03:44:51.838985  507257 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:51.852211  507257 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:51.852258  507257 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0116 03:44:51.852271  507257 kubeadm.go:1135] stopping kube-system containers ...
	I0116 03:44:51.852289  507257 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0116 03:44:51.852360  507257 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0116 03:44:51.897049  507257 cri.go:89] found id: ""
	I0116 03:44:51.897139  507257 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0116 03:44:51.915124  507257 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0116 03:44:51.926221  507257 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0116 03:44:51.926311  507257 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0116 03:44:51.938314  507257 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0116 03:44:51.938358  507257 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:44:52.077173  507257 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:44:52.733999  507257 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:44:52.971172  507257 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:44:53.063705  507257 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:44:53.200256  507257 api_server.go:52] waiting for apiserver process to appear ...
	I0116 03:44:53.200364  507257 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:44:53.701337  507257 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:44:54.201266  507257 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:44:54.700485  507257 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:44:55.200720  507257 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:44:55.701348  507257 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:44:55.725792  507257 api_server.go:72] duration metric: took 2.52553608s to wait for apiserver process to appear ...
	I0116 03:44:55.725826  507257 api_server.go:88] waiting for apiserver healthz status ...
	I0116 03:44:55.725851  507257 api_server.go:253] Checking apiserver healthz at https://192.168.72.159:8443/healthz ...
	I0116 03:44:52.317689  507889 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-434445" in "kube-system" namespace has status "Ready":"True"
	I0116 03:44:52.317718  507889 pod_ready.go:81] duration metric: took 3.509561404s waiting for pod "kube-scheduler-default-k8s-diff-port-434445" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:52.317731  507889 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:54.326412  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:44:56.327634  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:44:57.974017  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:44:59.977499  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:44:59.850423  507257 api_server.go:279] https://192.168.72.159:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0116 03:44:59.850456  507257 api_server.go:103] status: https://192.168.72.159:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0116 03:44:59.850471  507257 api_server.go:253] Checking apiserver healthz at https://192.168.72.159:8443/healthz ...
	I0116 03:44:59.998251  507257 api_server.go:279] https://192.168.72.159:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0116 03:44:59.998310  507257 api_server.go:103] status: https://192.168.72.159:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0116 03:45:00.226594  507257 api_server.go:253] Checking apiserver healthz at https://192.168.72.159:8443/healthz ...
	I0116 03:45:00.233826  507257 api_server.go:279] https://192.168.72.159:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0116 03:45:00.233876  507257 api_server.go:103] status: https://192.168.72.159:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0116 03:45:00.726919  507257 api_server.go:253] Checking apiserver healthz at https://192.168.72.159:8443/healthz ...
	I0116 03:45:00.732711  507257 api_server.go:279] https://192.168.72.159:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0116 03:45:00.732748  507257 api_server.go:103] status: https://192.168.72.159:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0116 03:45:01.226693  507257 api_server.go:253] Checking apiserver healthz at https://192.168.72.159:8443/healthz ...
	I0116 03:45:01.232420  507257 api_server.go:279] https://192.168.72.159:8443/healthz returned 200:
	ok
	I0116 03:45:01.242029  507257 api_server.go:141] control plane version: v1.28.4
	I0116 03:45:01.242078  507257 api_server.go:131] duration metric: took 5.516243097s to wait for apiserver health ...
	I0116 03:45:01.242092  507257 cni.go:84] Creating CNI manager for ""
	I0116 03:45:01.242101  507257 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 03:45:01.244395  507257 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0116 03:45:01.246155  507257 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0116 03:44:58.827760  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:01.327190  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:02.475858  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:04.974991  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:01.270205  507257 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0116 03:45:01.350402  507257 system_pods.go:43] waiting for kube-system pods to appear ...
	I0116 03:45:01.384475  507257 system_pods.go:59] 8 kube-system pods found
	I0116 03:45:01.384536  507257 system_pods.go:61] "coredns-5dd5756b68-ddjkl" [fe342d2a-7d12-4b37-be29-c0d77b920964] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0116 03:45:01.384549  507257 system_pods.go:61] "etcd-embed-certs-615980" [7b7af2e1-b3bb-4c47-862b-838167453939] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0116 03:45:01.384562  507257 system_pods.go:61] "kube-apiserver-embed-certs-615980" [bb883c31-8391-467f-9b4a-affb05a56d49] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0116 03:45:01.384571  507257 system_pods.go:61] "kube-controller-manager-embed-certs-615980" [74f7c5e3-818c-4e15-b693-d4f81308bf9d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0116 03:45:01.384584  507257 system_pods.go:61] "kube-proxy-6jpr7" [e62c9202-8b4f-4fe7-8aa4-b931afd4b028] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0116 03:45:01.384602  507257 system_pods.go:61] "kube-scheduler-embed-certs-615980" [f03d5c9c-af6a-437b-92bb-7c5a46259bbd] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0116 03:45:01.384618  507257 system_pods.go:61] "metrics-server-57f55c9bc5-48gnw" [1fcb32b6-f985-428d-8f02-1198d704d8c7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:45:01.384632  507257 system_pods.go:61] "storage-provisioner" [6264adaa-89e8-4f0d-9394-d7325338a2f5] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0116 03:45:01.384642  507257 system_pods.go:74] duration metric: took 34.114711ms to wait for pod list to return data ...
	I0116 03:45:01.384656  507257 node_conditions.go:102] verifying NodePressure condition ...
	I0116 03:45:01.392555  507257 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 03:45:01.392597  507257 node_conditions.go:123] node cpu capacity is 2
	I0116 03:45:01.392614  507257 node_conditions.go:105] duration metric: took 7.946538ms to run NodePressure ...
	I0116 03:45:01.392644  507257 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:45:01.788178  507257 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0116 03:45:01.795913  507257 kubeadm.go:787] kubelet initialised
	I0116 03:45:01.795945  507257 kubeadm.go:788] duration metric: took 7.737644ms waiting for restarted kubelet to initialise ...
	I0116 03:45:01.795955  507257 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 03:45:01.806433  507257 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-ddjkl" in "kube-system" namespace to be "Ready" ...
	I0116 03:45:03.815645  507257 pod_ready.go:102] pod "coredns-5dd5756b68-ddjkl" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:05.821193  507257 pod_ready.go:92] pod "coredns-5dd5756b68-ddjkl" in "kube-system" namespace has status "Ready":"True"
	I0116 03:45:05.821231  507257 pod_ready.go:81] duration metric: took 4.014760393s waiting for pod "coredns-5dd5756b68-ddjkl" in "kube-system" namespace to be "Ready" ...
	I0116 03:45:05.821245  507257 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-615980" in "kube-system" namespace to be "Ready" ...
	I0116 03:45:03.825695  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:05.826742  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:07.234109  507510 kubeadm.go:787] kubelet initialised
	I0116 03:45:07.234137  507510 kubeadm.go:788] duration metric: took 45.657540747s waiting for restarted kubelet to initialise ...
	I0116 03:45:07.234145  507510 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 03:45:07.239858  507510 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-7q4wc" in "kube-system" namespace to be "Ready" ...
	I0116 03:45:07.247210  507510 pod_ready.go:92] pod "coredns-5644d7b6d9-7q4wc" in "kube-system" namespace has status "Ready":"True"
	I0116 03:45:07.247237  507510 pod_ready.go:81] duration metric: took 7.336988ms waiting for pod "coredns-5644d7b6d9-7q4wc" in "kube-system" namespace to be "Ready" ...
	I0116 03:45:07.247249  507510 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-q2gxq" in "kube-system" namespace to be "Ready" ...
	I0116 03:45:07.252865  507510 pod_ready.go:92] pod "coredns-5644d7b6d9-q2gxq" in "kube-system" namespace has status "Ready":"True"
	I0116 03:45:07.252900  507510 pod_ready.go:81] duration metric: took 5.642204ms waiting for pod "coredns-5644d7b6d9-q2gxq" in "kube-system" namespace to be "Ready" ...
	I0116 03:45:07.252925  507510 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-696770" in "kube-system" namespace to be "Ready" ...
	I0116 03:45:07.259169  507510 pod_ready.go:92] pod "etcd-old-k8s-version-696770" in "kube-system" namespace has status "Ready":"True"
	I0116 03:45:07.259193  507510 pod_ready.go:81] duration metric: took 6.260142ms waiting for pod "etcd-old-k8s-version-696770" in "kube-system" namespace to be "Ready" ...
	I0116 03:45:07.259202  507510 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-696770" in "kube-system" namespace to be "Ready" ...
	I0116 03:45:07.264591  507510 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-696770" in "kube-system" namespace has status "Ready":"True"
	I0116 03:45:07.264622  507510 pod_ready.go:81] duration metric: took 5.411866ms waiting for pod "kube-apiserver-old-k8s-version-696770" in "kube-system" namespace to be "Ready" ...
	I0116 03:45:07.264635  507510 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-696770" in "kube-system" namespace to be "Ready" ...
	I0116 03:45:07.632057  507510 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-696770" in "kube-system" namespace has status "Ready":"True"
	I0116 03:45:07.632093  507510 pod_ready.go:81] duration metric: took 367.447202ms waiting for pod "kube-controller-manager-old-k8s-version-696770" in "kube-system" namespace to be "Ready" ...
	I0116 03:45:07.632110  507510 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-9pfdj" in "kube-system" namespace to be "Ready" ...
	I0116 03:45:08.033002  507510 pod_ready.go:92] pod "kube-proxy-9pfdj" in "kube-system" namespace has status "Ready":"True"
	I0116 03:45:08.033028  507510 pod_ready.go:81] duration metric: took 400.910907ms waiting for pod "kube-proxy-9pfdj" in "kube-system" namespace to be "Ready" ...
	I0116 03:45:08.033038  507510 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-696770" in "kube-system" namespace to be "Ready" ...
	I0116 03:45:08.433134  507510 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-696770" in "kube-system" namespace has status "Ready":"True"
	I0116 03:45:08.433165  507510 pod_ready.go:81] duration metric: took 400.1203ms waiting for pod "kube-scheduler-old-k8s-version-696770" in "kube-system" namespace to be "Ready" ...
	I0116 03:45:08.433180  507510 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace to be "Ready" ...
	I0116 03:45:07.485372  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:09.979593  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:07.830703  507257 pod_ready.go:102] pod "etcd-embed-certs-615980" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:10.328466  507257 pod_ready.go:102] pod "etcd-embed-certs-615980" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:08.325925  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:10.825155  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:10.442598  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:12.941713  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:12.478975  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:14.480154  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:12.329199  507257 pod_ready.go:102] pod "etcd-embed-certs-615980" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:13.830177  507257 pod_ready.go:92] pod "etcd-embed-certs-615980" in "kube-system" namespace has status "Ready":"True"
	I0116 03:45:13.830207  507257 pod_ready.go:81] duration metric: took 8.008954008s waiting for pod "etcd-embed-certs-615980" in "kube-system" namespace to be "Ready" ...
	I0116 03:45:13.830217  507257 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-615980" in "kube-system" namespace to be "Ready" ...
	I0116 03:45:13.837420  507257 pod_ready.go:92] pod "kube-apiserver-embed-certs-615980" in "kube-system" namespace has status "Ready":"True"
	I0116 03:45:13.837448  507257 pod_ready.go:81] duration metric: took 7.22323ms waiting for pod "kube-apiserver-embed-certs-615980" in "kube-system" namespace to be "Ready" ...
	I0116 03:45:13.837461  507257 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-615980" in "kube-system" namespace to be "Ready" ...
	I0116 03:45:13.845996  507257 pod_ready.go:92] pod "kube-controller-manager-embed-certs-615980" in "kube-system" namespace has status "Ready":"True"
	I0116 03:45:13.846029  507257 pod_ready.go:81] duration metric: took 8.558317ms waiting for pod "kube-controller-manager-embed-certs-615980" in "kube-system" namespace to be "Ready" ...
	I0116 03:45:13.846040  507257 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-6jpr7" in "kube-system" namespace to be "Ready" ...
	I0116 03:45:13.852645  507257 pod_ready.go:92] pod "kube-proxy-6jpr7" in "kube-system" namespace has status "Ready":"True"
	I0116 03:45:13.852674  507257 pod_ready.go:81] duration metric: took 6.627181ms waiting for pod "kube-proxy-6jpr7" in "kube-system" namespace to be "Ready" ...
	I0116 03:45:13.852683  507257 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-615980" in "kube-system" namespace to be "Ready" ...
	I0116 03:45:13.858818  507257 pod_ready.go:92] pod "kube-scheduler-embed-certs-615980" in "kube-system" namespace has status "Ready":"True"
	I0116 03:45:13.858844  507257 pod_ready.go:81] duration metric: took 6.154319ms waiting for pod "kube-scheduler-embed-certs-615980" in "kube-system" namespace to be "Ready" ...
	I0116 03:45:13.858853  507257 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace to be "Ready" ...
	I0116 03:45:15.867133  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:12.826463  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:14.826507  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:14.942079  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:17.442566  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:16.976095  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:19.477899  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:17.868381  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:20.367064  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:17.326184  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:19.328194  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:19.942113  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:21.942853  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:24.441140  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:21.975337  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:24.474400  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:22.368008  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:24.866716  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:21.825428  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:23.825828  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:25.829356  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:26.441756  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:28.443869  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:26.475939  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:28.476308  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:26.866760  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:29.367575  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:28.326756  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:30.825813  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:30.942631  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:33.440480  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:30.975870  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:33.475828  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:31.866401  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:33.867719  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:33.325388  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:35.325485  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:35.939804  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:37.940883  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:35.974504  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:37.975857  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:39.977413  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:36.367513  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:38.865702  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:40.866834  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:37.325804  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:39.326635  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:40.440287  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:42.440838  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:44.441037  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:42.475940  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:44.981122  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:42.867673  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:45.368285  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:41.825982  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:43.826700  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:45.828002  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:46.443286  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:48.941625  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:47.474621  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:49.475149  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:47.867135  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:49.867865  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:48.326035  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:50.327538  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:50.943718  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:53.443986  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:51.977212  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:54.477161  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:52.368444  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:54.375089  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:52.826163  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:55.327160  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:55.940561  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:57.942988  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:56.975470  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:58.975829  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:56.867648  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:59.367479  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:57.826140  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:59.826286  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:00.440963  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:02.941202  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:00.979308  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:03.474099  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:05.478535  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:01.868806  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:04.368227  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:01.826702  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:04.325060  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:06.326882  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:05.441837  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:07.444944  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:07.975344  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:09.975486  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:06.868137  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:09.367752  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:08.329967  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:10.826182  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:09.940745  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:11.942989  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:14.441331  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:11.977171  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:13.977835  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:11.866817  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:13.867951  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:13.327232  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:15.826862  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:16.442525  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:18.442754  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:16.475367  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:18.475903  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:16.367830  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:18.368100  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:20.866302  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:18.326376  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:20.827236  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:20.940998  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:22.941332  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:20.980371  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:23.476451  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:22.868945  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:25.366857  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:23.326576  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:25.826000  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:25.442029  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:27.941061  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:25.974860  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:27.975178  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:29.978092  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:27.370097  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:29.869827  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:28.326735  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:30.826672  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:30.442579  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:32.941784  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:32.475984  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:34.973934  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:31.870772  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:34.367380  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:32.827910  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:34.828185  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:35.440418  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:37.441206  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:39.441254  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:36.974076  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:38.975169  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:36.867231  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:39.366005  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:37.327553  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:39.826218  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:41.941046  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:43.941530  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:40.976023  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:43.478194  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:41.367293  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:43.867097  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:45.867843  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:42.325426  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:44.325723  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:46.326155  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:46.441175  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:48.940677  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:45.974937  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:47.975141  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:50.474687  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:47.868006  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:49.868890  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:48.326634  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:50.326914  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:50.941220  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:53.440868  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:52.475138  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:54.475546  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:52.365917  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:54.366514  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:52.826279  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:55.324177  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:55.441130  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:57.943093  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:56.976380  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:59.478090  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:56.368894  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:58.868051  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:57.326296  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:59.326416  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:01.327894  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:00.440504  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:02.441176  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:04.442171  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:01.975498  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:03.978490  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:01.369736  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:03.871663  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:03.825943  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:05.828215  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:06.443721  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:08.940212  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:06.475354  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:08.975707  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:06.366468  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:08.366998  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:10.368019  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:08.326243  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:10.824873  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:10.942042  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:13.440495  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:11.475551  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:13.475904  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:12.867030  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:14.872409  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:12.826040  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:15.325658  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:15.941844  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:18.440574  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:15.975125  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:17.977326  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:20.474897  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:17.367390  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:19.369090  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:17.325860  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:19.829310  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:20.940407  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:22.941824  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:22.475218  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:24.477773  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:21.866953  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:23.867055  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:22.326660  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:24.327689  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:25.441214  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:27.442253  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:26.975120  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:29.477805  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:26.367295  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:28.867376  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:26.826666  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:29.327606  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:29.940650  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:31.941021  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:34.443144  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:31.978544  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:34.475301  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:31.367770  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:33.867084  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:35.870968  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:31.826565  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:34.326677  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:36.941363  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:38.942121  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:36.974797  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:38.975027  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:38.368025  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:40.866714  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:36.828347  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:39.327130  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:41.441555  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:43.442806  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:40.977172  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:43.476163  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:43.367966  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:45.867460  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:41.826087  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:43.826389  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:46.326497  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:45.941267  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:48.443875  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:45.974452  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:47.977610  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:50.475536  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:48.367053  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:50.368023  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:48.824924  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:50.825835  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:50.941125  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:52.941644  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:52.975726  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:55.476453  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:52.866871  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:55.367951  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:52.826166  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:54.826434  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:55.442084  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:57.442829  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:57.974382  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:59.974448  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:57.867742  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:00.366490  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:57.325608  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:59.825525  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:59.939515  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:01.941648  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:03.942290  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:01.975159  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:03.977002  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:02.366764  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:04.366818  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:01.831740  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:04.326341  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:06.440494  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:08.940336  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:06.475364  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:08.482783  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:06.367160  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:08.867294  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:06.825331  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:08.826594  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:11.324828  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:10.942696  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:13.441805  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:10.974798  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:12.975009  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:14.976154  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:11.366189  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:13.369852  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:15.867536  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:13.327353  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:15.825738  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:15.941304  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:17.942206  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:17.474204  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:19.475630  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:19.974269  507339 pod_ready.go:81] duration metric: took 4m0.007375913s waiting for pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace to be "Ready" ...
	E0116 03:48:19.974299  507339 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0116 03:48:19.974310  507339 pod_ready.go:38] duration metric: took 4m6.26032663s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 03:48:19.974365  507339 api_server.go:52] waiting for apiserver process to appear ...
	I0116 03:48:19.974431  507339 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0116 03:48:19.974529  507339 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0116 03:48:20.042853  507339 cri.go:89] found id: "de79f87bc28449077936386f603cc5df933f63455bc96cf6ff7e7c6e4bd32bc4"
	I0116 03:48:20.042886  507339 cri.go:89] found id: ""
	I0116 03:48:20.042896  507339 logs.go:284] 1 containers: [de79f87bc28449077936386f603cc5df933f63455bc96cf6ff7e7c6e4bd32bc4]
	I0116 03:48:20.042961  507339 ssh_runner.go:195] Run: which crictl
	I0116 03:48:20.049795  507339 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0116 03:48:20.049884  507339 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0116 03:48:20.092507  507339 cri.go:89] found id: "01aaf51cd40b9137f2395404bb15b7372927f81dc8a4e884600dc8cba9bbeb8e"
	I0116 03:48:20.092541  507339 cri.go:89] found id: ""
	I0116 03:48:20.092551  507339 logs.go:284] 1 containers: [01aaf51cd40b9137f2395404bb15b7372927f81dc8a4e884600dc8cba9bbeb8e]
	I0116 03:48:20.092619  507339 ssh_runner.go:195] Run: which crictl
	I0116 03:48:20.097091  507339 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0116 03:48:20.097176  507339 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0116 03:48:20.139182  507339 cri.go:89] found id: "c13ef036a1014a6bbc5c179a0889fb6ff615988713589da58240b7c637adf687"
	I0116 03:48:20.139218  507339 cri.go:89] found id: ""
	I0116 03:48:20.139229  507339 logs.go:284] 1 containers: [c13ef036a1014a6bbc5c179a0889fb6ff615988713589da58240b7c637adf687]
	I0116 03:48:20.139297  507339 ssh_runner.go:195] Run: which crictl
	I0116 03:48:20.145129  507339 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0116 03:48:20.145210  507339 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0116 03:48:20.191055  507339 cri.go:89] found id: "33381edd7dded1b5ed158738429b4a7f1f03a6171f2af0832bb5a9864c950725"
	I0116 03:48:20.191090  507339 cri.go:89] found id: ""
	I0116 03:48:20.191098  507339 logs.go:284] 1 containers: [33381edd7dded1b5ed158738429b4a7f1f03a6171f2af0832bb5a9864c950725]
	I0116 03:48:20.191161  507339 ssh_runner.go:195] Run: which crictl
	I0116 03:48:20.195688  507339 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0116 03:48:20.195765  507339 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0116 03:48:20.242718  507339 cri.go:89] found id: "eba2964f029ac61d9cfb59dc548ae05c832f9b23713f51924291fbfc5de985ed"
	I0116 03:48:20.242746  507339 cri.go:89] found id: ""
	I0116 03:48:20.242754  507339 logs.go:284] 1 containers: [eba2964f029ac61d9cfb59dc548ae05c832f9b23713f51924291fbfc5de985ed]
	I0116 03:48:20.242819  507339 ssh_runner.go:195] Run: which crictl
	I0116 03:48:20.247312  507339 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0116 03:48:20.247399  507339 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0116 03:48:20.287981  507339 cri.go:89] found id: "802d4c55aa04370ff079f09dfe4670f5dac1142ca6db880afa17be75f1d20a76"
	I0116 03:48:20.288009  507339 cri.go:89] found id: ""
	I0116 03:48:20.288018  507339 logs.go:284] 1 containers: [802d4c55aa04370ff079f09dfe4670f5dac1142ca6db880afa17be75f1d20a76]
	I0116 03:48:20.288097  507339 ssh_runner.go:195] Run: which crictl
	I0116 03:48:20.292370  507339 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0116 03:48:20.292449  507339 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0116 03:48:20.335778  507339 cri.go:89] found id: ""
	I0116 03:48:20.335816  507339 logs.go:284] 0 containers: []
	W0116 03:48:20.335828  507339 logs.go:286] No container was found matching "kindnet"
	I0116 03:48:20.335838  507339 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0116 03:48:20.335906  507339 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0116 03:48:20.381698  507339 cri.go:89] found id: "b7164c1b7732cf6d54642cb9ffc8d9f16696adc0e428c3fa16cf914420d73e1d"
	I0116 03:48:20.381722  507339 cri.go:89] found id: "59754e94eb3cf3fecfedb9077fbad2ecc2618c351fa9bfa3faed0d780fdd9ecb"
	I0116 03:48:20.381727  507339 cri.go:89] found id: ""
	I0116 03:48:20.381734  507339 logs.go:284] 2 containers: [b7164c1b7732cf6d54642cb9ffc8d9f16696adc0e428c3fa16cf914420d73e1d 59754e94eb3cf3fecfedb9077fbad2ecc2618c351fa9bfa3faed0d780fdd9ecb]
	I0116 03:48:20.381790  507339 ssh_runner.go:195] Run: which crictl
	I0116 03:48:20.386880  507339 ssh_runner.go:195] Run: which crictl
	I0116 03:48:20.391292  507339 logs.go:123] Gathering logs for describe nodes ...
	I0116 03:48:20.391324  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0116 03:48:20.528154  507339 logs.go:123] Gathering logs for storage-provisioner [b7164c1b7732cf6d54642cb9ffc8d9f16696adc0e428c3fa16cf914420d73e1d] ...
	I0116 03:48:20.528197  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b7164c1b7732cf6d54642cb9ffc8d9f16696adc0e428c3fa16cf914420d73e1d"
	I0116 03:48:20.586645  507339 logs.go:123] Gathering logs for CRI-O ...
	I0116 03:48:20.586680  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0116 03:48:18.367415  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:20.867678  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:18.325849  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:20.326141  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:20.442138  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:22.442180  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:21.096109  507339 logs.go:123] Gathering logs for kubelet ...
	I0116 03:48:21.096155  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0116 03:48:21.154531  507339 logs.go:123] Gathering logs for coredns [c13ef036a1014a6bbc5c179a0889fb6ff615988713589da58240b7c637adf687] ...
	I0116 03:48:21.154577  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c13ef036a1014a6bbc5c179a0889fb6ff615988713589da58240b7c637adf687"
	I0116 03:48:21.203708  507339 logs.go:123] Gathering logs for dmesg ...
	I0116 03:48:21.203760  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0116 03:48:21.219320  507339 logs.go:123] Gathering logs for etcd [01aaf51cd40b9137f2395404bb15b7372927f81dc8a4e884600dc8cba9bbeb8e] ...
	I0116 03:48:21.219362  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 01aaf51cd40b9137f2395404bb15b7372927f81dc8a4e884600dc8cba9bbeb8e"
	I0116 03:48:21.271759  507339 logs.go:123] Gathering logs for kube-proxy [eba2964f029ac61d9cfb59dc548ae05c832f9b23713f51924291fbfc5de985ed] ...
	I0116 03:48:21.271812  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eba2964f029ac61d9cfb59dc548ae05c832f9b23713f51924291fbfc5de985ed"
	I0116 03:48:21.316786  507339 logs.go:123] Gathering logs for kube-controller-manager [802d4c55aa04370ff079f09dfe4670f5dac1142ca6db880afa17be75f1d20a76] ...
	I0116 03:48:21.316825  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 802d4c55aa04370ff079f09dfe4670f5dac1142ca6db880afa17be75f1d20a76"
	I0116 03:48:21.383743  507339 logs.go:123] Gathering logs for storage-provisioner [59754e94eb3cf3fecfedb9077fbad2ecc2618c351fa9bfa3faed0d780fdd9ecb] ...
	I0116 03:48:21.383783  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 59754e94eb3cf3fecfedb9077fbad2ecc2618c351fa9bfa3faed0d780fdd9ecb"
	I0116 03:48:21.422893  507339 logs.go:123] Gathering logs for kube-apiserver [de79f87bc28449077936386f603cc5df933f63455bc96cf6ff7e7c6e4bd32bc4] ...
	I0116 03:48:21.422926  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de79f87bc28449077936386f603cc5df933f63455bc96cf6ff7e7c6e4bd32bc4"
	I0116 03:48:21.473295  507339 logs.go:123] Gathering logs for kube-scheduler [33381edd7dded1b5ed158738429b4a7f1f03a6171f2af0832bb5a9864c950725] ...
	I0116 03:48:21.473332  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33381edd7dded1b5ed158738429b4a7f1f03a6171f2af0832bb5a9864c950725"
	I0116 03:48:21.527066  507339 logs.go:123] Gathering logs for container status ...
	I0116 03:48:21.527110  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0116 03:48:24.085743  507339 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:48:24.105359  507339 api_server.go:72] duration metric: took 4m17.107229414s to wait for apiserver process to appear ...
	I0116 03:48:24.105395  507339 api_server.go:88] waiting for apiserver healthz status ...
	I0116 03:48:24.105450  507339 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0116 03:48:24.105567  507339 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0116 03:48:24.154626  507339 cri.go:89] found id: "de79f87bc28449077936386f603cc5df933f63455bc96cf6ff7e7c6e4bd32bc4"
	I0116 03:48:24.154659  507339 cri.go:89] found id: ""
	I0116 03:48:24.154668  507339 logs.go:284] 1 containers: [de79f87bc28449077936386f603cc5df933f63455bc96cf6ff7e7c6e4bd32bc4]
	I0116 03:48:24.154720  507339 ssh_runner.go:195] Run: which crictl
	I0116 03:48:24.159657  507339 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0116 03:48:24.159735  507339 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0116 03:48:24.202635  507339 cri.go:89] found id: "01aaf51cd40b9137f2395404bb15b7372927f81dc8a4e884600dc8cba9bbeb8e"
	I0116 03:48:24.202663  507339 cri.go:89] found id: ""
	I0116 03:48:24.202671  507339 logs.go:284] 1 containers: [01aaf51cd40b9137f2395404bb15b7372927f81dc8a4e884600dc8cba9bbeb8e]
	I0116 03:48:24.202725  507339 ssh_runner.go:195] Run: which crictl
	I0116 03:48:24.207503  507339 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0116 03:48:24.207578  507339 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0116 03:48:24.253893  507339 cri.go:89] found id: "c13ef036a1014a6bbc5c179a0889fb6ff615988713589da58240b7c637adf687"
	I0116 03:48:24.253934  507339 cri.go:89] found id: ""
	I0116 03:48:24.253945  507339 logs.go:284] 1 containers: [c13ef036a1014a6bbc5c179a0889fb6ff615988713589da58240b7c637adf687]
	I0116 03:48:24.254016  507339 ssh_runner.go:195] Run: which crictl
	I0116 03:48:24.258649  507339 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0116 03:48:24.258733  507339 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0116 03:48:24.306636  507339 cri.go:89] found id: "33381edd7dded1b5ed158738429b4a7f1f03a6171f2af0832bb5a9864c950725"
	I0116 03:48:24.306662  507339 cri.go:89] found id: ""
	I0116 03:48:24.306670  507339 logs.go:284] 1 containers: [33381edd7dded1b5ed158738429b4a7f1f03a6171f2af0832bb5a9864c950725]
	I0116 03:48:24.306721  507339 ssh_runner.go:195] Run: which crictl
	I0116 03:48:24.311270  507339 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0116 03:48:24.311357  507339 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0116 03:48:24.354635  507339 cri.go:89] found id: "eba2964f029ac61d9cfb59dc548ae05c832f9b23713f51924291fbfc5de985ed"
	I0116 03:48:24.354671  507339 cri.go:89] found id: ""
	I0116 03:48:24.354683  507339 logs.go:284] 1 containers: [eba2964f029ac61d9cfb59dc548ae05c832f9b23713f51924291fbfc5de985ed]
	I0116 03:48:24.354756  507339 ssh_runner.go:195] Run: which crictl
	I0116 03:48:24.359806  507339 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0116 03:48:24.359889  507339 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0116 03:48:24.418188  507339 cri.go:89] found id: "802d4c55aa04370ff079f09dfe4670f5dac1142ca6db880afa17be75f1d20a76"
	I0116 03:48:24.418239  507339 cri.go:89] found id: ""
	I0116 03:48:24.418251  507339 logs.go:284] 1 containers: [802d4c55aa04370ff079f09dfe4670f5dac1142ca6db880afa17be75f1d20a76]
	I0116 03:48:24.418330  507339 ssh_runner.go:195] Run: which crictl
	I0116 03:48:24.422943  507339 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0116 03:48:24.423030  507339 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0116 03:48:24.467349  507339 cri.go:89] found id: ""
	I0116 03:48:24.467383  507339 logs.go:284] 0 containers: []
	W0116 03:48:24.467394  507339 logs.go:286] No container was found matching "kindnet"
	I0116 03:48:24.467403  507339 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0116 03:48:24.467466  507339 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0116 03:48:24.517490  507339 cri.go:89] found id: "b7164c1b7732cf6d54642cb9ffc8d9f16696adc0e428c3fa16cf914420d73e1d"
	I0116 03:48:24.517525  507339 cri.go:89] found id: "59754e94eb3cf3fecfedb9077fbad2ecc2618c351fa9bfa3faed0d780fdd9ecb"
	I0116 03:48:24.517539  507339 cri.go:89] found id: ""
	I0116 03:48:24.517548  507339 logs.go:284] 2 containers: [b7164c1b7732cf6d54642cb9ffc8d9f16696adc0e428c3fa16cf914420d73e1d 59754e94eb3cf3fecfedb9077fbad2ecc2618c351fa9bfa3faed0d780fdd9ecb]
	I0116 03:48:24.517619  507339 ssh_runner.go:195] Run: which crictl
	I0116 03:48:24.521952  507339 ssh_runner.go:195] Run: which crictl
	I0116 03:48:24.526246  507339 logs.go:123] Gathering logs for kube-apiserver [de79f87bc28449077936386f603cc5df933f63455bc96cf6ff7e7c6e4bd32bc4] ...
	I0116 03:48:24.526277  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de79f87bc28449077936386f603cc5df933f63455bc96cf6ff7e7c6e4bd32bc4"
	I0116 03:48:24.583067  507339 logs.go:123] Gathering logs for kube-proxy [eba2964f029ac61d9cfb59dc548ae05c832f9b23713f51924291fbfc5de985ed] ...
	I0116 03:48:24.583108  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eba2964f029ac61d9cfb59dc548ae05c832f9b23713f51924291fbfc5de985ed"
	I0116 03:48:24.631278  507339 logs.go:123] Gathering logs for CRI-O ...
	I0116 03:48:24.631312  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0116 03:48:25.099279  507339 logs.go:123] Gathering logs for describe nodes ...
	I0116 03:48:25.099330  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0116 03:48:25.241388  507339 logs.go:123] Gathering logs for etcd [01aaf51cd40b9137f2395404bb15b7372927f81dc8a4e884600dc8cba9bbeb8e] ...
	I0116 03:48:25.241433  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 01aaf51cd40b9137f2395404bb15b7372927f81dc8a4e884600dc8cba9bbeb8e"
	I0116 03:48:25.298748  507339 logs.go:123] Gathering logs for storage-provisioner [59754e94eb3cf3fecfedb9077fbad2ecc2618c351fa9bfa3faed0d780fdd9ecb] ...
	I0116 03:48:25.298787  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 59754e94eb3cf3fecfedb9077fbad2ecc2618c351fa9bfa3faed0d780fdd9ecb"
	I0116 03:48:25.338169  507339 logs.go:123] Gathering logs for kubelet ...
	I0116 03:48:25.338204  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0116 03:48:25.396275  507339 logs.go:123] Gathering logs for coredns [c13ef036a1014a6bbc5c179a0889fb6ff615988713589da58240b7c637adf687] ...
	I0116 03:48:25.396320  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c13ef036a1014a6bbc5c179a0889fb6ff615988713589da58240b7c637adf687"
	I0116 03:48:25.448028  507339 logs.go:123] Gathering logs for storage-provisioner [b7164c1b7732cf6d54642cb9ffc8d9f16696adc0e428c3fa16cf914420d73e1d] ...
	I0116 03:48:25.448087  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b7164c1b7732cf6d54642cb9ffc8d9f16696adc0e428c3fa16cf914420d73e1d"
	I0116 03:48:25.492640  507339 logs.go:123] Gathering logs for container status ...
	I0116 03:48:25.492673  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0116 03:48:25.541478  507339 logs.go:123] Gathering logs for dmesg ...
	I0116 03:48:25.541572  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0116 03:48:25.557537  507339 logs.go:123] Gathering logs for kube-scheduler [33381edd7dded1b5ed158738429b4a7f1f03a6171f2af0832bb5a9864c950725] ...
	I0116 03:48:25.557569  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33381edd7dded1b5ed158738429b4a7f1f03a6171f2af0832bb5a9864c950725"
	I0116 03:48:25.599921  507339 logs.go:123] Gathering logs for kube-controller-manager [802d4c55aa04370ff079f09dfe4670f5dac1142ca6db880afa17be75f1d20a76] ...
	I0116 03:48:25.599956  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 802d4c55aa04370ff079f09dfe4670f5dac1142ca6db880afa17be75f1d20a76"
	I0116 03:48:23.368308  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:25.368495  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:22.825098  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:24.827094  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:24.942708  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:27.441008  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:29.452010  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:28.158281  507339 api_server.go:253] Checking apiserver healthz at https://192.168.39.103:8443/healthz ...
	I0116 03:48:28.165500  507339 api_server.go:279] https://192.168.39.103:8443/healthz returned 200:
	ok
	I0116 03:48:28.166907  507339 api_server.go:141] control plane version: v1.29.0-rc.2
	I0116 03:48:28.166933  507339 api_server.go:131] duration metric: took 4.061531357s to wait for apiserver health ...
	I0116 03:48:28.166943  507339 system_pods.go:43] waiting for kube-system pods to appear ...
	I0116 03:48:28.166996  507339 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0116 03:48:28.167056  507339 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0116 03:48:28.209247  507339 cri.go:89] found id: "de79f87bc28449077936386f603cc5df933f63455bc96cf6ff7e7c6e4bd32bc4"
	I0116 03:48:28.209282  507339 cri.go:89] found id: ""
	I0116 03:48:28.209295  507339 logs.go:284] 1 containers: [de79f87bc28449077936386f603cc5df933f63455bc96cf6ff7e7c6e4bd32bc4]
	I0116 03:48:28.209361  507339 ssh_runner.go:195] Run: which crictl
	I0116 03:48:28.214044  507339 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0116 03:48:28.214126  507339 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0116 03:48:28.263791  507339 cri.go:89] found id: "01aaf51cd40b9137f2395404bb15b7372927f81dc8a4e884600dc8cba9bbeb8e"
	I0116 03:48:28.263817  507339 cri.go:89] found id: ""
	I0116 03:48:28.263825  507339 logs.go:284] 1 containers: [01aaf51cd40b9137f2395404bb15b7372927f81dc8a4e884600dc8cba9bbeb8e]
	I0116 03:48:28.263889  507339 ssh_runner.go:195] Run: which crictl
	I0116 03:48:28.268803  507339 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0116 03:48:28.268893  507339 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0116 03:48:28.311035  507339 cri.go:89] found id: "c13ef036a1014a6bbc5c179a0889fb6ff615988713589da58240b7c637adf687"
	I0116 03:48:28.311062  507339 cri.go:89] found id: ""
	I0116 03:48:28.311070  507339 logs.go:284] 1 containers: [c13ef036a1014a6bbc5c179a0889fb6ff615988713589da58240b7c637adf687]
	I0116 03:48:28.311132  507339 ssh_runner.go:195] Run: which crictl
	I0116 03:48:28.315791  507339 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0116 03:48:28.315871  507339 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0116 03:48:28.366917  507339 cri.go:89] found id: "33381edd7dded1b5ed158738429b4a7f1f03a6171f2af0832bb5a9864c950725"
	I0116 03:48:28.366947  507339 cri.go:89] found id: ""
	I0116 03:48:28.366957  507339 logs.go:284] 1 containers: [33381edd7dded1b5ed158738429b4a7f1f03a6171f2af0832bb5a9864c950725]
	I0116 03:48:28.367028  507339 ssh_runner.go:195] Run: which crictl
	I0116 03:48:28.372648  507339 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0116 03:48:28.372723  507339 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0116 03:48:28.415530  507339 cri.go:89] found id: "eba2964f029ac61d9cfb59dc548ae05c832f9b23713f51924291fbfc5de985ed"
	I0116 03:48:28.415566  507339 cri.go:89] found id: ""
	I0116 03:48:28.415577  507339 logs.go:284] 1 containers: [eba2964f029ac61d9cfb59dc548ae05c832f9b23713f51924291fbfc5de985ed]
	I0116 03:48:28.415669  507339 ssh_runner.go:195] Run: which crictl
	I0116 03:48:28.420784  507339 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0116 03:48:28.420865  507339 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0116 03:48:28.474238  507339 cri.go:89] found id: "802d4c55aa04370ff079f09dfe4670f5dac1142ca6db880afa17be75f1d20a76"
	I0116 03:48:28.474262  507339 cri.go:89] found id: ""
	I0116 03:48:28.474270  507339 logs.go:284] 1 containers: [802d4c55aa04370ff079f09dfe4670f5dac1142ca6db880afa17be75f1d20a76]
	I0116 03:48:28.474335  507339 ssh_runner.go:195] Run: which crictl
	I0116 03:48:28.479547  507339 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0116 03:48:28.479637  507339 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0116 03:48:28.526403  507339 cri.go:89] found id: ""
	I0116 03:48:28.526436  507339 logs.go:284] 0 containers: []
	W0116 03:48:28.526455  507339 logs.go:286] No container was found matching "kindnet"
	I0116 03:48:28.526466  507339 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0116 03:48:28.526535  507339 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0116 03:48:28.572958  507339 cri.go:89] found id: "b7164c1b7732cf6d54642cb9ffc8d9f16696adc0e428c3fa16cf914420d73e1d"
	I0116 03:48:28.572988  507339 cri.go:89] found id: "59754e94eb3cf3fecfedb9077fbad2ecc2618c351fa9bfa3faed0d780fdd9ecb"
	I0116 03:48:28.572994  507339 cri.go:89] found id: ""
	I0116 03:48:28.573002  507339 logs.go:284] 2 containers: [b7164c1b7732cf6d54642cb9ffc8d9f16696adc0e428c3fa16cf914420d73e1d 59754e94eb3cf3fecfedb9077fbad2ecc2618c351fa9bfa3faed0d780fdd9ecb]
	I0116 03:48:28.573064  507339 ssh_runner.go:195] Run: which crictl
	I0116 03:48:28.579388  507339 ssh_runner.go:195] Run: which crictl
	I0116 03:48:28.585318  507339 logs.go:123] Gathering logs for kube-apiserver [de79f87bc28449077936386f603cc5df933f63455bc96cf6ff7e7c6e4bd32bc4] ...
	I0116 03:48:28.585355  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de79f87bc28449077936386f603cc5df933f63455bc96cf6ff7e7c6e4bd32bc4"
	I0116 03:48:28.640376  507339 logs.go:123] Gathering logs for kube-controller-manager [802d4c55aa04370ff079f09dfe4670f5dac1142ca6db880afa17be75f1d20a76] ...
	I0116 03:48:28.640419  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 802d4c55aa04370ff079f09dfe4670f5dac1142ca6db880afa17be75f1d20a76"
	I0116 03:48:28.701292  507339 logs.go:123] Gathering logs for storage-provisioner [59754e94eb3cf3fecfedb9077fbad2ecc2618c351fa9bfa3faed0d780fdd9ecb] ...
	I0116 03:48:28.701332  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 59754e94eb3cf3fecfedb9077fbad2ecc2618c351fa9bfa3faed0d780fdd9ecb"
	I0116 03:48:28.744571  507339 logs.go:123] Gathering logs for container status ...
	I0116 03:48:28.744605  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0116 03:48:28.794905  507339 logs.go:123] Gathering logs for kubelet ...
	I0116 03:48:28.794942  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0116 03:48:28.847687  507339 logs.go:123] Gathering logs for dmesg ...
	I0116 03:48:28.847736  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0116 03:48:28.861641  507339 logs.go:123] Gathering logs for describe nodes ...
	I0116 03:48:28.861690  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0116 03:48:29.036673  507339 logs.go:123] Gathering logs for kube-proxy [eba2964f029ac61d9cfb59dc548ae05c832f9b23713f51924291fbfc5de985ed] ...
	I0116 03:48:29.036709  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eba2964f029ac61d9cfb59dc548ae05c832f9b23713f51924291fbfc5de985ed"
	I0116 03:48:29.084792  507339 logs.go:123] Gathering logs for CRI-O ...
	I0116 03:48:29.084823  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0116 03:48:29.449656  507339 logs.go:123] Gathering logs for etcd [01aaf51cd40b9137f2395404bb15b7372927f81dc8a4e884600dc8cba9bbeb8e] ...
	I0116 03:48:29.449707  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 01aaf51cd40b9137f2395404bb15b7372927f81dc8a4e884600dc8cba9bbeb8e"
	I0116 03:48:29.502412  507339 logs.go:123] Gathering logs for storage-provisioner [b7164c1b7732cf6d54642cb9ffc8d9f16696adc0e428c3fa16cf914420d73e1d] ...
	I0116 03:48:29.502460  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b7164c1b7732cf6d54642cb9ffc8d9f16696adc0e428c3fa16cf914420d73e1d"
	I0116 03:48:29.546471  507339 logs.go:123] Gathering logs for coredns [c13ef036a1014a6bbc5c179a0889fb6ff615988713589da58240b7c637adf687] ...
	I0116 03:48:29.546520  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c13ef036a1014a6bbc5c179a0889fb6ff615988713589da58240b7c637adf687"
	I0116 03:48:29.594282  507339 logs.go:123] Gathering logs for kube-scheduler [33381edd7dded1b5ed158738429b4a7f1f03a6171f2af0832bb5a9864c950725] ...
	I0116 03:48:29.594329  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33381edd7dded1b5ed158738429b4a7f1f03a6171f2af0832bb5a9864c950725"
	I0116 03:48:27.867485  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:29.868504  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:27.324987  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:29.325330  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:31.329373  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:32.146165  507339 system_pods.go:59] 8 kube-system pods found
	I0116 03:48:32.146209  507339 system_pods.go:61] "coredns-76f75df574-lr95b" [15dc0b11-f7ec-4729-bbfa-79b9649fbad6] Running
	I0116 03:48:32.146218  507339 system_pods.go:61] "etcd-no-preload-666547" [f98dcc57-5869-4a35-b443-2e6c9beddd88] Running
	I0116 03:48:32.146225  507339 system_pods.go:61] "kube-apiserver-no-preload-666547" [3aceae9d-224a-4e8c-a02d-ad4199d8d558] Running
	I0116 03:48:32.146232  507339 system_pods.go:61] "kube-controller-manager-no-preload-666547" [af39c89c-3b41-4d6f-b075-16de04a3ecc0] Running
	I0116 03:48:32.146238  507339 system_pods.go:61] "kube-proxy-dcmrn" [1e91c96f-cbc5-424d-a09e-06e34bf7a2e2] Running
	I0116 03:48:32.146244  507339 system_pods.go:61] "kube-scheduler-no-preload-666547" [0f2aa713-aebc-4858-a348-32169021235e] Running
	I0116 03:48:32.146253  507339 system_pods.go:61] "metrics-server-57f55c9bc5-78vfj" [dbd2d3b2-ec0f-4253-8549-7c4299522c37] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:48:32.146261  507339 system_pods.go:61] "storage-provisioner" [f4e1ba45-217d-41d5-b583-2f60044879bc] Running
	I0116 03:48:32.146272  507339 system_pods.go:74] duration metric: took 3.979321091s to wait for pod list to return data ...
	I0116 03:48:32.146286  507339 default_sa.go:34] waiting for default service account to be created ...
	I0116 03:48:32.149674  507339 default_sa.go:45] found service account: "default"
	I0116 03:48:32.149702  507339 default_sa.go:55] duration metric: took 3.408362ms for default service account to be created ...
	I0116 03:48:32.149710  507339 system_pods.go:116] waiting for k8s-apps to be running ...
	I0116 03:48:32.160459  507339 system_pods.go:86] 8 kube-system pods found
	I0116 03:48:32.160495  507339 system_pods.go:89] "coredns-76f75df574-lr95b" [15dc0b11-f7ec-4729-bbfa-79b9649fbad6] Running
	I0116 03:48:32.160503  507339 system_pods.go:89] "etcd-no-preload-666547" [f98dcc57-5869-4a35-b443-2e6c9beddd88] Running
	I0116 03:48:32.160510  507339 system_pods.go:89] "kube-apiserver-no-preload-666547" [3aceae9d-224a-4e8c-a02d-ad4199d8d558] Running
	I0116 03:48:32.160518  507339 system_pods.go:89] "kube-controller-manager-no-preload-666547" [af39c89c-3b41-4d6f-b075-16de04a3ecc0] Running
	I0116 03:48:32.160524  507339 system_pods.go:89] "kube-proxy-dcmrn" [1e91c96f-cbc5-424d-a09e-06e34bf7a2e2] Running
	I0116 03:48:32.160529  507339 system_pods.go:89] "kube-scheduler-no-preload-666547" [0f2aa713-aebc-4858-a348-32169021235e] Running
	I0116 03:48:32.160540  507339 system_pods.go:89] "metrics-server-57f55c9bc5-78vfj" [dbd2d3b2-ec0f-4253-8549-7c4299522c37] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:48:32.160548  507339 system_pods.go:89] "storage-provisioner" [f4e1ba45-217d-41d5-b583-2f60044879bc] Running
	I0116 03:48:32.160560  507339 system_pods.go:126] duration metric: took 10.843124ms to wait for k8s-apps to be running ...
	I0116 03:48:32.160569  507339 system_svc.go:44] waiting for kubelet service to be running ....
	I0116 03:48:32.160629  507339 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 03:48:32.179349  507339 system_svc.go:56] duration metric: took 18.767357ms WaitForService to wait for kubelet.
	I0116 03:48:32.179391  507339 kubeadm.go:581] duration metric: took 4m25.181271548s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0116 03:48:32.179426  507339 node_conditions.go:102] verifying NodePressure condition ...
	I0116 03:48:32.185135  507339 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 03:48:32.185165  507339 node_conditions.go:123] node cpu capacity is 2
	I0116 03:48:32.185198  507339 node_conditions.go:105] duration metric: took 5.766084ms to run NodePressure ...
	I0116 03:48:32.185219  507339 start.go:228] waiting for startup goroutines ...
	I0116 03:48:32.185228  507339 start.go:233] waiting for cluster config update ...
	I0116 03:48:32.185269  507339 start.go:242] writing updated cluster config ...
	I0116 03:48:32.185860  507339 ssh_runner.go:195] Run: rm -f paused
	I0116 03:48:32.243812  507339 start.go:600] kubectl: 1.29.0, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0116 03:48:32.246056  507339 out.go:177] * Done! kubectl is now configured to use "no-preload-666547" cluster and "default" namespace by default
	I0116 03:48:31.940664  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:33.941163  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:31.868778  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:34.367292  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:33.825761  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:35.829060  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:36.440459  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:38.440778  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:36.367672  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:38.867024  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:40.867193  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:38.325077  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:40.326947  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:40.440990  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:42.942197  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:43.365931  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:45.367057  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:42.826200  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:44.827292  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:45.441601  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:47.443035  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:47.367959  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:49.867083  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:47.326224  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:49.326339  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:49.940592  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:51.942424  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:54.440478  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:51.868254  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:54.368867  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:51.825317  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:52.325756  507889 pod_ready.go:81] duration metric: took 4m0.008011182s waiting for pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace to be "Ready" ...
	E0116 03:48:52.325782  507889 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0116 03:48:52.325790  507889 pod_ready.go:38] duration metric: took 4m4.320002841s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 03:48:52.325804  507889 api_server.go:52] waiting for apiserver process to appear ...
	I0116 03:48:52.325855  507889 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0116 03:48:52.325905  507889 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0116 03:48:52.394600  507889 cri.go:89] found id: "f9861ff0fbab72660648bfcb49b82810bada99f73a55cb0aa359ba114825b8f3"
	I0116 03:48:52.394624  507889 cri.go:89] found id: ""
	I0116 03:48:52.394632  507889 logs.go:284] 1 containers: [f9861ff0fbab72660648bfcb49b82810bada99f73a55cb0aa359ba114825b8f3]
	I0116 03:48:52.394716  507889 ssh_runner.go:195] Run: which crictl
	I0116 03:48:52.400137  507889 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0116 03:48:52.400232  507889 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0116 03:48:52.444453  507889 cri.go:89] found id: "e2758ac4468b10765b35629ffc54228655bc18f79a654cc03e91929ebdc3cf90"
	I0116 03:48:52.444485  507889 cri.go:89] found id: ""
	I0116 03:48:52.444495  507889 logs.go:284] 1 containers: [e2758ac4468b10765b35629ffc54228655bc18f79a654cc03e91929ebdc3cf90]
	I0116 03:48:52.444557  507889 ssh_runner.go:195] Run: which crictl
	I0116 03:48:52.449850  507889 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0116 03:48:52.450002  507889 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0116 03:48:52.499160  507889 cri.go:89] found id: "a07ae23e6e9e3c589264d0be7095896549a4b71fa2d0b06805a3b1b3bea16f6a"
	I0116 03:48:52.499204  507889 cri.go:89] found id: ""
	I0116 03:48:52.499216  507889 logs.go:284] 1 containers: [a07ae23e6e9e3c589264d0be7095896549a4b71fa2d0b06805a3b1b3bea16f6a]
	I0116 03:48:52.499286  507889 ssh_runner.go:195] Run: which crictl
	I0116 03:48:52.504257  507889 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0116 03:48:52.504357  507889 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0116 03:48:52.563747  507889 cri.go:89] found id: "e60387e0e2800d67c99eb3566f31ad675c0db6c22379fc28ee9a447af5f5023c"
	I0116 03:48:52.563782  507889 cri.go:89] found id: ""
	I0116 03:48:52.563790  507889 logs.go:284] 1 containers: [e60387e0e2800d67c99eb3566f31ad675c0db6c22379fc28ee9a447af5f5023c]
	I0116 03:48:52.563860  507889 ssh_runner.go:195] Run: which crictl
	I0116 03:48:52.568676  507889 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0116 03:48:52.568771  507889 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0116 03:48:52.617090  507889 cri.go:89] found id: "44f71a7069827d97c7326cbd36638ec36ff39cbbe0a1e4e61e7c010a38d2e123"
	I0116 03:48:52.617136  507889 cri.go:89] found id: ""
	I0116 03:48:52.617149  507889 logs.go:284] 1 containers: [44f71a7069827d97c7326cbd36638ec36ff39cbbe0a1e4e61e7c010a38d2e123]
	I0116 03:48:52.617222  507889 ssh_runner.go:195] Run: which crictl
	I0116 03:48:52.622121  507889 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0116 03:48:52.622224  507889 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0116 03:48:52.685004  507889 cri.go:89] found id: "1438a3832328a03b94c3d408a2e332c0fb0adab915a9d4f26994e84c0509fac9"
	I0116 03:48:52.685033  507889 cri.go:89] found id: ""
	I0116 03:48:52.685043  507889 logs.go:284] 1 containers: [1438a3832328a03b94c3d408a2e332c0fb0adab915a9d4f26994e84c0509fac9]
	I0116 03:48:52.685113  507889 ssh_runner.go:195] Run: which crictl
	I0116 03:48:52.689837  507889 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0116 03:48:52.689913  507889 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0116 03:48:52.730008  507889 cri.go:89] found id: ""
	I0116 03:48:52.730034  507889 logs.go:284] 0 containers: []
	W0116 03:48:52.730044  507889 logs.go:286] No container was found matching "kindnet"
	I0116 03:48:52.730051  507889 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0116 03:48:52.730120  507889 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0116 03:48:52.780523  507889 cri.go:89] found id: "33ba3a03d878abc86a194defd715bb61a0066e49063e3823002bd3ec421da138"
	I0116 03:48:52.780554  507889 cri.go:89] found id: "a4b27881ef90cea325e8f306fac88a76052eec15a693c291288664b8f4ebcc2a"
	I0116 03:48:52.780562  507889 cri.go:89] found id: ""
	I0116 03:48:52.780571  507889 logs.go:284] 2 containers: [33ba3a03d878abc86a194defd715bb61a0066e49063e3823002bd3ec421da138 a4b27881ef90cea325e8f306fac88a76052eec15a693c291288664b8f4ebcc2a]
	I0116 03:48:52.780641  507889 ssh_runner.go:195] Run: which crictl
	I0116 03:48:52.787305  507889 ssh_runner.go:195] Run: which crictl
	I0116 03:48:52.791352  507889 logs.go:123] Gathering logs for kubelet ...
	I0116 03:48:52.791383  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0116 03:48:52.859099  507889 logs.go:123] Gathering logs for etcd [e2758ac4468b10765b35629ffc54228655bc18f79a654cc03e91929ebdc3cf90] ...
	I0116 03:48:52.859152  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e2758ac4468b10765b35629ffc54228655bc18f79a654cc03e91929ebdc3cf90"
	I0116 03:48:52.912806  507889 logs.go:123] Gathering logs for coredns [a07ae23e6e9e3c589264d0be7095896549a4b71fa2d0b06805a3b1b3bea16f6a] ...
	I0116 03:48:52.912852  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a07ae23e6e9e3c589264d0be7095896549a4b71fa2d0b06805a3b1b3bea16f6a"
	I0116 03:48:52.960880  507889 logs.go:123] Gathering logs for kube-controller-manager [1438a3832328a03b94c3d408a2e332c0fb0adab915a9d4f26994e84c0509fac9] ...
	I0116 03:48:52.960919  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1438a3832328a03b94c3d408a2e332c0fb0adab915a9d4f26994e84c0509fac9"
	I0116 03:48:53.023064  507889 logs.go:123] Gathering logs for CRI-O ...
	I0116 03:48:53.023110  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0116 03:48:53.524890  507889 logs.go:123] Gathering logs for kube-apiserver [f9861ff0fbab72660648bfcb49b82810bada99f73a55cb0aa359ba114825b8f3] ...
	I0116 03:48:53.524934  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f9861ff0fbab72660648bfcb49b82810bada99f73a55cb0aa359ba114825b8f3"
	I0116 03:48:53.587550  507889 logs.go:123] Gathering logs for kube-scheduler [e60387e0e2800d67c99eb3566f31ad675c0db6c22379fc28ee9a447af5f5023c] ...
	I0116 03:48:53.587594  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e60387e0e2800d67c99eb3566f31ad675c0db6c22379fc28ee9a447af5f5023c"
	I0116 03:48:53.627986  507889 logs.go:123] Gathering logs for kube-proxy [44f71a7069827d97c7326cbd36638ec36ff39cbbe0a1e4e61e7c010a38d2e123] ...
	I0116 03:48:53.628029  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44f71a7069827d97c7326cbd36638ec36ff39cbbe0a1e4e61e7c010a38d2e123"
	I0116 03:48:53.671704  507889 logs.go:123] Gathering logs for dmesg ...
	I0116 03:48:53.671739  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0116 03:48:53.686333  507889 logs.go:123] Gathering logs for describe nodes ...
	I0116 03:48:53.686370  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0116 03:48:53.855391  507889 logs.go:123] Gathering logs for storage-provisioner [33ba3a03d878abc86a194defd715bb61a0066e49063e3823002bd3ec421da138] ...
	I0116 03:48:53.855435  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33ba3a03d878abc86a194defd715bb61a0066e49063e3823002bd3ec421da138"
	I0116 03:48:53.906028  507889 logs.go:123] Gathering logs for storage-provisioner [a4b27881ef90cea325e8f306fac88a76052eec15a693c291288664b8f4ebcc2a] ...
	I0116 03:48:53.906064  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a4b27881ef90cea325e8f306fac88a76052eec15a693c291288664b8f4ebcc2a"
	I0116 03:48:53.945386  507889 logs.go:123] Gathering logs for container status ...
	I0116 03:48:53.945419  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0116 03:48:56.498685  507889 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:48:56.516768  507889 api_server.go:72] duration metric: took 4m13.505914609s to wait for apiserver process to appear ...
	I0116 03:48:56.516797  507889 api_server.go:88] waiting for apiserver healthz status ...
	I0116 03:48:56.516836  507889 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0116 03:48:56.516907  507889 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0116 03:48:56.563236  507889 cri.go:89] found id: "f9861ff0fbab72660648bfcb49b82810bada99f73a55cb0aa359ba114825b8f3"
	I0116 03:48:56.563272  507889 cri.go:89] found id: ""
	I0116 03:48:56.563283  507889 logs.go:284] 1 containers: [f9861ff0fbab72660648bfcb49b82810bada99f73a55cb0aa359ba114825b8f3]
	I0116 03:48:56.563356  507889 ssh_runner.go:195] Run: which crictl
	I0116 03:48:56.568012  507889 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0116 03:48:56.568188  507889 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0116 03:48:56.443226  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:58.940353  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:56.868597  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:59.366906  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:56.613095  507889 cri.go:89] found id: "e2758ac4468b10765b35629ffc54228655bc18f79a654cc03e91929ebdc3cf90"
	I0116 03:48:56.613120  507889 cri.go:89] found id: ""
	I0116 03:48:56.613129  507889 logs.go:284] 1 containers: [e2758ac4468b10765b35629ffc54228655bc18f79a654cc03e91929ebdc3cf90]
	I0116 03:48:56.613190  507889 ssh_runner.go:195] Run: which crictl
	I0116 03:48:56.618736  507889 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0116 03:48:56.618827  507889 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0116 03:48:56.672773  507889 cri.go:89] found id: "a07ae23e6e9e3c589264d0be7095896549a4b71fa2d0b06805a3b1b3bea16f6a"
	I0116 03:48:56.672796  507889 cri.go:89] found id: ""
	I0116 03:48:56.672805  507889 logs.go:284] 1 containers: [a07ae23e6e9e3c589264d0be7095896549a4b71fa2d0b06805a3b1b3bea16f6a]
	I0116 03:48:56.672855  507889 ssh_runner.go:195] Run: which crictl
	I0116 03:48:56.679218  507889 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0116 03:48:56.679293  507889 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0116 03:48:56.724517  507889 cri.go:89] found id: "e60387e0e2800d67c99eb3566f31ad675c0db6c22379fc28ee9a447af5f5023c"
	I0116 03:48:56.724547  507889 cri.go:89] found id: ""
	I0116 03:48:56.724555  507889 logs.go:284] 1 containers: [e60387e0e2800d67c99eb3566f31ad675c0db6c22379fc28ee9a447af5f5023c]
	I0116 03:48:56.724622  507889 ssh_runner.go:195] Run: which crictl
	I0116 03:48:56.730061  507889 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0116 03:48:56.730146  507889 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0116 03:48:56.775380  507889 cri.go:89] found id: "44f71a7069827d97c7326cbd36638ec36ff39cbbe0a1e4e61e7c010a38d2e123"
	I0116 03:48:56.775413  507889 cri.go:89] found id: ""
	I0116 03:48:56.775423  507889 logs.go:284] 1 containers: [44f71a7069827d97c7326cbd36638ec36ff39cbbe0a1e4e61e7c010a38d2e123]
	I0116 03:48:56.775494  507889 ssh_runner.go:195] Run: which crictl
	I0116 03:48:56.781085  507889 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0116 03:48:56.781183  507889 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0116 03:48:56.830030  507889 cri.go:89] found id: "1438a3832328a03b94c3d408a2e332c0fb0adab915a9d4f26994e84c0509fac9"
	I0116 03:48:56.830067  507889 cri.go:89] found id: ""
	I0116 03:48:56.830076  507889 logs.go:284] 1 containers: [1438a3832328a03b94c3d408a2e332c0fb0adab915a9d4f26994e84c0509fac9]
	I0116 03:48:56.830163  507889 ssh_runner.go:195] Run: which crictl
	I0116 03:48:56.834956  507889 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0116 03:48:56.835035  507889 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0116 03:48:56.882972  507889 cri.go:89] found id: ""
	I0116 03:48:56.883001  507889 logs.go:284] 0 containers: []
	W0116 03:48:56.883013  507889 logs.go:286] No container was found matching "kindnet"
	I0116 03:48:56.883022  507889 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0116 03:48:56.883095  507889 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0116 03:48:56.925520  507889 cri.go:89] found id: "33ba3a03d878abc86a194defd715bb61a0066e49063e3823002bd3ec421da138"
	I0116 03:48:56.925553  507889 cri.go:89] found id: "a4b27881ef90cea325e8f306fac88a76052eec15a693c291288664b8f4ebcc2a"
	I0116 03:48:56.925560  507889 cri.go:89] found id: ""
	I0116 03:48:56.925574  507889 logs.go:284] 2 containers: [33ba3a03d878abc86a194defd715bb61a0066e49063e3823002bd3ec421da138 a4b27881ef90cea325e8f306fac88a76052eec15a693c291288664b8f4ebcc2a]
	I0116 03:48:56.925656  507889 ssh_runner.go:195] Run: which crictl
	I0116 03:48:56.931331  507889 ssh_runner.go:195] Run: which crictl
	I0116 03:48:56.936492  507889 logs.go:123] Gathering logs for storage-provisioner [a4b27881ef90cea325e8f306fac88a76052eec15a693c291288664b8f4ebcc2a] ...
	I0116 03:48:56.936527  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a4b27881ef90cea325e8f306fac88a76052eec15a693c291288664b8f4ebcc2a"
	I0116 03:48:56.981819  507889 logs.go:123] Gathering logs for kubelet ...
	I0116 03:48:56.981851  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0116 03:48:57.045678  507889 logs.go:123] Gathering logs for dmesg ...
	I0116 03:48:57.045723  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0116 03:48:57.060832  507889 logs.go:123] Gathering logs for etcd [e2758ac4468b10765b35629ffc54228655bc18f79a654cc03e91929ebdc3cf90] ...
	I0116 03:48:57.060872  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e2758ac4468b10765b35629ffc54228655bc18f79a654cc03e91929ebdc3cf90"
	I0116 03:48:57.123644  507889 logs.go:123] Gathering logs for kube-scheduler [e60387e0e2800d67c99eb3566f31ad675c0db6c22379fc28ee9a447af5f5023c] ...
	I0116 03:48:57.123695  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e60387e0e2800d67c99eb3566f31ad675c0db6c22379fc28ee9a447af5f5023c"
	I0116 03:48:57.170173  507889 logs.go:123] Gathering logs for kube-proxy [44f71a7069827d97c7326cbd36638ec36ff39cbbe0a1e4e61e7c010a38d2e123] ...
	I0116 03:48:57.170216  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44f71a7069827d97c7326cbd36638ec36ff39cbbe0a1e4e61e7c010a38d2e123"
	I0116 03:48:57.215434  507889 logs.go:123] Gathering logs for describe nodes ...
	I0116 03:48:57.215470  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0116 03:48:57.370036  507889 logs.go:123] Gathering logs for kube-controller-manager [1438a3832328a03b94c3d408a2e332c0fb0adab915a9d4f26994e84c0509fac9] ...
	I0116 03:48:57.370081  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1438a3832328a03b94c3d408a2e332c0fb0adab915a9d4f26994e84c0509fac9"
	I0116 03:48:57.432988  507889 logs.go:123] Gathering logs for container status ...
	I0116 03:48:57.433048  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0116 03:48:57.485239  507889 logs.go:123] Gathering logs for kube-apiserver [f9861ff0fbab72660648bfcb49b82810bada99f73a55cb0aa359ba114825b8f3] ...
	I0116 03:48:57.485284  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f9861ff0fbab72660648bfcb49b82810bada99f73a55cb0aa359ba114825b8f3"
	I0116 03:48:57.547192  507889 logs.go:123] Gathering logs for coredns [a07ae23e6e9e3c589264d0be7095896549a4b71fa2d0b06805a3b1b3bea16f6a] ...
	I0116 03:48:57.547237  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a07ae23e6e9e3c589264d0be7095896549a4b71fa2d0b06805a3b1b3bea16f6a"
	I0116 03:48:57.598025  507889 logs.go:123] Gathering logs for storage-provisioner [33ba3a03d878abc86a194defd715bb61a0066e49063e3823002bd3ec421da138] ...
	I0116 03:48:57.598085  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33ba3a03d878abc86a194defd715bb61a0066e49063e3823002bd3ec421da138"
	I0116 03:48:57.644234  507889 logs.go:123] Gathering logs for CRI-O ...
	I0116 03:48:57.644271  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0116 03:49:00.562219  507889 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8444/healthz ...
	I0116 03:49:00.568196  507889 api_server.go:279] https://192.168.50.236:8444/healthz returned 200:
	ok
	I0116 03:49:00.571612  507889 api_server.go:141] control plane version: v1.28.4
	I0116 03:49:00.571655  507889 api_server.go:131] duration metric: took 4.0548511s to wait for apiserver health ...
	I0116 03:49:00.571668  507889 system_pods.go:43] waiting for kube-system pods to appear ...
	I0116 03:49:00.571701  507889 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0116 03:49:00.571774  507889 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0116 03:49:00.623308  507889 cri.go:89] found id: "f9861ff0fbab72660648bfcb49b82810bada99f73a55cb0aa359ba114825b8f3"
	I0116 03:49:00.623344  507889 cri.go:89] found id: ""
	I0116 03:49:00.623355  507889 logs.go:284] 1 containers: [f9861ff0fbab72660648bfcb49b82810bada99f73a55cb0aa359ba114825b8f3]
	I0116 03:49:00.623418  507889 ssh_runner.go:195] Run: which crictl
	I0116 03:49:00.630287  507889 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0116 03:49:00.630381  507889 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0116 03:49:00.673225  507889 cri.go:89] found id: "e2758ac4468b10765b35629ffc54228655bc18f79a654cc03e91929ebdc3cf90"
	I0116 03:49:00.673265  507889 cri.go:89] found id: ""
	I0116 03:49:00.673276  507889 logs.go:284] 1 containers: [e2758ac4468b10765b35629ffc54228655bc18f79a654cc03e91929ebdc3cf90]
	I0116 03:49:00.673334  507889 ssh_runner.go:195] Run: which crictl
	I0116 03:49:00.678677  507889 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0116 03:49:00.678768  507889 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0116 03:49:00.723055  507889 cri.go:89] found id: "a07ae23e6e9e3c589264d0be7095896549a4b71fa2d0b06805a3b1b3bea16f6a"
	I0116 03:49:00.723081  507889 cri.go:89] found id: ""
	I0116 03:49:00.723089  507889 logs.go:284] 1 containers: [a07ae23e6e9e3c589264d0be7095896549a4b71fa2d0b06805a3b1b3bea16f6a]
	I0116 03:49:00.723148  507889 ssh_runner.go:195] Run: which crictl
	I0116 03:49:00.727931  507889 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0116 03:49:00.728053  507889 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0116 03:49:00.777602  507889 cri.go:89] found id: "e60387e0e2800d67c99eb3566f31ad675c0db6c22379fc28ee9a447af5f5023c"
	I0116 03:49:00.777639  507889 cri.go:89] found id: ""
	I0116 03:49:00.777651  507889 logs.go:284] 1 containers: [e60387e0e2800d67c99eb3566f31ad675c0db6c22379fc28ee9a447af5f5023c]
	I0116 03:49:00.777723  507889 ssh_runner.go:195] Run: which crictl
	I0116 03:49:00.787121  507889 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0116 03:49:00.787206  507889 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0116 03:49:00.835268  507889 cri.go:89] found id: "44f71a7069827d97c7326cbd36638ec36ff39cbbe0a1e4e61e7c010a38d2e123"
	I0116 03:49:00.835300  507889 cri.go:89] found id: ""
	I0116 03:49:00.835310  507889 logs.go:284] 1 containers: [44f71a7069827d97c7326cbd36638ec36ff39cbbe0a1e4e61e7c010a38d2e123]
	I0116 03:49:00.835378  507889 ssh_runner.go:195] Run: which crictl
	I0116 03:49:00.842204  507889 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0116 03:49:00.842299  507889 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0116 03:49:00.889511  507889 cri.go:89] found id: "1438a3832328a03b94c3d408a2e332c0fb0adab915a9d4f26994e84c0509fac9"
	I0116 03:49:00.889541  507889 cri.go:89] found id: ""
	I0116 03:49:00.889551  507889 logs.go:284] 1 containers: [1438a3832328a03b94c3d408a2e332c0fb0adab915a9d4f26994e84c0509fac9]
	I0116 03:49:00.889620  507889 ssh_runner.go:195] Run: which crictl
	I0116 03:49:00.894964  507889 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0116 03:49:00.895059  507889 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0116 03:49:00.937187  507889 cri.go:89] found id: ""
	I0116 03:49:00.937221  507889 logs.go:284] 0 containers: []
	W0116 03:49:00.937237  507889 logs.go:286] No container was found matching "kindnet"
	I0116 03:49:00.937246  507889 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0116 03:49:00.937313  507889 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0116 03:49:00.977711  507889 cri.go:89] found id: "33ba3a03d878abc86a194defd715bb61a0066e49063e3823002bd3ec421da138"
	I0116 03:49:00.977740  507889 cri.go:89] found id: "a4b27881ef90cea325e8f306fac88a76052eec15a693c291288664b8f4ebcc2a"
	I0116 03:49:00.977748  507889 cri.go:89] found id: ""
	I0116 03:49:00.977756  507889 logs.go:284] 2 containers: [33ba3a03d878abc86a194defd715bb61a0066e49063e3823002bd3ec421da138 a4b27881ef90cea325e8f306fac88a76052eec15a693c291288664b8f4ebcc2a]
	I0116 03:49:00.977834  507889 ssh_runner.go:195] Run: which crictl
	I0116 03:49:00.982886  507889 ssh_runner.go:195] Run: which crictl
	I0116 03:49:00.988008  507889 logs.go:123] Gathering logs for describe nodes ...
	I0116 03:49:00.988061  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0116 03:49:01.115755  507889 logs.go:123] Gathering logs for dmesg ...
	I0116 03:49:01.115791  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0116 03:49:01.131706  507889 logs.go:123] Gathering logs for etcd [e2758ac4468b10765b35629ffc54228655bc18f79a654cc03e91929ebdc3cf90] ...
	I0116 03:49:01.131748  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e2758ac4468b10765b35629ffc54228655bc18f79a654cc03e91929ebdc3cf90"
	I0116 03:49:01.186279  507889 logs.go:123] Gathering logs for kube-scheduler [e60387e0e2800d67c99eb3566f31ad675c0db6c22379fc28ee9a447af5f5023c] ...
	I0116 03:49:01.186324  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e60387e0e2800d67c99eb3566f31ad675c0db6c22379fc28ee9a447af5f5023c"
	I0116 03:49:01.231057  507889 logs.go:123] Gathering logs for kube-controller-manager [1438a3832328a03b94c3d408a2e332c0fb0adab915a9d4f26994e84c0509fac9] ...
	I0116 03:49:01.231100  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1438a3832328a03b94c3d408a2e332c0fb0adab915a9d4f26994e84c0509fac9"
	I0116 03:49:01.307541  507889 logs.go:123] Gathering logs for storage-provisioner [33ba3a03d878abc86a194defd715bb61a0066e49063e3823002bd3ec421da138] ...
	I0116 03:49:01.307586  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33ba3a03d878abc86a194defd715bb61a0066e49063e3823002bd3ec421da138"
	I0116 03:49:01.356517  507889 logs.go:123] Gathering logs for storage-provisioner [a4b27881ef90cea325e8f306fac88a76052eec15a693c291288664b8f4ebcc2a] ...
	I0116 03:49:01.356563  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a4b27881ef90cea325e8f306fac88a76052eec15a693c291288664b8f4ebcc2a"
	I0116 03:49:01.409790  507889 logs.go:123] Gathering logs for kube-apiserver [f9861ff0fbab72660648bfcb49b82810bada99f73a55cb0aa359ba114825b8f3] ...
	I0116 03:49:01.409846  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f9861ff0fbab72660648bfcb49b82810bada99f73a55cb0aa359ba114825b8f3"
	I0116 03:49:01.462029  507889 logs.go:123] Gathering logs for CRI-O ...
	I0116 03:49:01.462077  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0116 03:49:00.942100  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:49:02.942316  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:49:01.838933  507889 logs.go:123] Gathering logs for coredns [a07ae23e6e9e3c589264d0be7095896549a4b71fa2d0b06805a3b1b3bea16f6a] ...
	I0116 03:49:01.838999  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a07ae23e6e9e3c589264d0be7095896549a4b71fa2d0b06805a3b1b3bea16f6a"
	I0116 03:49:01.884022  507889 logs.go:123] Gathering logs for kube-proxy [44f71a7069827d97c7326cbd36638ec36ff39cbbe0a1e4e61e7c010a38d2e123] ...
	I0116 03:49:01.884075  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44f71a7069827d97c7326cbd36638ec36ff39cbbe0a1e4e61e7c010a38d2e123"
	I0116 03:49:01.930032  507889 logs.go:123] Gathering logs for container status ...
	I0116 03:49:01.930090  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0116 03:49:01.998827  507889 logs.go:123] Gathering logs for kubelet ...
	I0116 03:49:01.998863  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0116 03:49:04.573529  507889 system_pods.go:59] 8 kube-system pods found
	I0116 03:49:04.573571  507889 system_pods.go:61] "coredns-5dd5756b68-pmx8n" [9d3c9941-8938-4d4a-b6d8-9b542ca6f1ca] Running
	I0116 03:49:04.573579  507889 system_pods.go:61] "etcd-default-k8s-diff-port-434445" [a2327159-d034-4f62-a8f5-14062a41c75d] Running
	I0116 03:49:04.573587  507889 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-434445" [75c5fc5e-86b1-42cf-bb5c-8a1f8b4e13db] Running
	I0116 03:49:04.573594  507889 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-434445" [2568dd4f-124d-4c4f-8651-3b89b2b54983] Running
	I0116 03:49:04.573600  507889 system_pods.go:61] "kube-proxy-dcbqg" [eba1f9bf-6aa7-40cd-b57c-745c2d0cc414] Running
	I0116 03:49:04.573607  507889 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-434445" [479952a1-8c3d-49b4-bd22-fef988e6b83d] Running
	I0116 03:49:04.573617  507889 system_pods.go:61] "metrics-server-57f55c9bc5-894n2" [46e4892a-d026-4a9d-88bc-128e92848808] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:49:04.573626  507889 system_pods.go:61] "storage-provisioner" [16fd4585-3d75-40c3-a28d-4134375f4e3d] Running
	I0116 03:49:04.573638  507889 system_pods.go:74] duration metric: took 4.001961367s to wait for pod list to return data ...
	I0116 03:49:04.573657  507889 default_sa.go:34] waiting for default service account to be created ...
	I0116 03:49:04.577012  507889 default_sa.go:45] found service account: "default"
	I0116 03:49:04.577041  507889 default_sa.go:55] duration metric: took 3.376395ms for default service account to be created ...
	I0116 03:49:04.577051  507889 system_pods.go:116] waiting for k8s-apps to be running ...
	I0116 03:49:04.583833  507889 system_pods.go:86] 8 kube-system pods found
	I0116 03:49:04.583880  507889 system_pods.go:89] "coredns-5dd5756b68-pmx8n" [9d3c9941-8938-4d4a-b6d8-9b542ca6f1ca] Running
	I0116 03:49:04.583890  507889 system_pods.go:89] "etcd-default-k8s-diff-port-434445" [a2327159-d034-4f62-a8f5-14062a41c75d] Running
	I0116 03:49:04.583898  507889 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-434445" [75c5fc5e-86b1-42cf-bb5c-8a1f8b4e13db] Running
	I0116 03:49:04.583905  507889 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-434445" [2568dd4f-124d-4c4f-8651-3b89b2b54983] Running
	I0116 03:49:04.583911  507889 system_pods.go:89] "kube-proxy-dcbqg" [eba1f9bf-6aa7-40cd-b57c-745c2d0cc414] Running
	I0116 03:49:04.583918  507889 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-434445" [479952a1-8c3d-49b4-bd22-fef988e6b83d] Running
	I0116 03:49:04.583928  507889 system_pods.go:89] "metrics-server-57f55c9bc5-894n2" [46e4892a-d026-4a9d-88bc-128e92848808] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:49:04.583936  507889 system_pods.go:89] "storage-provisioner" [16fd4585-3d75-40c3-a28d-4134375f4e3d] Running
	I0116 03:49:04.583950  507889 system_pods.go:126] duration metric: took 6.89136ms to wait for k8s-apps to be running ...
	I0116 03:49:04.583964  507889 system_svc.go:44] waiting for kubelet service to be running ....
	I0116 03:49:04.584016  507889 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 03:49:04.600209  507889 system_svc.go:56] duration metric: took 16.229333ms WaitForService to wait for kubelet.
	I0116 03:49:04.600252  507889 kubeadm.go:581] duration metric: took 4m21.589410808s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0116 03:49:04.600285  507889 node_conditions.go:102] verifying NodePressure condition ...
	I0116 03:49:04.603774  507889 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 03:49:04.603803  507889 node_conditions.go:123] node cpu capacity is 2
	I0116 03:49:04.603815  507889 node_conditions.go:105] duration metric: took 3.52526ms to run NodePressure ...
	I0116 03:49:04.603829  507889 start.go:228] waiting for startup goroutines ...
	I0116 03:49:04.603836  507889 start.go:233] waiting for cluster config update ...
	I0116 03:49:04.603849  507889 start.go:242] writing updated cluster config ...
	I0116 03:49:04.604185  507889 ssh_runner.go:195] Run: rm -f paused
	I0116 03:49:04.658922  507889 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I0116 03:49:04.661265  507889 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-434445" cluster and "default" namespace by default
	I0116 03:49:01.367935  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:49:03.867391  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:49:05.867519  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:49:05.440602  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:49:07.441041  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:49:08.434235  507510 pod_ready.go:81] duration metric: took 4m0.001038173s waiting for pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace to be "Ready" ...
	E0116 03:49:08.434278  507510 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace to be "Ready" (will not retry!)
	I0116 03:49:08.434304  507510 pod_ready.go:38] duration metric: took 4m1.20014772s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 03:49:08.434338  507510 kubeadm.go:640] restartCluster took 5m11.767236835s
	W0116 03:49:08.434423  507510 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0116 03:49:08.434463  507510 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0116 03:49:07.868307  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:49:10.367347  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:49:15.339252  507510 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (6.904753674s)
	I0116 03:49:15.339341  507510 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 03:49:15.355684  507510 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0116 03:49:15.371377  507510 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0116 03:49:15.393609  507510 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0116 03:49:15.393674  507510 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0116 03:49:15.478382  507510 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0116 03:49:15.478464  507510 kubeadm.go:322] [preflight] Running pre-flight checks
	I0116 03:49:15.663487  507510 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0116 03:49:15.663663  507510 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0116 03:49:15.663803  507510 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0116 03:49:15.940677  507510 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0116 03:49:15.940857  507510 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0116 03:49:15.949553  507510 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0116 03:49:16.075111  507510 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0116 03:49:12.867512  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:49:13.859320  507257 pod_ready.go:81] duration metric: took 4m0.000451049s waiting for pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace to be "Ready" ...
	E0116 03:49:13.859353  507257 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace to be "Ready" (will not retry!)
	I0116 03:49:13.859375  507257 pod_ready.go:38] duration metric: took 4m12.063407854s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 03:49:13.859418  507257 kubeadm.go:640] restartCluster took 4m32.047022773s
	W0116 03:49:13.859484  507257 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0116 03:49:13.859513  507257 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0116 03:49:16.077099  507510 out.go:204]   - Generating certificates and keys ...
	I0116 03:49:16.077224  507510 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0116 03:49:16.077305  507510 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0116 03:49:16.077410  507510 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0116 03:49:16.077504  507510 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0116 03:49:16.077617  507510 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0116 03:49:16.077745  507510 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0116 03:49:16.078085  507510 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0116 03:49:16.078639  507510 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0116 03:49:16.079112  507510 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0116 03:49:16.079719  507510 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0116 03:49:16.079935  507510 kubeadm.go:322] [certs] Using the existing "sa" key
	I0116 03:49:16.080015  507510 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0116 03:49:16.246902  507510 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0116 03:49:16.332722  507510 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0116 03:49:16.534277  507510 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0116 03:49:16.908642  507510 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0116 03:49:16.909711  507510 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0116 03:49:16.911960  507510 out.go:204]   - Booting up control plane ...
	I0116 03:49:16.912103  507510 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0116 03:49:16.923200  507510 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0116 03:49:16.924797  507510 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0116 03:49:16.926738  507510 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0116 03:49:16.937544  507510 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0116 03:49:27.943253  507510 kubeadm.go:322] [apiclient] All control plane components are healthy after 11.005405 seconds
	I0116 03:49:27.943474  507510 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0116 03:49:27.970644  507510 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I0116 03:49:28.500660  507510 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0116 03:49:28.500847  507510 kubeadm.go:322] [mark-control-plane] Marking the node old-k8s-version-696770 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0116 03:49:29.015036  507510 kubeadm.go:322] [bootstrap-token] Using token: nr2yh0.22ni19zxk2s7hw9l
	I0116 03:49:28.504409  507257 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (14.644866985s)
	I0116 03:49:28.504498  507257 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 03:49:28.519788  507257 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0116 03:49:28.531667  507257 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0116 03:49:28.543058  507257 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0116 03:49:28.543113  507257 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0116 03:49:28.603369  507257 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0116 03:49:28.603521  507257 kubeadm.go:322] [preflight] Running pre-flight checks
	I0116 03:49:28.784258  507257 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0116 03:49:28.784384  507257 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0116 03:49:28.784491  507257 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0116 03:49:29.068390  507257 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0116 03:49:29.017077  507510 out.go:204]   - Configuring RBAC rules ...
	I0116 03:49:29.017276  507510 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0116 03:49:29.044200  507510 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0116 03:49:29.049807  507510 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0116 03:49:29.054441  507510 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0116 03:49:29.057939  507510 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0116 03:49:29.142810  507510 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0116 03:49:29.439580  507510 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0116 03:49:29.441665  507510 kubeadm.go:322] 
	I0116 03:49:29.441736  507510 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0116 03:49:29.441741  507510 kubeadm.go:322] 
	I0116 03:49:29.441863  507510 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0116 03:49:29.441898  507510 kubeadm.go:322] 
	I0116 03:49:29.441932  507510 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0116 03:49:29.441999  507510 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0116 03:49:29.442057  507510 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0116 03:49:29.442099  507510 kubeadm.go:322] 
	I0116 03:49:29.442200  507510 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0116 03:49:29.442306  507510 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0116 03:49:29.442414  507510 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0116 03:49:29.442429  507510 kubeadm.go:322] 
	I0116 03:49:29.442566  507510 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities 
	I0116 03:49:29.442689  507510 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0116 03:49:29.442701  507510 kubeadm.go:322] 
	I0116 03:49:29.442813  507510 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token nr2yh0.22ni19zxk2s7hw9l \
	I0116 03:49:29.442967  507510 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:ab75761e1119a1709a216b682cd7b1dcce0641115df5f236b54b16e4f66aa044 \
	I0116 03:49:29.443008  507510 kubeadm.go:322]     --control-plane 	  
	I0116 03:49:29.443024  507510 kubeadm.go:322] 
	I0116 03:49:29.443147  507510 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0116 03:49:29.443159  507510 kubeadm.go:322] 
	I0116 03:49:29.443285  507510 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token nr2yh0.22ni19zxk2s7hw9l \
	I0116 03:49:29.443414  507510 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:ab75761e1119a1709a216b682cd7b1dcce0641115df5f236b54b16e4f66aa044 
	I0116 03:49:29.444142  507510 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0116 03:49:29.444278  507510 cni.go:84] Creating CNI manager for ""
	I0116 03:49:29.444302  507510 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 03:49:29.446569  507510 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0116 03:49:29.447957  507510 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0116 03:49:29.457418  507510 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0116 03:49:29.478015  507510 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0116 03:49:29.478130  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:29.478135  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=6e8fa5f64d0e7272be43ff25ed3826261f0a2578 minikube.k8s.io/name=old-k8s-version-696770 minikube.k8s.io/updated_at=2024_01_16T03_49_29_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:29.070681  507257 out.go:204]   - Generating certificates and keys ...
	I0116 03:49:29.070805  507257 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0116 03:49:29.070882  507257 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0116 03:49:29.071007  507257 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0116 03:49:29.071108  507257 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0116 03:49:29.071243  507257 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0116 03:49:29.071320  507257 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0116 03:49:29.071422  507257 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0116 03:49:29.071497  507257 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0116 03:49:29.071928  507257 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0116 03:49:29.074454  507257 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0116 03:49:29.076202  507257 kubeadm.go:322] [certs] Using the existing "sa" key
	I0116 03:49:29.076435  507257 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0116 03:49:29.360527  507257 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0116 03:49:29.779361  507257 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0116 03:49:29.976749  507257 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0116 03:49:30.075605  507257 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0116 03:49:30.076375  507257 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0116 03:49:30.079235  507257 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0116 03:49:30.081497  507257 out.go:204]   - Booting up control plane ...
	I0116 03:49:30.081645  507257 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0116 03:49:30.082340  507257 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0116 03:49:30.083349  507257 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0116 03:49:30.103660  507257 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0116 03:49:30.104863  507257 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0116 03:49:30.104924  507257 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0116 03:49:30.229980  507257 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0116 03:49:29.724417  507510 ops.go:34] apiserver oom_adj: -16
	I0116 03:49:29.724549  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:30.224988  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:30.725451  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:31.225287  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:31.724689  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:32.224984  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:32.724769  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:33.225547  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:33.724874  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:34.225301  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:34.725134  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:35.224977  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:35.724998  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:36.225495  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:36.725043  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:37.224700  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:37.725397  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:38.225311  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:38.725308  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:39.224885  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:38.732431  507257 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.502537 seconds
	I0116 03:49:38.732591  507257 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0116 03:49:38.766319  507257 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0116 03:49:39.312926  507257 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0116 03:49:39.313225  507257 kubeadm.go:322] [mark-control-plane] Marking the node embed-certs-615980 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0116 03:49:39.836927  507257 kubeadm.go:322] [bootstrap-token] Using token: 8bzdm1.4lwyoxck7xjn6vqr
	I0116 03:49:39.838931  507257 out.go:204]   - Configuring RBAC rules ...
	I0116 03:49:39.839093  507257 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0116 03:49:39.850909  507257 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0116 03:49:39.873417  507257 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0116 03:49:39.879093  507257 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0116 03:49:39.883914  507257 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0116 03:49:39.889130  507257 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0116 03:49:39.910444  507257 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0116 03:49:40.235572  507257 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0116 03:49:40.334951  507257 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0116 03:49:40.335000  507257 kubeadm.go:322] 
	I0116 03:49:40.335092  507257 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0116 03:49:40.335103  507257 kubeadm.go:322] 
	I0116 03:49:40.335212  507257 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0116 03:49:40.335222  507257 kubeadm.go:322] 
	I0116 03:49:40.335266  507257 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0116 03:49:40.335353  507257 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0116 03:49:40.335421  507257 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0116 03:49:40.335430  507257 kubeadm.go:322] 
	I0116 03:49:40.335504  507257 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0116 03:49:40.335513  507257 kubeadm.go:322] 
	I0116 03:49:40.335598  507257 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0116 03:49:40.335618  507257 kubeadm.go:322] 
	I0116 03:49:40.335690  507257 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0116 03:49:40.335793  507257 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0116 03:49:40.335891  507257 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0116 03:49:40.335904  507257 kubeadm.go:322] 
	I0116 03:49:40.336008  507257 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0116 03:49:40.336128  507257 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0116 03:49:40.336143  507257 kubeadm.go:322] 
	I0116 03:49:40.336262  507257 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 8bzdm1.4lwyoxck7xjn6vqr \
	I0116 03:49:40.336427  507257 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:ab75761e1119a1709a216b682cd7b1dcce0641115df5f236b54b16e4f66aa044 \
	I0116 03:49:40.336456  507257 kubeadm.go:322] 	--control-plane 
	I0116 03:49:40.336463  507257 kubeadm.go:322] 
	I0116 03:49:40.336594  507257 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0116 03:49:40.336611  507257 kubeadm.go:322] 
	I0116 03:49:40.336744  507257 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 8bzdm1.4lwyoxck7xjn6vqr \
	I0116 03:49:40.336876  507257 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:ab75761e1119a1709a216b682cd7b1dcce0641115df5f236b54b16e4f66aa044 
	I0116 03:49:40.337377  507257 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0116 03:49:40.337421  507257 cni.go:84] Creating CNI manager for ""
	I0116 03:49:40.337432  507257 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 03:49:40.340415  507257 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0116 03:49:40.341952  507257 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0116 03:49:40.376620  507257 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0116 03:49:40.459091  507257 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0116 03:49:40.459177  507257 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:40.459233  507257 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=6e8fa5f64d0e7272be43ff25ed3826261f0a2578 minikube.k8s.io/name=embed-certs-615980 minikube.k8s.io/updated_at=2024_01_16T03_49_40_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:40.524693  507257 ops.go:34] apiserver oom_adj: -16
	I0116 03:49:40.917890  507257 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:39.725272  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:40.225380  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:40.725272  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:41.225258  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:41.725525  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:42.225270  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:42.725463  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:43.224674  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:43.724904  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:44.224946  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:44.725197  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:44.843354  507510 kubeadm.go:1088] duration metric: took 15.365308355s to wait for elevateKubeSystemPrivileges.
	I0116 03:49:44.843465  507510 kubeadm.go:406] StartCluster complete in 5m48.250275121s
	I0116 03:49:44.843545  507510 settings.go:142] acquiring lock: {Name:mk3ded99c06d285b174e76622bb0741c8dd2d8fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:49:44.843708  507510 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17965-468241/kubeconfig
	I0116 03:49:44.846444  507510 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17965-468241/kubeconfig: {Name:mkac3c63bbcba9b4fa1bea3480797757d703fb9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:49:44.846814  507510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0116 03:49:44.846959  507510 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0116 03:49:44.847043  507510 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-696770"
	I0116 03:49:44.847067  507510 addons.go:234] Setting addon storage-provisioner=true in "old-k8s-version-696770"
	I0116 03:49:44.847065  507510 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-696770"
	W0116 03:49:44.847076  507510 addons.go:243] addon storage-provisioner should already be in state true
	I0116 03:49:44.847079  507510 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-696770"
	I0116 03:49:44.847099  507510 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-696770"
	I0116 03:49:44.847108  507510 addons.go:234] Setting addon metrics-server=true in "old-k8s-version-696770"
	W0116 03:49:44.847130  507510 addons.go:243] addon metrics-server should already be in state true
	I0116 03:49:44.847152  507510 host.go:66] Checking if "old-k8s-version-696770" exists ...
	I0116 03:49:44.847087  507510 config.go:182] Loaded profile config "old-k8s-version-696770": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0116 03:49:44.847178  507510 host.go:66] Checking if "old-k8s-version-696770" exists ...
	I0116 03:49:44.847548  507510 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:49:44.847568  507510 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:49:44.847579  507510 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:49:44.847594  507510 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:49:44.847605  507510 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:49:44.847632  507510 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:49:44.865585  507510 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36045
	I0116 03:49:44.865597  507510 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45289
	I0116 03:49:44.865592  507510 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39579
	I0116 03:49:44.866119  507510 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:49:44.866200  507510 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:49:44.866352  507510 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:49:44.867018  507510 main.go:141] libmachine: Using API Version  1
	I0116 03:49:44.867040  507510 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:49:44.867043  507510 main.go:141] libmachine: Using API Version  1
	I0116 03:49:44.867051  507510 main.go:141] libmachine: Using API Version  1
	I0116 03:49:44.867071  507510 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:49:44.867091  507510 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:49:44.867481  507510 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:49:44.867557  507510 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:49:44.867711  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetState
	I0116 03:49:44.867929  507510 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:49:44.868184  507510 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:49:44.868215  507510 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:49:44.868486  507510 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:49:44.868519  507510 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:49:44.872747  507510 addons.go:234] Setting addon default-storageclass=true in "old-k8s-version-696770"
	W0116 03:49:44.872781  507510 addons.go:243] addon default-storageclass should already be in state true
	I0116 03:49:44.872816  507510 host.go:66] Checking if "old-k8s-version-696770" exists ...
	I0116 03:49:44.873264  507510 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:49:44.873308  507510 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:49:44.888049  507510 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45943
	I0116 03:49:44.890481  507510 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37259
	I0116 03:49:44.890990  507510 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:49:44.891285  507510 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:49:44.891567  507510 main.go:141] libmachine: Using API Version  1
	I0116 03:49:44.891582  507510 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:49:44.891846  507510 main.go:141] libmachine: Using API Version  1
	I0116 03:49:44.891865  507510 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:49:44.892307  507510 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:49:44.892510  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetState
	I0116 03:49:44.892575  507510 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:49:44.892760  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetState
	I0116 03:49:44.894812  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .DriverName
	I0116 03:49:44.895060  507510 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37701
	I0116 03:49:44.896571  507510 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0116 03:49:44.895272  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .DriverName
	I0116 03:49:44.895678  507510 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:49:44.898051  507510 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0116 03:49:44.898074  507510 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0116 03:49:44.899552  507510 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 03:49:44.897299  507510 main.go:141] libmachine: Using API Version  1
	I0116 03:49:44.898096  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHHostname
	I0116 03:49:44.901091  507510 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:49:44.901216  507510 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0116 03:49:44.901234  507510 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0116 03:49:44.901256  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHHostname
	I0116 03:49:44.902226  507510 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:49:44.902866  507510 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:49:44.902908  507510 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:49:44.905915  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:49:44.906022  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:49:44.906456  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:20:1a", ip: ""} in network mk-old-k8s-version-696770: {Iface:virbr3 ExpiryTime:2024-01-16 04:43:38 +0000 UTC Type:0 Mac:52:54:00:37:20:1a Iaid: IPaddr:192.168.61.167 Prefix:24 Hostname:old-k8s-version-696770 Clientid:01:52:54:00:37:20:1a}
	I0116 03:49:44.906482  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined IP address 192.168.61.167 and MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:49:44.906775  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHPort
	I0116 03:49:44.906851  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:20:1a", ip: ""} in network mk-old-k8s-version-696770: {Iface:virbr3 ExpiryTime:2024-01-16 04:43:38 +0000 UTC Type:0 Mac:52:54:00:37:20:1a Iaid: IPaddr:192.168.61.167 Prefix:24 Hostname:old-k8s-version-696770 Clientid:01:52:54:00:37:20:1a}
	I0116 03:49:44.906892  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHPort
	I0116 03:49:44.906941  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined IP address 192.168.61.167 and MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:49:44.907116  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHKeyPath
	I0116 03:49:44.907254  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHKeyPath
	I0116 03:49:44.907324  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHUsername
	I0116 03:49:44.907416  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHUsername
	I0116 03:49:44.907471  507510 sshutil.go:53] new ssh client: &{IP:192.168.61.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/old-k8s-version-696770/id_rsa Username:docker}
	I0116 03:49:44.908078  507510 sshutil.go:53] new ssh client: &{IP:192.168.61.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/old-k8s-version-696770/id_rsa Username:docker}
	I0116 03:49:44.925689  507510 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38987
	I0116 03:49:44.926190  507510 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:49:44.926847  507510 main.go:141] libmachine: Using API Version  1
	I0116 03:49:44.926870  507510 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:49:44.927322  507510 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:49:44.927545  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetState
	I0116 03:49:44.929553  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .DriverName
	I0116 03:49:44.930008  507510 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0116 03:49:44.930027  507510 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0116 03:49:44.930049  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHHostname
	I0116 03:49:44.933353  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:49:44.933768  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:20:1a", ip: ""} in network mk-old-k8s-version-696770: {Iface:virbr3 ExpiryTime:2024-01-16 04:43:38 +0000 UTC Type:0 Mac:52:54:00:37:20:1a Iaid: IPaddr:192.168.61.167 Prefix:24 Hostname:old-k8s-version-696770 Clientid:01:52:54:00:37:20:1a}
	I0116 03:49:44.933799  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined IP address 192.168.61.167 and MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:49:44.933975  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHPort
	I0116 03:49:44.934184  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHKeyPath
	I0116 03:49:44.934277  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHUsername
	I0116 03:49:44.934374  507510 sshutil.go:53] new ssh client: &{IP:192.168.61.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/old-k8s-version-696770/id_rsa Username:docker}
	I0116 03:49:45.044743  507510 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0116 03:49:45.073179  507510 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0116 03:49:45.073426  507510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0116 03:49:45.095360  507510 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0116 03:49:45.095383  507510 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0116 03:49:45.162632  507510 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0116 03:49:45.162661  507510 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0116 03:49:45.252628  507510 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0116 03:49:45.252665  507510 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0116 03:49:45.325535  507510 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0116 03:49:45.533499  507510 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-696770" context rescaled to 1 replicas
	I0116 03:49:45.533553  507510 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.167 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0116 03:49:45.536655  507510 out.go:177] * Verifying Kubernetes components...
	I0116 03:49:41.418664  507257 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:41.918459  507257 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:42.418296  507257 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:42.918119  507257 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:43.418565  507257 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:43.918746  507257 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:44.418812  507257 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:44.918603  507257 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:45.418865  507257 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:45.918104  507257 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:45.538565  507510 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 03:49:46.390448  507510 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.3456663s)
	I0116 03:49:46.390513  507510 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.31729292s)
	I0116 03:49:46.390536  507510 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.317072847s)
	I0116 03:49:46.390556  507510 main.go:141] libmachine: Making call to close driver server
	I0116 03:49:46.390520  507510 main.go:141] libmachine: Making call to close driver server
	I0116 03:49:46.390573  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .Close
	I0116 03:49:46.390595  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .Close
	I0116 03:49:46.390559  507510 start.go:929] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I0116 03:49:46.391000  507510 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:49:46.391023  507510 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:49:46.391035  507510 main.go:141] libmachine: Making call to close driver server
	I0116 03:49:46.391040  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | Closing plugin on server side
	I0116 03:49:46.391006  507510 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:49:46.391059  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | Closing plugin on server side
	I0116 03:49:46.391062  507510 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:49:46.391044  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .Close
	I0116 03:49:46.391075  507510 main.go:141] libmachine: Making call to close driver server
	I0116 03:49:46.391083  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .Close
	I0116 03:49:46.391314  507510 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:49:46.391332  507510 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:49:46.391594  507510 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:49:46.391625  507510 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:49:46.465666  507510 main.go:141] libmachine: Making call to close driver server
	I0116 03:49:46.465688  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .Close
	I0116 03:49:46.466107  507510 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:49:46.466127  507510 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:49:46.597926  507510 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.05930194s)
	I0116 03:49:46.597988  507510 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-696770" to be "Ready" ...
	I0116 03:49:46.597925  507510 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.272324444s)
	I0116 03:49:46.598099  507510 main.go:141] libmachine: Making call to close driver server
	I0116 03:49:46.598123  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .Close
	I0116 03:49:46.598503  507510 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:49:46.598527  507510 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:49:46.598531  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | Closing plugin on server side
	I0116 03:49:46.598539  507510 main.go:141] libmachine: Making call to close driver server
	I0116 03:49:46.598549  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .Close
	I0116 03:49:46.598884  507510 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:49:46.598903  507510 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:49:46.598917  507510 addons.go:470] Verifying addon metrics-server=true in "old-k8s-version-696770"
	I0116 03:49:46.600845  507510 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0116 03:49:46.602484  507510 addons.go:505] enable addons completed in 1.755527621s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0116 03:49:46.612929  507510 node_ready.go:49] node "old-k8s-version-696770" has status "Ready":"True"
	I0116 03:49:46.612962  507510 node_ready.go:38] duration metric: took 14.959317ms waiting for node "old-k8s-version-696770" to be "Ready" ...
	I0116 03:49:46.612975  507510 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 03:49:46.616466  507510 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-h85tj" in "kube-system" namespace to be "Ready" ...
	I0116 03:49:48.628130  507510 pod_ready.go:102] pod "coredns-5644d7b6d9-h85tj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:49:46.418268  507257 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:46.917976  507257 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:47.418645  507257 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:47.917927  507257 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:48.417920  507257 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:48.917939  507257 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:49.418387  507257 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:49.918203  507257 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:50.417930  507257 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:50.918518  507257 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:51.418036  507257 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:51.917981  507257 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:52.418293  507257 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:52.635961  507257 kubeadm.go:1088] duration metric: took 12.176857981s to wait for elevateKubeSystemPrivileges.
	I0116 03:49:52.636014  507257 kubeadm.go:406] StartCluster complete in 5m10.892359223s
	I0116 03:49:52.636054  507257 settings.go:142] acquiring lock: {Name:mk3ded99c06d285b174e76622bb0741c8dd2d8fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:49:52.636186  507257 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17965-468241/kubeconfig
	I0116 03:49:52.638885  507257 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17965-468241/kubeconfig: {Name:mkac3c63bbcba9b4fa1bea3480797757d703fb9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:49:52.639229  507257 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0116 03:49:52.639345  507257 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0116 03:49:52.639439  507257 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-615980"
	I0116 03:49:52.639461  507257 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-615980"
	I0116 03:49:52.639458  507257 addons.go:69] Setting default-storageclass=true in profile "embed-certs-615980"
	W0116 03:49:52.639469  507257 addons.go:243] addon storage-provisioner should already be in state true
	I0116 03:49:52.639482  507257 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-615980"
	I0116 03:49:52.639504  507257 config.go:182] Loaded profile config "embed-certs-615980": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 03:49:52.639541  507257 host.go:66] Checking if "embed-certs-615980" exists ...
	I0116 03:49:52.639562  507257 addons.go:69] Setting metrics-server=true in profile "embed-certs-615980"
	I0116 03:49:52.639579  507257 addons.go:234] Setting addon metrics-server=true in "embed-certs-615980"
	W0116 03:49:52.639591  507257 addons.go:243] addon metrics-server should already be in state true
	I0116 03:49:52.639639  507257 host.go:66] Checking if "embed-certs-615980" exists ...
	I0116 03:49:52.639965  507257 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:49:52.639984  507257 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:49:52.640007  507257 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:49:52.640023  507257 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:49:52.640084  507257 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:49:52.640118  507257 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:49:52.660468  507257 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36595
	I0116 03:49:52.660653  507257 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44367
	I0116 03:49:52.661058  507257 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:49:52.661184  507257 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:49:52.661685  507257 main.go:141] libmachine: Using API Version  1
	I0116 03:49:52.661709  507257 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:49:52.661768  507257 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40717
	I0116 03:49:52.661855  507257 main.go:141] libmachine: Using API Version  1
	I0116 03:49:52.661871  507257 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:49:52.662141  507257 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:49:52.662207  507257 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:49:52.662425  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetState
	I0116 03:49:52.662480  507257 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:49:52.662858  507257 main.go:141] libmachine: Using API Version  1
	I0116 03:49:52.662875  507257 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:49:52.663301  507257 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:49:52.663337  507257 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:49:52.663413  507257 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:49:52.663956  507257 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:49:52.663985  507257 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:49:52.666163  507257 addons.go:234] Setting addon default-storageclass=true in "embed-certs-615980"
	W0116 03:49:52.666190  507257 addons.go:243] addon default-storageclass should already be in state true
	I0116 03:49:52.666224  507257 host.go:66] Checking if "embed-certs-615980" exists ...
	I0116 03:49:52.666630  507257 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:49:52.666672  507257 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:49:52.682228  507257 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37799
	I0116 03:49:52.682743  507257 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:49:52.683402  507257 main.go:141] libmachine: Using API Version  1
	I0116 03:49:52.683425  507257 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:49:52.683719  507257 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36773
	I0116 03:49:52.683893  507257 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:49:52.684125  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetState
	I0116 03:49:52.684589  507257 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:49:52.685108  507257 main.go:141] libmachine: Using API Version  1
	I0116 03:49:52.685128  507257 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:49:52.685607  507257 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:49:52.685627  507257 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42767
	I0116 03:49:52.686073  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetState
	I0116 03:49:52.686329  507257 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:49:52.686781  507257 main.go:141] libmachine: Using API Version  1
	I0116 03:49:52.686804  507257 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:49:52.687167  507257 main.go:141] libmachine: (embed-certs-615980) Calling .DriverName
	I0116 03:49:52.687213  507257 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:49:52.689840  507257 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 03:49:52.687751  507257 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:49:52.689319  507257 main.go:141] libmachine: (embed-certs-615980) Calling .DriverName
	I0116 03:49:52.691584  507257 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0116 03:49:52.691595  507257 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0116 03:49:52.691610  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHHostname
	I0116 03:49:52.691655  507257 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:49:52.693170  507257 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0116 03:49:52.694465  507257 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0116 03:49:52.694478  507257 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0116 03:49:52.694495  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHHostname
	I0116 03:49:52.705398  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:49:52.705440  507257 main.go:141] libmachine: (embed-certs-615980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:a6:40", ip: ""} in network mk-embed-certs-615980: {Iface:virbr4 ExpiryTime:2024-01-16 04:44:24 +0000 UTC Type:0 Mac:52:54:00:d4:a6:40 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-615980 Clientid:01:52:54:00:d4:a6:40}
	I0116 03:49:52.705440  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:49:52.705469  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHPort
	I0116 03:49:52.705475  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined IP address 192.168.72.159 and MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:49:52.705501  507257 main.go:141] libmachine: (embed-certs-615980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:a6:40", ip: ""} in network mk-embed-certs-615980: {Iface:virbr4 ExpiryTime:2024-01-16 04:44:24 +0000 UTC Type:0 Mac:52:54:00:d4:a6:40 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-615980 Clientid:01:52:54:00:d4:a6:40}
	I0116 03:49:52.705516  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined IP address 192.168.72.159 and MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:49:52.705403  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHPort
	I0116 03:49:52.705751  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHKeyPath
	I0116 03:49:52.705813  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHKeyPath
	I0116 03:49:52.705956  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHUsername
	I0116 03:49:52.706078  507257 sshutil.go:53] new ssh client: &{IP:192.168.72.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/embed-certs-615980/id_rsa Username:docker}
	I0116 03:49:52.706839  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHUsername
	I0116 03:49:52.707045  507257 sshutil.go:53] new ssh client: &{IP:192.168.72.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/embed-certs-615980/id_rsa Username:docker}
	I0116 03:49:52.713247  507257 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33775
	I0116 03:49:52.714047  507257 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:49:52.714725  507257 main.go:141] libmachine: Using API Version  1
	I0116 03:49:52.714742  507257 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:49:52.715212  507257 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:49:52.715442  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetState
	I0116 03:49:52.717568  507257 main.go:141] libmachine: (embed-certs-615980) Calling .DriverName
	I0116 03:49:52.717813  507257 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0116 03:49:52.717824  507257 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0116 03:49:52.717839  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHHostname
	I0116 03:49:52.720720  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:49:52.721189  507257 main.go:141] libmachine: (embed-certs-615980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:a6:40", ip: ""} in network mk-embed-certs-615980: {Iface:virbr4 ExpiryTime:2024-01-16 04:44:24 +0000 UTC Type:0 Mac:52:54:00:d4:a6:40 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-615980 Clientid:01:52:54:00:d4:a6:40}
	I0116 03:49:52.721205  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined IP address 192.168.72.159 and MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:49:52.721414  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHPort
	I0116 03:49:52.721573  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHKeyPath
	I0116 03:49:52.721724  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHUsername
	I0116 03:49:52.721814  507257 sshutil.go:53] new ssh client: &{IP:192.168.72.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/embed-certs-615980/id_rsa Username:docker}
	I0116 03:49:52.899474  507257 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0116 03:49:52.971597  507257 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0116 03:49:52.971623  507257 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0116 03:49:52.971955  507257 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0116 03:49:53.029724  507257 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0116 03:49:53.051410  507257 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0116 03:49:53.051439  507257 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0116 03:49:53.121058  507257 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0116 03:49:53.121088  507257 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0116 03:49:53.179049  507257 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-615980" context rescaled to 1 replicas
	I0116 03:49:53.179098  507257 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.159 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0116 03:49:53.181191  507257 out.go:177] * Verifying Kubernetes components...
	I0116 03:49:50.633148  507510 pod_ready.go:92] pod "coredns-5644d7b6d9-h85tj" in "kube-system" namespace has status "Ready":"True"
	I0116 03:49:50.633179  507510 pod_ready.go:81] duration metric: took 4.016682348s waiting for pod "coredns-5644d7b6d9-h85tj" in "kube-system" namespace to be "Ready" ...
	I0116 03:49:50.633194  507510 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rc8xt" in "kube-system" namespace to be "Ready" ...
	I0116 03:49:50.648707  507510 pod_ready.go:92] pod "kube-proxy-rc8xt" in "kube-system" namespace has status "Ready":"True"
	I0116 03:49:50.648737  507510 pod_ready.go:81] duration metric: took 15.535257ms waiting for pod "kube-proxy-rc8xt" in "kube-system" namespace to be "Ready" ...
	I0116 03:49:50.648752  507510 pod_ready.go:38] duration metric: took 4.035762868s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 03:49:50.648770  507510 api_server.go:52] waiting for apiserver process to appear ...
	I0116 03:49:50.648842  507510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:49:50.665917  507510 api_server.go:72] duration metric: took 5.1323051s to wait for apiserver process to appear ...
	I0116 03:49:50.665954  507510 api_server.go:88] waiting for apiserver healthz status ...
	I0116 03:49:50.665982  507510 api_server.go:253] Checking apiserver healthz at https://192.168.61.167:8443/healthz ...
	I0116 03:49:50.672790  507510 api_server.go:279] https://192.168.61.167:8443/healthz returned 200:
	ok
	I0116 03:49:50.674024  507510 api_server.go:141] control plane version: v1.16.0
	I0116 03:49:50.674059  507510 api_server.go:131] duration metric: took 8.096153ms to wait for apiserver health ...
	I0116 03:49:50.674071  507510 system_pods.go:43] waiting for kube-system pods to appear ...
	I0116 03:49:50.677835  507510 system_pods.go:59] 4 kube-system pods found
	I0116 03:49:50.677871  507510 system_pods.go:61] "coredns-5644d7b6d9-h85tj" [6ad3270c-e6c5-4ced-800f-f6e7960097ac] Running
	I0116 03:49:50.677878  507510 system_pods.go:61] "kube-proxy-rc8xt" [433f07f2-79e8-48f6-945a-af3dc0060920] Running
	I0116 03:49:50.677887  507510 system_pods.go:61] "metrics-server-74d5856cc6-stvzf" [92ed6941-1071-4757-9279-144187442f64] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:49:50.677894  507510 system_pods.go:61] "storage-provisioner" [c45ac226-3063-4d53-8a3a-dccca6e8cade] Running
	I0116 03:49:50.677905  507510 system_pods.go:74] duration metric: took 3.826308ms to wait for pod list to return data ...
	I0116 03:49:50.677914  507510 default_sa.go:34] waiting for default service account to be created ...
	I0116 03:49:50.680932  507510 default_sa.go:45] found service account: "default"
	I0116 03:49:50.680964  507510 default_sa.go:55] duration metric: took 3.041693ms for default service account to be created ...
	I0116 03:49:50.680975  507510 system_pods.go:116] waiting for k8s-apps to be running ...
	I0116 03:49:50.684730  507510 system_pods.go:86] 4 kube-system pods found
	I0116 03:49:50.684759  507510 system_pods.go:89] "coredns-5644d7b6d9-h85tj" [6ad3270c-e6c5-4ced-800f-f6e7960097ac] Running
	I0116 03:49:50.684767  507510 system_pods.go:89] "kube-proxy-rc8xt" [433f07f2-79e8-48f6-945a-af3dc0060920] Running
	I0116 03:49:50.684778  507510 system_pods.go:89] "metrics-server-74d5856cc6-stvzf" [92ed6941-1071-4757-9279-144187442f64] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:49:50.684785  507510 system_pods.go:89] "storage-provisioner" [c45ac226-3063-4d53-8a3a-dccca6e8cade] Running
	I0116 03:49:50.684811  507510 retry.go:31] will retry after 238.551043ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0116 03:49:50.928725  507510 system_pods.go:86] 4 kube-system pods found
	I0116 03:49:50.928761  507510 system_pods.go:89] "coredns-5644d7b6d9-h85tj" [6ad3270c-e6c5-4ced-800f-f6e7960097ac] Running
	I0116 03:49:50.928768  507510 system_pods.go:89] "kube-proxy-rc8xt" [433f07f2-79e8-48f6-945a-af3dc0060920] Running
	I0116 03:49:50.928779  507510 system_pods.go:89] "metrics-server-74d5856cc6-stvzf" [92ed6941-1071-4757-9279-144187442f64] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:49:50.928786  507510 system_pods.go:89] "storage-provisioner" [c45ac226-3063-4d53-8a3a-dccca6e8cade] Running
	I0116 03:49:50.928816  507510 retry.go:31] will retry after 246.771125ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0116 03:49:51.180688  507510 system_pods.go:86] 4 kube-system pods found
	I0116 03:49:51.180727  507510 system_pods.go:89] "coredns-5644d7b6d9-h85tj" [6ad3270c-e6c5-4ced-800f-f6e7960097ac] Running
	I0116 03:49:51.180736  507510 system_pods.go:89] "kube-proxy-rc8xt" [433f07f2-79e8-48f6-945a-af3dc0060920] Running
	I0116 03:49:51.180747  507510 system_pods.go:89] "metrics-server-74d5856cc6-stvzf" [92ed6941-1071-4757-9279-144187442f64] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:49:51.180755  507510 system_pods.go:89] "storage-provisioner" [c45ac226-3063-4d53-8a3a-dccca6e8cade] Running
	I0116 03:49:51.180780  507510 retry.go:31] will retry after 439.966453ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0116 03:49:51.625927  507510 system_pods.go:86] 4 kube-system pods found
	I0116 03:49:51.625958  507510 system_pods.go:89] "coredns-5644d7b6d9-h85tj" [6ad3270c-e6c5-4ced-800f-f6e7960097ac] Running
	I0116 03:49:51.625964  507510 system_pods.go:89] "kube-proxy-rc8xt" [433f07f2-79e8-48f6-945a-af3dc0060920] Running
	I0116 03:49:51.625970  507510 system_pods.go:89] "metrics-server-74d5856cc6-stvzf" [92ed6941-1071-4757-9279-144187442f64] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:49:51.625975  507510 system_pods.go:89] "storage-provisioner" [c45ac226-3063-4d53-8a3a-dccca6e8cade] Running
	I0116 03:49:51.626001  507510 retry.go:31] will retry after 403.213781ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0116 03:49:52.035928  507510 system_pods.go:86] 4 kube-system pods found
	I0116 03:49:52.035994  507510 system_pods.go:89] "coredns-5644d7b6d9-h85tj" [6ad3270c-e6c5-4ced-800f-f6e7960097ac] Running
	I0116 03:49:52.036003  507510 system_pods.go:89] "kube-proxy-rc8xt" [433f07f2-79e8-48f6-945a-af3dc0060920] Running
	I0116 03:49:52.036014  507510 system_pods.go:89] "metrics-server-74d5856cc6-stvzf" [92ed6941-1071-4757-9279-144187442f64] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:49:52.036022  507510 system_pods.go:89] "storage-provisioner" [c45ac226-3063-4d53-8a3a-dccca6e8cade] Running
	I0116 03:49:52.036064  507510 retry.go:31] will retry after 501.701933ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0116 03:49:52.543834  507510 system_pods.go:86] 4 kube-system pods found
	I0116 03:49:52.543874  507510 system_pods.go:89] "coredns-5644d7b6d9-h85tj" [6ad3270c-e6c5-4ced-800f-f6e7960097ac] Running
	I0116 03:49:52.543883  507510 system_pods.go:89] "kube-proxy-rc8xt" [433f07f2-79e8-48f6-945a-af3dc0060920] Running
	I0116 03:49:52.543894  507510 system_pods.go:89] "metrics-server-74d5856cc6-stvzf" [92ed6941-1071-4757-9279-144187442f64] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:49:52.543904  507510 system_pods.go:89] "storage-provisioner" [c45ac226-3063-4d53-8a3a-dccca6e8cade] Running
	I0116 03:49:52.543929  507510 retry.go:31] will retry after 898.357774ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0116 03:49:53.447323  507510 system_pods.go:86] 4 kube-system pods found
	I0116 03:49:53.447356  507510 system_pods.go:89] "coredns-5644d7b6d9-h85tj" [6ad3270c-e6c5-4ced-800f-f6e7960097ac] Running
	I0116 03:49:53.447364  507510 system_pods.go:89] "kube-proxy-rc8xt" [433f07f2-79e8-48f6-945a-af3dc0060920] Running
	I0116 03:49:53.447373  507510 system_pods.go:89] "metrics-server-74d5856cc6-stvzf" [92ed6941-1071-4757-9279-144187442f64] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:49:53.447382  507510 system_pods.go:89] "storage-provisioner" [c45ac226-3063-4d53-8a3a-dccca6e8cade] Running
	I0116 03:49:53.447405  507510 retry.go:31] will retry after 928.816907ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0116 03:49:54.382017  507510 system_pods.go:86] 4 kube-system pods found
	I0116 03:49:54.382046  507510 system_pods.go:89] "coredns-5644d7b6d9-h85tj" [6ad3270c-e6c5-4ced-800f-f6e7960097ac] Running
	I0116 03:49:54.382052  507510 system_pods.go:89] "kube-proxy-rc8xt" [433f07f2-79e8-48f6-945a-af3dc0060920] Running
	I0116 03:49:54.382058  507510 system_pods.go:89] "metrics-server-74d5856cc6-stvzf" [92ed6941-1071-4757-9279-144187442f64] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:49:54.382065  507510 system_pods.go:89] "storage-provisioner" [c45ac226-3063-4d53-8a3a-dccca6e8cade] Running
	I0116 03:49:54.382085  507510 retry.go:31] will retry after 935.220919ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0116 03:49:53.183129  507257 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 03:49:53.296441  507257 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0116 03:49:55.162183  507257 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.262649875s)
	I0116 03:49:55.162237  507257 start.go:929] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I0116 03:49:55.516930  507257 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.544937669s)
	I0116 03:49:55.516988  507257 main.go:141] libmachine: Making call to close driver server
	I0116 03:49:55.517002  507257 main.go:141] libmachine: (embed-certs-615980) Calling .Close
	I0116 03:49:55.517046  507257 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.487276988s)
	I0116 03:49:55.517101  507257 main.go:141] libmachine: Making call to close driver server
	I0116 03:49:55.517108  507257 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.333941337s)
	I0116 03:49:55.517114  507257 main.go:141] libmachine: (embed-certs-615980) Calling .Close
	I0116 03:49:55.517135  507257 node_ready.go:35] waiting up to 6m0s for node "embed-certs-615980" to be "Ready" ...
	I0116 03:49:55.517496  507257 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:49:55.517496  507257 main.go:141] libmachine: (embed-certs-615980) DBG | Closing plugin on server side
	I0116 03:49:55.517512  507257 main.go:141] libmachine: (embed-certs-615980) DBG | Closing plugin on server side
	I0116 03:49:55.517520  507257 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:49:55.517535  507257 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:49:55.517546  507257 main.go:141] libmachine: Making call to close driver server
	I0116 03:49:55.517548  507257 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:49:55.517559  507257 main.go:141] libmachine: (embed-certs-615980) Calling .Close
	I0116 03:49:55.517566  507257 main.go:141] libmachine: Making call to close driver server
	I0116 03:49:55.517577  507257 main.go:141] libmachine: (embed-certs-615980) Calling .Close
	I0116 03:49:55.517902  507257 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:49:55.517916  507257 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:49:55.517920  507257 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:49:55.517926  507257 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:49:55.537242  507257 node_ready.go:49] node "embed-certs-615980" has status "Ready":"True"
	I0116 03:49:55.537273  507257 node_ready.go:38] duration metric: took 20.119969ms waiting for node "embed-certs-615980" to be "Ready" ...
	I0116 03:49:55.537282  507257 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 03:49:55.567823  507257 main.go:141] libmachine: Making call to close driver server
	I0116 03:49:55.567859  507257 main.go:141] libmachine: (embed-certs-615980) Calling .Close
	I0116 03:49:55.568264  507257 main.go:141] libmachine: (embed-certs-615980) DBG | Closing plugin on server side
	I0116 03:49:55.568301  507257 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:49:55.568324  507257 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:49:55.571667  507257 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-hxsvz" in "kube-system" namespace to be "Ready" ...
	I0116 03:49:55.962821  507257 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.666330022s)
	I0116 03:49:55.962896  507257 main.go:141] libmachine: Making call to close driver server
	I0116 03:49:55.962915  507257 main.go:141] libmachine: (embed-certs-615980) Calling .Close
	I0116 03:49:55.963282  507257 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:49:55.963304  507257 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:49:55.963317  507257 main.go:141] libmachine: Making call to close driver server
	I0116 03:49:55.963328  507257 main.go:141] libmachine: (embed-certs-615980) Calling .Close
	I0116 03:49:55.964155  507257 main.go:141] libmachine: (embed-certs-615980) DBG | Closing plugin on server side
	I0116 03:49:55.964178  507257 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:49:55.964190  507257 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:49:55.964209  507257 addons.go:470] Verifying addon metrics-server=true in "embed-certs-615980"
	I0116 03:49:55.967489  507257 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0116 03:49:55.969099  507257 addons.go:505] enable addons completed in 3.329750862s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0116 03:49:57.085999  507257 pod_ready.go:92] pod "coredns-5dd5756b68-hxsvz" in "kube-system" namespace has status "Ready":"True"
	I0116 03:49:57.086034  507257 pod_ready.go:81] duration metric: took 1.514340062s waiting for pod "coredns-5dd5756b68-hxsvz" in "kube-system" namespace to be "Ready" ...
	I0116 03:49:57.086048  507257 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-twbhh" in "kube-system" namespace to be "Ready" ...
	I0116 03:49:57.110886  507257 pod_ready.go:92] pod "coredns-5dd5756b68-twbhh" in "kube-system" namespace has status "Ready":"True"
	I0116 03:49:57.110920  507257 pod_ready.go:81] duration metric: took 24.862165ms waiting for pod "coredns-5dd5756b68-twbhh" in "kube-system" namespace to be "Ready" ...
	I0116 03:49:57.110934  507257 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-615980" in "kube-system" namespace to be "Ready" ...
	I0116 03:49:57.122556  507257 pod_ready.go:92] pod "etcd-embed-certs-615980" in "kube-system" namespace has status "Ready":"True"
	I0116 03:49:57.122588  507257 pod_ready.go:81] duration metric: took 11.643561ms waiting for pod "etcd-embed-certs-615980" in "kube-system" namespace to be "Ready" ...
	I0116 03:49:57.122601  507257 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-615980" in "kube-system" namespace to be "Ready" ...
	I0116 03:49:57.134402  507257 pod_ready.go:92] pod "kube-apiserver-embed-certs-615980" in "kube-system" namespace has status "Ready":"True"
	I0116 03:49:57.134432  507257 pod_ready.go:81] duration metric: took 11.823016ms waiting for pod "kube-apiserver-embed-certs-615980" in "kube-system" namespace to be "Ready" ...
	I0116 03:49:57.134442  507257 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-615980" in "kube-system" namespace to be "Ready" ...
	I0116 03:49:57.152947  507257 pod_ready.go:92] pod "kube-controller-manager-embed-certs-615980" in "kube-system" namespace has status "Ready":"True"
	I0116 03:49:57.152984  507257 pod_ready.go:81] duration metric: took 18.533642ms waiting for pod "kube-controller-manager-embed-certs-615980" in "kube-system" namespace to be "Ready" ...
	I0116 03:49:57.153000  507257 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8rkb5" in "kube-system" namespace to be "Ready" ...
	I0116 03:49:57.921983  507257 pod_ready.go:92] pod "kube-proxy-8rkb5" in "kube-system" namespace has status "Ready":"True"
	I0116 03:49:57.922016  507257 pod_ready.go:81] duration metric: took 769.007434ms waiting for pod "kube-proxy-8rkb5" in "kube-system" namespace to be "Ready" ...
	I0116 03:49:57.922028  507257 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-615980" in "kube-system" namespace to be "Ready" ...
	I0116 03:49:58.322237  507257 pod_ready.go:92] pod "kube-scheduler-embed-certs-615980" in "kube-system" namespace has status "Ready":"True"
	I0116 03:49:58.322267  507257 pod_ready.go:81] duration metric: took 400.23243ms waiting for pod "kube-scheduler-embed-certs-615980" in "kube-system" namespace to be "Ready" ...
	I0116 03:49:58.322280  507257 pod_ready.go:38] duration metric: took 2.78498776s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 03:49:58.322295  507257 api_server.go:52] waiting for apiserver process to appear ...
	I0116 03:49:58.322357  507257 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:49:58.338527  507257 api_server.go:72] duration metric: took 5.159388866s to wait for apiserver process to appear ...
	I0116 03:49:58.338553  507257 api_server.go:88] waiting for apiserver healthz status ...
	I0116 03:49:58.338575  507257 api_server.go:253] Checking apiserver healthz at https://192.168.72.159:8443/healthz ...
	I0116 03:49:58.345758  507257 api_server.go:279] https://192.168.72.159:8443/healthz returned 200:
	ok
	I0116 03:49:58.347531  507257 api_server.go:141] control plane version: v1.28.4
	I0116 03:49:58.347559  507257 api_server.go:131] duration metric: took 8.999388ms to wait for apiserver health ...
	I0116 03:49:58.347573  507257 system_pods.go:43] waiting for kube-system pods to appear ...
	I0116 03:49:58.527633  507257 system_pods.go:59] 9 kube-system pods found
	I0116 03:49:58.527676  507257 system_pods.go:61] "coredns-5dd5756b68-hxsvz" [de7da02c-649b-4d29-8a89-5642105b6049] Running
	I0116 03:49:58.527685  507257 system_pods.go:61] "coredns-5dd5756b68-twbhh" [9be49c16-f213-47da-83f4-90fc392eb49e] Running
	I0116 03:49:58.527692  507257 system_pods.go:61] "etcd-embed-certs-615980" [2098148f-0cac-48ce-a607-381b13334438] Running
	I0116 03:49:58.527704  507257 system_pods.go:61] "kube-apiserver-embed-certs-615980" [3d49b47b-da34-4f4d-a8d3-758c0d28c034] Running
	I0116 03:49:58.527711  507257 system_pods.go:61] "kube-controller-manager-embed-certs-615980" [c4f7946d-907d-42ad-8e84-8fa337111688] Running
	I0116 03:49:58.527718  507257 system_pods.go:61] "kube-proxy-8rkb5" [322fae38-3b29-4135-ba3f-c0ff8bda1e4a] Running
	I0116 03:49:58.527725  507257 system_pods.go:61] "kube-scheduler-embed-certs-615980" [882f322f-8686-40a4-a613-e9855ccfb56e] Running
	I0116 03:49:58.527736  507257 system_pods.go:61] "metrics-server-57f55c9bc5-fc7tx" [14a38c13-7a9e-4548-9654-c568ede29e0f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:49:58.527748  507257 system_pods.go:61] "storage-provisioner" [1ce752ad-ce91-462e-ab2b-2af64064eb40] Running
	I0116 03:49:58.527757  507257 system_pods.go:74] duration metric: took 180.177482ms to wait for pod list to return data ...
	I0116 03:49:58.527771  507257 default_sa.go:34] waiting for default service account to be created ...
	I0116 03:49:58.721717  507257 default_sa.go:45] found service account: "default"
	I0116 03:49:58.721749  507257 default_sa.go:55] duration metric: took 193.967755ms for default service account to be created ...
	I0116 03:49:58.721758  507257 system_pods.go:116] waiting for k8s-apps to be running ...
	I0116 03:49:58.925915  507257 system_pods.go:86] 9 kube-system pods found
	I0116 03:49:58.925957  507257 system_pods.go:89] "coredns-5dd5756b68-hxsvz" [de7da02c-649b-4d29-8a89-5642105b6049] Running
	I0116 03:49:58.925964  507257 system_pods.go:89] "coredns-5dd5756b68-twbhh" [9be49c16-f213-47da-83f4-90fc392eb49e] Running
	I0116 03:49:58.925970  507257 system_pods.go:89] "etcd-embed-certs-615980" [2098148f-0cac-48ce-a607-381b13334438] Running
	I0116 03:49:58.925977  507257 system_pods.go:89] "kube-apiserver-embed-certs-615980" [3d49b47b-da34-4f4d-a8d3-758c0d28c034] Running
	I0116 03:49:58.925987  507257 system_pods.go:89] "kube-controller-manager-embed-certs-615980" [c4f7946d-907d-42ad-8e84-8fa337111688] Running
	I0116 03:49:58.925994  507257 system_pods.go:89] "kube-proxy-8rkb5" [322fae38-3b29-4135-ba3f-c0ff8bda1e4a] Running
	I0116 03:49:58.926040  507257 system_pods.go:89] "kube-scheduler-embed-certs-615980" [882f322f-8686-40a4-a613-e9855ccfb56e] Running
	I0116 03:49:58.926063  507257 system_pods.go:89] "metrics-server-57f55c9bc5-fc7tx" [14a38c13-7a9e-4548-9654-c568ede29e0f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:49:58.926070  507257 system_pods.go:89] "storage-provisioner" [1ce752ad-ce91-462e-ab2b-2af64064eb40] Running
	I0116 03:49:58.926087  507257 system_pods.go:126] duration metric: took 204.321811ms to wait for k8s-apps to be running ...
	I0116 03:49:58.926099  507257 system_svc.go:44] waiting for kubelet service to be running ....
	I0116 03:49:58.926159  507257 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 03:49:58.940982  507257 system_svc.go:56] duration metric: took 14.86844ms WaitForService to wait for kubelet.
	I0116 03:49:58.941019  507257 kubeadm.go:581] duration metric: took 5.761889406s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0116 03:49:58.941051  507257 node_conditions.go:102] verifying NodePressure condition ...
	I0116 03:49:59.121649  507257 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 03:49:59.121681  507257 node_conditions.go:123] node cpu capacity is 2
	I0116 03:49:59.121694  507257 node_conditions.go:105] duration metric: took 180.636851ms to run NodePressure ...
	I0116 03:49:59.121707  507257 start.go:228] waiting for startup goroutines ...
	I0116 03:49:59.121717  507257 start.go:233] waiting for cluster config update ...
	I0116 03:49:59.121730  507257 start.go:242] writing updated cluster config ...
	I0116 03:49:59.122058  507257 ssh_runner.go:195] Run: rm -f paused
	I0116 03:49:59.177472  507257 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I0116 03:49:59.179801  507257 out.go:177] * Done! kubectl is now configured to use "embed-certs-615980" cluster and "default" namespace by default
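
Note (illustrative, not part of the captured output): the 507257 start sequence above ends with minikube polling the apiserver's /healthz endpoint until it returns 200 "ok" (api_server.go). The following minimal Go sketch shows that kind of probe under stated assumptions; the endpoint URL, poll interval, and timeout are placeholders taken from or invented around the log, not values from the minikube source.

// healthz_probe.go - illustrative sketch only; not minikube's implementation.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls an apiserver /healthz endpoint until it answers 200
// or the timeout elapses. TLS verification is skipped here because the probe
// targets a cluster-local endpoint with a self-signed certificate (assumption).
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("%s returned 200: %s\n", url, body)
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond) // fixed poll interval; minikube's actual backoff may differ
	}
	return fmt.Errorf("timed out waiting for %s", url)
}

func main() {
	// Example usage with the address seen in the log; adjust for your cluster.
	if err := waitForHealthz("https://192.168.61.167:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
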
	I0116 03:49:55.324439  507510 system_pods.go:86] 4 kube-system pods found
	I0116 03:49:55.324471  507510 system_pods.go:89] "coredns-5644d7b6d9-h85tj" [6ad3270c-e6c5-4ced-800f-f6e7960097ac] Running
	I0116 03:49:55.324477  507510 system_pods.go:89] "kube-proxy-rc8xt" [433f07f2-79e8-48f6-945a-af3dc0060920] Running
	I0116 03:49:55.324484  507510 system_pods.go:89] "metrics-server-74d5856cc6-stvzf" [92ed6941-1071-4757-9279-144187442f64] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:49:55.324489  507510 system_pods.go:89] "storage-provisioner" [c45ac226-3063-4d53-8a3a-dccca6e8cade] Running
	I0116 03:49:55.324509  507510 retry.go:31] will retry after 1.168298317s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0116 03:49:56.500050  507510 system_pods.go:86] 4 kube-system pods found
	I0116 03:49:56.500090  507510 system_pods.go:89] "coredns-5644d7b6d9-h85tj" [6ad3270c-e6c5-4ced-800f-f6e7960097ac] Running
	I0116 03:49:56.500098  507510 system_pods.go:89] "kube-proxy-rc8xt" [433f07f2-79e8-48f6-945a-af3dc0060920] Running
	I0116 03:49:56.500111  507510 system_pods.go:89] "metrics-server-74d5856cc6-stvzf" [92ed6941-1071-4757-9279-144187442f64] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:49:56.500118  507510 system_pods.go:89] "storage-provisioner" [c45ac226-3063-4d53-8a3a-dccca6e8cade] Running
	I0116 03:49:56.500142  507510 retry.go:31] will retry after 1.453657977s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0116 03:49:57.961220  507510 system_pods.go:86] 4 kube-system pods found
	I0116 03:49:57.961248  507510 system_pods.go:89] "coredns-5644d7b6d9-h85tj" [6ad3270c-e6c5-4ced-800f-f6e7960097ac] Running
	I0116 03:49:57.961254  507510 system_pods.go:89] "kube-proxy-rc8xt" [433f07f2-79e8-48f6-945a-af3dc0060920] Running
	I0116 03:49:57.961261  507510 system_pods.go:89] "metrics-server-74d5856cc6-stvzf" [92ed6941-1071-4757-9279-144187442f64] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:49:57.961266  507510 system_pods.go:89] "storage-provisioner" [c45ac226-3063-4d53-8a3a-dccca6e8cade] Running
	I0116 03:49:57.961286  507510 retry.go:31] will retry after 1.763969687s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0116 03:49:59.731086  507510 system_pods.go:86] 4 kube-system pods found
	I0116 03:49:59.731112  507510 system_pods.go:89] "coredns-5644d7b6d9-h85tj" [6ad3270c-e6c5-4ced-800f-f6e7960097ac] Running
	I0116 03:49:59.731117  507510 system_pods.go:89] "kube-proxy-rc8xt" [433f07f2-79e8-48f6-945a-af3dc0060920] Running
	I0116 03:49:59.731123  507510 system_pods.go:89] "metrics-server-74d5856cc6-stvzf" [92ed6941-1071-4757-9279-144187442f64] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:49:59.731129  507510 system_pods.go:89] "storage-provisioner" [c45ac226-3063-4d53-8a3a-dccca6e8cade] Running
	I0116 03:49:59.731147  507510 retry.go:31] will retry after 3.185395035s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0116 03:50:02.922897  507510 system_pods.go:86] 4 kube-system pods found
	I0116 03:50:02.922934  507510 system_pods.go:89] "coredns-5644d7b6d9-h85tj" [6ad3270c-e6c5-4ced-800f-f6e7960097ac] Running
	I0116 03:50:02.922944  507510 system_pods.go:89] "kube-proxy-rc8xt" [433f07f2-79e8-48f6-945a-af3dc0060920] Running
	I0116 03:50:02.922954  507510 system_pods.go:89] "metrics-server-74d5856cc6-stvzf" [92ed6941-1071-4757-9279-144187442f64] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:50:02.922961  507510 system_pods.go:89] "storage-provisioner" [c45ac226-3063-4d53-8a3a-dccca6e8cade] Running
	I0116 03:50:02.922985  507510 retry.go:31] will retry after 4.049428323s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0116 03:50:06.978002  507510 system_pods.go:86] 4 kube-system pods found
	I0116 03:50:06.978029  507510 system_pods.go:89] "coredns-5644d7b6d9-h85tj" [6ad3270c-e6c5-4ced-800f-f6e7960097ac] Running
	I0116 03:50:06.978034  507510 system_pods.go:89] "kube-proxy-rc8xt" [433f07f2-79e8-48f6-945a-af3dc0060920] Running
	I0116 03:50:06.978040  507510 system_pods.go:89] "metrics-server-74d5856cc6-stvzf" [92ed6941-1071-4757-9279-144187442f64] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:50:06.978045  507510 system_pods.go:89] "storage-provisioner" [c45ac226-3063-4d53-8a3a-dccca6e8cade] Running
	I0116 03:50:06.978063  507510 retry.go:31] will retry after 4.626513574s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0116 03:50:11.610464  507510 system_pods.go:86] 4 kube-system pods found
	I0116 03:50:11.610499  507510 system_pods.go:89] "coredns-5644d7b6d9-h85tj" [6ad3270c-e6c5-4ced-800f-f6e7960097ac] Running
	I0116 03:50:11.610507  507510 system_pods.go:89] "kube-proxy-rc8xt" [433f07f2-79e8-48f6-945a-af3dc0060920] Running
	I0116 03:50:11.610517  507510 system_pods.go:89] "metrics-server-74d5856cc6-stvzf" [92ed6941-1071-4757-9279-144187442f64] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:50:11.610524  507510 system_pods.go:89] "storage-provisioner" [c45ac226-3063-4d53-8a3a-dccca6e8cade] Running
	I0116 03:50:11.610550  507510 retry.go:31] will retry after 4.683195792s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0116 03:50:16.298843  507510 system_pods.go:86] 4 kube-system pods found
	I0116 03:50:16.298873  507510 system_pods.go:89] "coredns-5644d7b6d9-h85tj" [6ad3270c-e6c5-4ced-800f-f6e7960097ac] Running
	I0116 03:50:16.298879  507510 system_pods.go:89] "kube-proxy-rc8xt" [433f07f2-79e8-48f6-945a-af3dc0060920] Running
	I0116 03:50:16.298888  507510 system_pods.go:89] "metrics-server-74d5856cc6-stvzf" [92ed6941-1071-4757-9279-144187442f64] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:50:16.298892  507510 system_pods.go:89] "storage-provisioner" [c45ac226-3063-4d53-8a3a-dccca6e8cade] Running
	I0116 03:50:16.298913  507510 retry.go:31] will retry after 8.214175219s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0116 03:50:24.520982  507510 system_pods.go:86] 5 kube-system pods found
	I0116 03:50:24.521020  507510 system_pods.go:89] "coredns-5644d7b6d9-h85tj" [6ad3270c-e6c5-4ced-800f-f6e7960097ac] Running
	I0116 03:50:24.521029  507510 system_pods.go:89] "etcd-old-k8s-version-696770" [255a0b2a-df41-4621-8918-36e1b0c25c24] Pending
	I0116 03:50:24.521033  507510 system_pods.go:89] "kube-proxy-rc8xt" [433f07f2-79e8-48f6-945a-af3dc0060920] Running
	I0116 03:50:24.521040  507510 system_pods.go:89] "metrics-server-74d5856cc6-stvzf" [92ed6941-1071-4757-9279-144187442f64] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:50:24.521045  507510 system_pods.go:89] "storage-provisioner" [c45ac226-3063-4d53-8a3a-dccca6e8cade] Running
	I0116 03:50:24.521067  507510 retry.go:31] will retry after 9.626598035s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0116 03:50:34.155753  507510 system_pods.go:86] 5 kube-system pods found
	I0116 03:50:34.155790  507510 system_pods.go:89] "coredns-5644d7b6d9-h85tj" [6ad3270c-e6c5-4ced-800f-f6e7960097ac] Running
	I0116 03:50:34.155798  507510 system_pods.go:89] "etcd-old-k8s-version-696770" [255a0b2a-df41-4621-8918-36e1b0c25c24] Running
	I0116 03:50:34.155805  507510 system_pods.go:89] "kube-proxy-rc8xt" [433f07f2-79e8-48f6-945a-af3dc0060920] Running
	I0116 03:50:34.155815  507510 system_pods.go:89] "metrics-server-74d5856cc6-stvzf" [92ed6941-1071-4757-9279-144187442f64] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:50:34.155822  507510 system_pods.go:89] "storage-provisioner" [c45ac226-3063-4d53-8a3a-dccca6e8cade] Running
	I0116 03:50:34.155849  507510 retry.go:31] will retry after 13.760629262s: missing components: kube-apiserver, kube-controller-manager, kube-scheduler
	I0116 03:50:47.923537  507510 system_pods.go:86] 7 kube-system pods found
	I0116 03:50:47.923571  507510 system_pods.go:89] "coredns-5644d7b6d9-h85tj" [6ad3270c-e6c5-4ced-800f-f6e7960097ac] Running
	I0116 03:50:47.923577  507510 system_pods.go:89] "etcd-old-k8s-version-696770" [255a0b2a-df41-4621-8918-36e1b0c25c24] Running
	I0116 03:50:47.923582  507510 system_pods.go:89] "kube-apiserver-old-k8s-version-696770" [c682b257-d00b-4b4c-8089-cda1b9da538c] Running
	I0116 03:50:47.923585  507510 system_pods.go:89] "kube-proxy-rc8xt" [433f07f2-79e8-48f6-945a-af3dc0060920] Running
	I0116 03:50:47.923589  507510 system_pods.go:89] "kube-scheduler-old-k8s-version-696770" [af271425-aec7-45d9-97c5-9a033f13a41e] Running
	I0116 03:50:47.923599  507510 system_pods.go:89] "metrics-server-74d5856cc6-stvzf" [92ed6941-1071-4757-9279-144187442f64] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:50:47.923603  507510 system_pods.go:89] "storage-provisioner" [c45ac226-3063-4d53-8a3a-dccca6e8cade] Running
	I0116 03:50:47.923621  507510 retry.go:31] will retry after 15.810378345s: missing components: kube-controller-manager
	I0116 03:51:03.742786  507510 system_pods.go:86] 8 kube-system pods found
	I0116 03:51:03.742819  507510 system_pods.go:89] "coredns-5644d7b6d9-h85tj" [6ad3270c-e6c5-4ced-800f-f6e7960097ac] Running
	I0116 03:51:03.742825  507510 system_pods.go:89] "etcd-old-k8s-version-696770" [255a0b2a-df41-4621-8918-36e1b0c25c24] Running
	I0116 03:51:03.742830  507510 system_pods.go:89] "kube-apiserver-old-k8s-version-696770" [c682b257-d00b-4b4c-8089-cda1b9da538c] Running
	I0116 03:51:03.742835  507510 system_pods.go:89] "kube-controller-manager-old-k8s-version-696770" [87b5ef82-182e-458d-b521-05a36d3d031b] Running
	I0116 03:51:03.742838  507510 system_pods.go:89] "kube-proxy-rc8xt" [433f07f2-79e8-48f6-945a-af3dc0060920] Running
	I0116 03:51:03.742842  507510 system_pods.go:89] "kube-scheduler-old-k8s-version-696770" [af271425-aec7-45d9-97c5-9a033f13a41e] Running
	I0116 03:51:03.742849  507510 system_pods.go:89] "metrics-server-74d5856cc6-stvzf" [92ed6941-1071-4757-9279-144187442f64] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:51:03.742854  507510 system_pods.go:89] "storage-provisioner" [c45ac226-3063-4d53-8a3a-dccca6e8cade] Running
	I0116 03:51:03.742865  507510 system_pods.go:126] duration metric: took 1m13.061883389s to wait for k8s-apps to be running ...
	I0116 03:51:03.742872  507510 system_svc.go:44] waiting for kubelet service to be running ....
	I0116 03:51:03.742921  507510 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 03:51:03.761399  507510 system_svc.go:56] duration metric: took 18.514586ms WaitForService to wait for kubelet.
	I0116 03:51:03.761433  507510 kubeadm.go:581] duration metric: took 1m18.22783177s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0116 03:51:03.761461  507510 node_conditions.go:102] verifying NodePressure condition ...
	I0116 03:51:03.765716  507510 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 03:51:03.765760  507510 node_conditions.go:123] node cpu capacity is 2
	I0116 03:51:03.765777  507510 node_conditions.go:105] duration metric: took 4.309124ms to run NodePressure ...
	I0116 03:51:03.765794  507510 start.go:228] waiting for startup goroutines ...
	I0116 03:51:03.765803  507510 start.go:233] waiting for cluster config update ...
	I0116 03:51:03.765817  507510 start.go:242] writing updated cluster config ...
	I0116 03:51:03.766160  507510 ssh_runner.go:195] Run: rm -f paused
	I0116 03:51:03.822502  507510 start.go:600] kubectl: 1.29.0, cluster: 1.16.0 (minor skew: 13)
	I0116 03:51:03.824687  507510 out.go:177] 
	W0116 03:51:03.826162  507510 out.go:239] ! /usr/local/bin/kubectl is version 1.29.0, which may have incompatibilities with Kubernetes 1.16.0.
	I0116 03:51:03.827659  507510 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I0116 03:51:03.829229  507510 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-696770" cluster and "default" namespace by default
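
Note (illustrative, not part of the captured output): before the CRI-O journal below, the 507510 log shows minikube repeatedly listing kube-system pods and retrying with growing delays until the etcd, kube-apiserver, kube-controller-manager, and kube-scheduler pods are Running (system_pods.go / retry.go). The hedged Go sketch below reproduces that pattern with client-go; the component label set, intervals, timeout, and kubeconfig path are assumptions for illustration, not the values minikube uses.

// syspods_wait.go - illustrative sketch of a "wait for k8s-apps" loop; not minikube source.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForSystemPods retries until every expected control-plane component has a
// Running pod in kube-system, backing off between attempts much like the
// retry.go lines in the log above.
func waitForSystemPods(clientset *kubernetes.Clientset) error {
	expected := []string{"etcd", "kube-apiserver", "kube-controller-manager", "kube-scheduler"} // assumed set
	return wait.PollImmediate(2*time.Second, 6*time.Minute, func() (bool, error) {
		pods, err := clientset.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			return false, nil // transient API errors: keep retrying until timeout
		}
		running := map[string]bool{}
		for _, p := range pods.Items {
			if p.Status.Phase == corev1.PodRunning {
				running[p.Labels["component"]] = true
			}
		}
		var missing []string
		for _, c := range expected {
			if !running[c] {
				missing = append(missing, c)
			}
		}
		if len(missing) > 0 {
			fmt.Printf("will retry: missing components: %v\n", missing)
			return false, nil
		}
		return true, nil
	})
}

func main() {
	// Example usage against the default kubeconfig (path is an assumption).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitForSystemPods(clientset); err != nil {
		fmt.Println("k8s-apps not ready:", err)
	}
}
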
	
	
	==> CRI-O <==
	-- Journal begins at Tue 2024-01-16 03:44:23 UTC, ends at Tue 2024-01-16 03:59:01 UTC. --
	Jan 16 03:59:01 embed-certs-615980 crio[707]: time="2024-01-16 03:59:01.071235591Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:633707032a417c16dd3e1b01a25542bfafa7810c357c32cd6ecbabd906f016f4,Verbose:false,}" file="go-grpc-middleware/chain.go:25" id=ba74a44d-949d-4650-b3d9-39ebcfe9496f name=/runtime.v1.RuntimeService/ContainerStatus
	Jan 16 03:59:01 embed-certs-615980 crio[707]: time="2024-01-16 03:59:01.071386963Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:633707032a417c16dd3e1b01a25542bfafa7810c357c32cd6ecbabd906f016f4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},State:CONTAINER_RUNNING,CreatedAt:1705376972419234660,StartedAt:1705376973629760733,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-apiserver:v1.28.4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-615980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0fd9681c69dd674b431c80253c522fa,},Annotations:map[string]string{io.kubernetes.container.hash: 8d1f459a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termi
nation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/e0fd9681c69dd674b431c80253c522fa/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/e0fd9681c69dd674b431c80253c522fa/containers/kube-apiserver/ca13ca9f,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/etc/ssl/certs,HostPath:/etc/ssl/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/usr/share/ca-certificates,HostPath:/usr/share/ca-certificates,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/var/lib/minikube/certs,HostPath:/var/lib/minikube/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},},LogPath:/var/log/pods/kube-system_kube-apiserver-embed-certs-615980_e0fd9681c
69dd674b431c80253c522fa/kube-apiserver/2.log,},Info:map[string]string{},}" file="go-grpc-middleware/chain.go:25" id=ba74a44d-949d-4650-b3d9-39ebcfe9496f name=/runtime.v1.RuntimeService/ContainerStatus
	Jan 16 03:59:01 embed-certs-615980 crio[707]: time="2024-01-16 03:59:01.071948660Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:d1fd9f0e356a8ef8c346cb8ec5851bbf72b404e7c440c94d2bef669a2056a16e,Verbose:false,}" file="go-grpc-middleware/chain.go:25" id=db1c18fe-7971-461e-8d7a-cf3513e380a8 name=/runtime.v1.RuntimeService/ContainerStatus
	Jan 16 03:59:01 embed-certs-615980 crio[707]: time="2024-01-16 03:59:01.072031270Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:d1fd9f0e356a8ef8c346cb8ec5851bbf72b404e7c440c94d2bef669a2056a16e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},State:CONTAINER_RUNNING,CreatedAt:1705376972219209937,StartedAt:1705376973400994535,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/etcd:3.5.9-0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-615980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63ef129f8bda6acc00eb7303140250b9,},Annotations:map[string]string{io.kubernetes.container.hash: 34e96305,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMess
agePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/63ef129f8bda6acc00eb7303140250b9/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/63ef129f8bda6acc00eb7303140250b9/containers/etcd/c0673bfa,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/var/lib/minikube/etcd,HostPath:/var/lib/minikube/etcd,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/var/lib/minikube/certs/etcd,HostPath:/var/lib/minikube/certs/etcd,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},},LogPath:/var/log/pods/kube-system_etcd-embed-certs-615980_63ef129f8bda6acc00eb7303140250b9/etcd/2.log,},Info:map[string]string{},}" file="go-grpc-middleware/chain.go:25" id=db1c18fe-7971-461e-8d7a-cf3513e380a8 name=/runtime.v1.RuntimeService/ContainerStatus
	Jan 16 03:59:01 embed-certs-615980 crio[707]: time="2024-01-16 03:59:01.095937364Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=9df43bee-ab30-4dff-900b-ddb81d0436f9 name=/runtime.v1.RuntimeService/Version
	Jan 16 03:59:01 embed-certs-615980 crio[707]: time="2024-01-16 03:59:01.096056947Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=9df43bee-ab30-4dff-900b-ddb81d0436f9 name=/runtime.v1.RuntimeService/Version
	Jan 16 03:59:01 embed-certs-615980 crio[707]: time="2024-01-16 03:59:01.098001410Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=ce8a48c1-6430-4534-9000-92cc4bc2d978 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 03:59:01 embed-certs-615980 crio[707]: time="2024-01-16 03:59:01.098431785Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705377541098416836,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=ce8a48c1-6430-4534-9000-92cc4bc2d978 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 03:59:01 embed-certs-615980 crio[707]: time="2024-01-16 03:59:01.098967536Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=0a310626-b0cb-4fd2-9b1a-8b71fae903cc name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 03:59:01 embed-certs-615980 crio[707]: time="2024-01-16 03:59:01.099023242Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=0a310626-b0cb-4fd2-9b1a-8b71fae903cc name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 03:59:01 embed-certs-615980 crio[707]: time="2024-01-16 03:59:01.099265718Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4cdf5e29ffaf0a16aac76782f118de80dd83ad4f7f8c86a00d36f2ca5059e03a,PodSandboxId:27e41ebc0cb158e4c4164f57a67968a4552ce3de1a9cc31a92e74ca580f7667d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705376996954006586,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ce752ad-ce91-462e-ab2b-2af64064eb40,},Annotations:map[string]string{io.kubernetes.container.hash: b9c2ee9a,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e486c660ebfdacd38f1ded1e9f9da19c21269bfec4834fd31aaaf2b6fe8677ca,PodSandboxId:c494e5883ed7f5b4cb9a5a65eea751f339dec149901ac7c06bc62272a1ae106a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1705376996364176685,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8rkb5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 322fae38-3b29-4135-ba3f-c0ff8bda1e4a,},Annotations:map[string]string{io.kubernetes.container.hash: 66c29954,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c27b0553e4da0cf3906a0e8a9b50bf1f87dd0217e88a38218aebd11ea0de03fa,PodSandboxId:56da411a5eabd0d5daab31408ff9eee20050a8d5bf8f8b838bf543c9672ae3aa,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1705376995402642223,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-twbhh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9be49c16-f213-47da-83f4-90fc392eb49e,},Annotations:map[string]string{io.kubernetes.container.hash: 1f34606d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-
tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5ec49f0949e9c9671965ab44a04e11962756e944a0ae610596b3e8e8d214341,PodSandboxId:673e669acde408f6a431fa744e47eaf784b377c5b9395afa768ce18832f581c0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1705376972471622013,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-615980,io.kubernetes.pod.namespa
ce: kube-system,io.kubernetes.pod.uid: b185a766c563f6ce9043c8eda28f0d32,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:994d09ee15ce2df74bf9fd5ab55ee26cac0ce20a59cd56abc045ed57a6b95028,PodSandboxId:1ade641d0c13ab07984a9f499bd6af500af15c8a0f383e63a68e05e678e168f2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1705376972283696718,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-615980,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: 9b9f0a8323872d7b759609d60ab95333,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:633707032a417c16dd3e1b01a25542bfafa7810c357c32cd6ecbabd906f016f4,PodSandboxId:d4251834ab94299dacdfaa61339efb08d308b1af1532f243d33472a613672211,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1705376972331780188,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-615980,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: e0fd9681c69dd674b431c80253c522fa,},Annotations:map[string]string{io.kubernetes.container.hash: 8d1f459a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1fd9f0e356a8ef8c346cb8ec5851bbf72b404e7c440c94d2bef669a2056a16e,PodSandboxId:81a408a8901fc14eeaf95fd8236b20fe38b27dc4ba6d263626eee3a6d26a0149,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1705376971984200087,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-615980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63ef129f8bda6acc00eb7303140250b
9,},Annotations:map[string]string{io.kubernetes.container.hash: 34e96305,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=0a310626-b0cb-4fd2-9b1a-8b71fae903cc name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 03:59:01 embed-certs-615980 crio[707]: time="2024-01-16 03:59:01.146928008Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=eac3444f-3e9a-4681-9c5b-c85e3bd5d024 name=/runtime.v1.RuntimeService/Version
	Jan 16 03:59:01 embed-certs-615980 crio[707]: time="2024-01-16 03:59:01.147045918Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=eac3444f-3e9a-4681-9c5b-c85e3bd5d024 name=/runtime.v1.RuntimeService/Version
	Jan 16 03:59:01 embed-certs-615980 crio[707]: time="2024-01-16 03:59:01.148772218Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=c6a65acd-cf28-427e-ad55-dcb56089f84f name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 03:59:01 embed-certs-615980 crio[707]: time="2024-01-16 03:59:01.149420110Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705377541149399065,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=c6a65acd-cf28-427e-ad55-dcb56089f84f name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 03:59:01 embed-certs-615980 crio[707]: time="2024-01-16 03:59:01.150221429Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=643f6542-175f-4839-a070-506f120cfd44 name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 03:59:01 embed-certs-615980 crio[707]: time="2024-01-16 03:59:01.150301588Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=643f6542-175f-4839-a070-506f120cfd44 name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 03:59:01 embed-certs-615980 crio[707]: time="2024-01-16 03:59:01.150468343Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4cdf5e29ffaf0a16aac76782f118de80dd83ad4f7f8c86a00d36f2ca5059e03a,PodSandboxId:27e41ebc0cb158e4c4164f57a67968a4552ce3de1a9cc31a92e74ca580f7667d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705376996954006586,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ce752ad-ce91-462e-ab2b-2af64064eb40,},Annotations:map[string]string{io.kubernetes.container.hash: b9c2ee9a,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e486c660ebfdacd38f1ded1e9f9da19c21269bfec4834fd31aaaf2b6fe8677ca,PodSandboxId:c494e5883ed7f5b4cb9a5a65eea751f339dec149901ac7c06bc62272a1ae106a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1705376996364176685,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8rkb5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 322fae38-3b29-4135-ba3f-c0ff8bda1e4a,},Annotations:map[string]string{io.kubernetes.container.hash: 66c29954,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c27b0553e4da0cf3906a0e8a9b50bf1f87dd0217e88a38218aebd11ea0de03fa,PodSandboxId:56da411a5eabd0d5daab31408ff9eee20050a8d5bf8f8b838bf543c9672ae3aa,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1705376995402642223,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-twbhh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9be49c16-f213-47da-83f4-90fc392eb49e,},Annotations:map[string]string{io.kubernetes.container.hash: 1f34606d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-
tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5ec49f0949e9c9671965ab44a04e11962756e944a0ae610596b3e8e8d214341,PodSandboxId:673e669acde408f6a431fa744e47eaf784b377c5b9395afa768ce18832f581c0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1705376972471622013,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-615980,io.kubernetes.pod.namespa
ce: kube-system,io.kubernetes.pod.uid: b185a766c563f6ce9043c8eda28f0d32,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:994d09ee15ce2df74bf9fd5ab55ee26cac0ce20a59cd56abc045ed57a6b95028,PodSandboxId:1ade641d0c13ab07984a9f499bd6af500af15c8a0f383e63a68e05e678e168f2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1705376972283696718,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-615980,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: 9b9f0a8323872d7b759609d60ab95333,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:633707032a417c16dd3e1b01a25542bfafa7810c357c32cd6ecbabd906f016f4,PodSandboxId:d4251834ab94299dacdfaa61339efb08d308b1af1532f243d33472a613672211,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1705376972331780188,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-615980,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: e0fd9681c69dd674b431c80253c522fa,},Annotations:map[string]string{io.kubernetes.container.hash: 8d1f459a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1fd9f0e356a8ef8c346cb8ec5851bbf72b404e7c440c94d2bef669a2056a16e,PodSandboxId:81a408a8901fc14eeaf95fd8236b20fe38b27dc4ba6d263626eee3a6d26a0149,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1705376971984200087,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-615980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63ef129f8bda6acc00eb7303140250b
9,},Annotations:map[string]string{io.kubernetes.container.hash: 34e96305,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=643f6542-175f-4839-a070-506f120cfd44 name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 03:59:01 embed-certs-615980 crio[707]: time="2024-01-16 03:59:01.190416118Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=77573605-9d25-41f9-83ba-104b3e2c5569 name=/runtime.v1.RuntimeService/Version
	Jan 16 03:59:01 embed-certs-615980 crio[707]: time="2024-01-16 03:59:01.190476317Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=77573605-9d25-41f9-83ba-104b3e2c5569 name=/runtime.v1.RuntimeService/Version
	Jan 16 03:59:01 embed-certs-615980 crio[707]: time="2024-01-16 03:59:01.192261883Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=e95840c2-6e71-4278-92c9-739d4b7b6ec6 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 03:59:01 embed-certs-615980 crio[707]: time="2024-01-16 03:59:01.192638269Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705377541192625326,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=e95840c2-6e71-4278-92c9-739d4b7b6ec6 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 03:59:01 embed-certs-615980 crio[707]: time="2024-01-16 03:59:01.193767305Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=2413c565-4ba8-4422-8eb0-f9617ae68685 name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 03:59:01 embed-certs-615980 crio[707]: time="2024-01-16 03:59:01.193832013Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=2413c565-4ba8-4422-8eb0-f9617ae68685 name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 03:59:01 embed-certs-615980 crio[707]: time="2024-01-16 03:59:01.194047972Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4cdf5e29ffaf0a16aac76782f118de80dd83ad4f7f8c86a00d36f2ca5059e03a,PodSandboxId:27e41ebc0cb158e4c4164f57a67968a4552ce3de1a9cc31a92e74ca580f7667d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705376996954006586,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ce752ad-ce91-462e-ab2b-2af64064eb40,},Annotations:map[string]string{io.kubernetes.container.hash: b9c2ee9a,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e486c660ebfdacd38f1ded1e9f9da19c21269bfec4834fd31aaaf2b6fe8677ca,PodSandboxId:c494e5883ed7f5b4cb9a5a65eea751f339dec149901ac7c06bc62272a1ae106a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1705376996364176685,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8rkb5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 322fae38-3b29-4135-ba3f-c0ff8bda1e4a,},Annotations:map[string]string{io.kubernetes.container.hash: 66c29954,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c27b0553e4da0cf3906a0e8a9b50bf1f87dd0217e88a38218aebd11ea0de03fa,PodSandboxId:56da411a5eabd0d5daab31408ff9eee20050a8d5bf8f8b838bf543c9672ae3aa,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1705376995402642223,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-twbhh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9be49c16-f213-47da-83f4-90fc392eb49e,},Annotations:map[string]string{io.kubernetes.container.hash: 1f34606d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-
tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5ec49f0949e9c9671965ab44a04e11962756e944a0ae610596b3e8e8d214341,PodSandboxId:673e669acde408f6a431fa744e47eaf784b377c5b9395afa768ce18832f581c0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1705376972471622013,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-615980,io.kubernetes.pod.namespa
ce: kube-system,io.kubernetes.pod.uid: b185a766c563f6ce9043c8eda28f0d32,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:994d09ee15ce2df74bf9fd5ab55ee26cac0ce20a59cd56abc045ed57a6b95028,PodSandboxId:1ade641d0c13ab07984a9f499bd6af500af15c8a0f383e63a68e05e678e168f2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1705376972283696718,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-615980,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: 9b9f0a8323872d7b759609d60ab95333,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:633707032a417c16dd3e1b01a25542bfafa7810c357c32cd6ecbabd906f016f4,PodSandboxId:d4251834ab94299dacdfaa61339efb08d308b1af1532f243d33472a613672211,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1705376972331780188,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-615980,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: e0fd9681c69dd674b431c80253c522fa,},Annotations:map[string]string{io.kubernetes.container.hash: 8d1f459a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1fd9f0e356a8ef8c346cb8ec5851bbf72b404e7c440c94d2bef669a2056a16e,PodSandboxId:81a408a8901fc14eeaf95fd8236b20fe38b27dc4ba6d263626eee3a6d26a0149,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1705376971984200087,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-615980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63ef129f8bda6acc00eb7303140250b
9,},Annotations:map[string]string{io.kubernetes.container.hash: 34e96305,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=2413c565-4ba8-4422-8eb0-f9617ae68685 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	4cdf5e29ffaf0       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   27e41ebc0cb15       storage-provisioner
	e486c660ebfda       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e   9 minutes ago       Running             kube-proxy                0                   c494e5883ed7f       kube-proxy-8rkb5
	c27b0553e4da0       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   9 minutes ago       Running             coredns                   0                   56da411a5eabd       coredns-5dd5756b68-twbhh
	a5ec49f0949e9       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591   9 minutes ago       Running             kube-controller-manager   2                   673e669acde40       kube-controller-manager-embed-certs-615980
	633707032a417       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257   9 minutes ago       Running             kube-apiserver            2                   d4251834ab942       kube-apiserver-embed-certs-615980
	994d09ee15ce2       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1   9 minutes ago       Running             kube-scheduler            2                   1ade641d0c13a       kube-scheduler-embed-certs-615980
	d1fd9f0e356a8       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   9 minutes ago       Running             etcd                      2                   81a408a8901fc       etcd-embed-certs-615980
	
	
	==> coredns [c27b0553e4da0cf3906a0e8a9b50bf1f87dd0217e88a38218aebd11ea0de03fa] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = efc9280483fcc4efbcb768f0bb0eae8d655d9a6698f75ec2e812845ddebe357ede76757b8e27e6a6ace121d67e74d46126ca7de2feaa2fdd891a0e8e676dd4cb
	[INFO] Reloading complete
	[INFO] 127.0.0.1:55277 - 62806 "HINFO IN 4324491712175631855.61519612609968798. udp 55 false 512" NXDOMAIN qr,rd,ra 55 0.010038653s
	
	
	==> describe nodes <==
	Name:               embed-certs-615980
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-615980
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6e8fa5f64d0e7272be43ff25ed3826261f0a2578
	                    minikube.k8s.io/name=embed-certs-615980
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_16T03_49_40_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Jan 2024 03:49:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-615980
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Jan 2024 03:58:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Jan 2024 03:55:06 +0000   Tue, 16 Jan 2024 03:49:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Jan 2024 03:55:06 +0000   Tue, 16 Jan 2024 03:49:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Jan 2024 03:55:06 +0000   Tue, 16 Jan 2024 03:49:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Jan 2024 03:55:06 +0000   Tue, 16 Jan 2024 03:49:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.159
	  Hostname:    embed-certs-615980
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 37086489959547d8b681242750c5c6e3
	  System UUID:                37086489-9595-47d8-b681-242750c5c6e3
	  Boot ID:                    05dfe042-8a20-4cf5-b8c2-95e2790cd742
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-twbhh                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m9s
	  kube-system                 etcd-embed-certs-615980                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m21s
	  kube-system                 kube-apiserver-embed-certs-615980             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m21s
	  kube-system                 kube-controller-manager-embed-certs-615980    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m23s
	  kube-system                 kube-proxy-8rkb5                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m9s
	  kube-system                 kube-scheduler-embed-certs-615980             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m21s
	  kube-system                 metrics-server-57f55c9bc5-fc7tx               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m6s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m6s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m4s                   kube-proxy       
	  Normal  NodeAllocatableEnforced  9m31s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m30s (x8 over 9m31s)  kubelet          Node embed-certs-615980 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m30s (x8 over 9m31s)  kubelet          Node embed-certs-615980 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m30s (x7 over 9m31s)  kubelet          Node embed-certs-615980 status is now: NodeHasSufficientPID
	  Normal  Starting                 9m21s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m21s                  kubelet          Node embed-certs-615980 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m21s                  kubelet          Node embed-certs-615980 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m21s                  kubelet          Node embed-certs-615980 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m21s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           9m9s                   node-controller  Node embed-certs-615980 event: Registered Node embed-certs-615980 in Controller
	
	
	==> dmesg <==
	[Jan16 03:44] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.092142] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.208469] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.631130] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.174184] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.670348] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.305152] systemd-fstab-generator[633]: Ignoring "noauto" for root device
	[  +0.125134] systemd-fstab-generator[644]: Ignoring "noauto" for root device
	[  +0.183589] systemd-fstab-generator[657]: Ignoring "noauto" for root device
	[  +0.128491] systemd-fstab-generator[668]: Ignoring "noauto" for root device
	[  +0.251474] systemd-fstab-generator[692]: Ignoring "noauto" for root device
	[ +17.985200] systemd-fstab-generator[907]: Ignoring "noauto" for root device
	[Jan16 03:45] kauditd_printk_skb: 29 callbacks suppressed
	[Jan16 03:49] systemd-fstab-generator[3517]: Ignoring "noauto" for root device
	[  +9.828605] systemd-fstab-generator[3847]: Ignoring "noauto" for root device
	[ +12.901501] kauditd_printk_skb: 2 callbacks suppressed
	[Jan16 03:50] kauditd_printk_skb: 9 callbacks suppressed
	
	
	==> etcd [d1fd9f0e356a8ef8c346cb8ec5851bbf72b404e7c440c94d2bef669a2056a16e] <==
	{"level":"info","ts":"2024-01-16T03:49:33.534722Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d718283c8ba9c288 switched to configuration voters=(15499182358101869192)"}
	{"level":"info","ts":"2024-01-16T03:49:33.534889Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f0e35e647fe17a2","local-member-id":"d718283c8ba9c288","added-peer-id":"d718283c8ba9c288","added-peer-peer-urls":["https://192.168.72.159:2380"]}
	{"level":"info","ts":"2024-01-16T03:49:33.543224Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-01-16T03:49:33.543524Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"d718283c8ba9c288","initial-advertise-peer-urls":["https://192.168.72.159:2380"],"listen-peer-urls":["https://192.168.72.159:2380"],"advertise-client-urls":["https://192.168.72.159:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.72.159:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-01-16T03:49:33.543559Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-01-16T03:49:33.54369Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.72.159:2380"}
	{"level":"info","ts":"2024-01-16T03:49:33.543701Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.72.159:2380"}
	{"level":"info","ts":"2024-01-16T03:49:34.479252Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d718283c8ba9c288 is starting a new election at term 1"}
	{"level":"info","ts":"2024-01-16T03:49:34.479319Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d718283c8ba9c288 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-01-16T03:49:34.47935Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d718283c8ba9c288 received MsgPreVoteResp from d718283c8ba9c288 at term 1"}
	{"level":"info","ts":"2024-01-16T03:49:34.479364Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d718283c8ba9c288 became candidate at term 2"}
	{"level":"info","ts":"2024-01-16T03:49:34.47937Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d718283c8ba9c288 received MsgVoteResp from d718283c8ba9c288 at term 2"}
	{"level":"info","ts":"2024-01-16T03:49:34.479378Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d718283c8ba9c288 became leader at term 2"}
	{"level":"info","ts":"2024-01-16T03:49:34.479385Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: d718283c8ba9c288 elected leader d718283c8ba9c288 at term 2"}
	{"level":"info","ts":"2024-01-16T03:49:34.48425Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-16T03:49:34.488464Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"d718283c8ba9c288","local-member-attributes":"{Name:embed-certs-615980 ClientURLs:[https://192.168.72.159:2379]}","request-path":"/0/members/d718283c8ba9c288/attributes","cluster-id":"6f0e35e647fe17a2","publish-timeout":"7s"}
	{"level":"info","ts":"2024-01-16T03:49:34.488676Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-16T03:49:34.490097Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.159:2379"}
	{"level":"info","ts":"2024-01-16T03:49:34.490219Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f0e35e647fe17a2","local-member-id":"d718283c8ba9c288","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-16T03:49:34.490353Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-16T03:49:34.490416Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-16T03:49:34.490713Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-16T03:49:34.491728Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-01-16T03:49:34.512274Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-01-16T03:49:34.512368Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 03:59:01 up 14 min,  0 users,  load average: 0.11, 0.17, 0.16
	Linux embed-certs-615980 5.10.57 #1 SMP Thu Dec 28 22:04:21 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [633707032a417c16dd3e1b01a25542bfafa7810c357c32cd6ecbabd906f016f4] <==
	W0116 03:54:37.683189       1 handler_proxy.go:93] no RequestInfo found in the context
	E0116 03:54:37.683272       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0116 03:54:37.683284       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0116 03:54:37.683354       1 handler_proxy.go:93] no RequestInfo found in the context
	E0116 03:54:37.683497       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0116 03:54:37.684912       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0116 03:55:36.561716       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0116 03:55:37.683818       1 handler_proxy.go:93] no RequestInfo found in the context
	E0116 03:55:37.684004       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0116 03:55:37.684043       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0116 03:55:37.685295       1 handler_proxy.go:93] no RequestInfo found in the context
	E0116 03:55:37.685405       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0116 03:55:37.685432       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0116 03:56:36.561200       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0116 03:57:36.561578       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0116 03:57:37.685218       1 handler_proxy.go:93] no RequestInfo found in the context
	E0116 03:57:37.685376       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0116 03:57:37.685408       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0116 03:57:37.686454       1 handler_proxy.go:93] no RequestInfo found in the context
	E0116 03:57:37.686570       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0116 03:57:37.686582       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0116 03:58:36.561905       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	
	
	==> kube-controller-manager [a5ec49f0949e9c9671965ab44a04e11962756e944a0ae610596b3e8e8d214341] <==
	I0116 03:53:25.477641       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="142.101µs"
	E0116 03:53:52.174870       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0116 03:53:52.649399       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0116 03:54:22.182451       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0116 03:54:22.660847       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0116 03:54:52.194258       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0116 03:54:52.671223       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0116 03:55:22.201457       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0116 03:55:22.682398       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0116 03:55:51.475542       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="398.472µs"
	E0116 03:55:52.208957       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0116 03:55:52.697804       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0116 03:56:04.472819       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="250.698µs"
	E0116 03:56:22.215285       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0116 03:56:22.708909       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0116 03:56:52.221586       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0116 03:56:52.719383       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0116 03:57:22.227928       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0116 03:57:22.730424       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0116 03:57:52.234730       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0116 03:57:52.740056       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0116 03:58:22.241027       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0116 03:58:22.753422       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0116 03:58:52.247772       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0116 03:58:52.769160       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [e486c660ebfdacd38f1ded1e9f9da19c21269bfec4834fd31aaaf2b6fe8677ca] <==
	I0116 03:49:57.307049       1 server_others.go:69] "Using iptables proxy"
	I0116 03:49:57.330887       1 node.go:141] Successfully retrieved node IP: 192.168.72.159
	I0116 03:49:57.389095       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0116 03:49:57.389234       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0116 03:49:57.392488       1 server_others.go:152] "Using iptables Proxier"
	I0116 03:49:57.393098       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0116 03:49:57.393446       1 server.go:846] "Version info" version="v1.28.4"
	I0116 03:49:57.393734       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0116 03:49:57.395849       1 config.go:188] "Starting service config controller"
	I0116 03:49:57.396793       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0116 03:49:57.397065       1 config.go:97] "Starting endpoint slice config controller"
	I0116 03:49:57.397173       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0116 03:49:57.398738       1 config.go:315] "Starting node config controller"
	I0116 03:49:57.398780       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0116 03:49:57.497648       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0116 03:49:57.497714       1 shared_informer.go:318] Caches are synced for service config
	I0116 03:49:57.498828       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [994d09ee15ce2df74bf9fd5ab55ee26cac0ce20a59cd56abc045ed57a6b95028] <==
	W0116 03:49:37.657346       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0116 03:49:37.657444       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0116 03:49:37.736677       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0116 03:49:37.736843       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0116 03:49:37.738590       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0116 03:49:37.738647       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0116 03:49:37.758548       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0116 03:49:37.758614       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0116 03:49:37.857621       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0116 03:49:37.857758       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0116 03:49:37.895101       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0116 03:49:37.895307       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0116 03:49:37.943032       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0116 03:49:37.943205       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0116 03:49:37.972383       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0116 03:49:37.972482       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0116 03:49:37.999715       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0116 03:49:37.999826       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0116 03:49:38.068923       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0116 03:49:38.069017       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0116 03:49:38.126926       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0116 03:49:38.127040       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0116 03:49:38.140637       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0116 03:49:38.140737       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0116 03:49:39.598786       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Tue 2024-01-16 03:44:23 UTC, ends at Tue 2024-01-16 03:59:01 UTC. --
	Jan 16 03:56:17 embed-certs-615980 kubelet[3854]: E0116 03:56:17.450357    3854 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-fc7tx" podUID="14a38c13-7a9e-4548-9654-c568ede29e0f"
	Jan 16 03:56:31 embed-certs-615980 kubelet[3854]: E0116 03:56:31.450169    3854 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-fc7tx" podUID="14a38c13-7a9e-4548-9654-c568ede29e0f"
	Jan 16 03:56:40 embed-certs-615980 kubelet[3854]: E0116 03:56:40.526719    3854 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 16 03:56:40 embed-certs-615980 kubelet[3854]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 16 03:56:40 embed-certs-615980 kubelet[3854]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 16 03:56:40 embed-certs-615980 kubelet[3854]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 16 03:56:46 embed-certs-615980 kubelet[3854]: E0116 03:56:46.451503    3854 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-fc7tx" podUID="14a38c13-7a9e-4548-9654-c568ede29e0f"
	Jan 16 03:56:59 embed-certs-615980 kubelet[3854]: E0116 03:56:59.449455    3854 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-fc7tx" podUID="14a38c13-7a9e-4548-9654-c568ede29e0f"
	Jan 16 03:57:10 embed-certs-615980 kubelet[3854]: E0116 03:57:10.451027    3854 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-fc7tx" podUID="14a38c13-7a9e-4548-9654-c568ede29e0f"
	Jan 16 03:57:22 embed-certs-615980 kubelet[3854]: E0116 03:57:22.450733    3854 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-fc7tx" podUID="14a38c13-7a9e-4548-9654-c568ede29e0f"
	Jan 16 03:57:37 embed-certs-615980 kubelet[3854]: E0116 03:57:37.450792    3854 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-fc7tx" podUID="14a38c13-7a9e-4548-9654-c568ede29e0f"
	Jan 16 03:57:40 embed-certs-615980 kubelet[3854]: E0116 03:57:40.526433    3854 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 16 03:57:40 embed-certs-615980 kubelet[3854]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 16 03:57:40 embed-certs-615980 kubelet[3854]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 16 03:57:40 embed-certs-615980 kubelet[3854]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 16 03:57:49 embed-certs-615980 kubelet[3854]: E0116 03:57:49.450066    3854 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-fc7tx" podUID="14a38c13-7a9e-4548-9654-c568ede29e0f"
	Jan 16 03:58:02 embed-certs-615980 kubelet[3854]: E0116 03:58:02.451781    3854 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-fc7tx" podUID="14a38c13-7a9e-4548-9654-c568ede29e0f"
	Jan 16 03:58:13 embed-certs-615980 kubelet[3854]: E0116 03:58:13.450561    3854 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-fc7tx" podUID="14a38c13-7a9e-4548-9654-c568ede29e0f"
	Jan 16 03:58:28 embed-certs-615980 kubelet[3854]: E0116 03:58:28.449574    3854 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-fc7tx" podUID="14a38c13-7a9e-4548-9654-c568ede29e0f"
	Jan 16 03:58:40 embed-certs-615980 kubelet[3854]: E0116 03:58:40.450351    3854 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-fc7tx" podUID="14a38c13-7a9e-4548-9654-c568ede29e0f"
	Jan 16 03:58:40 embed-certs-615980 kubelet[3854]: E0116 03:58:40.526378    3854 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 16 03:58:40 embed-certs-615980 kubelet[3854]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 16 03:58:40 embed-certs-615980 kubelet[3854]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 16 03:58:40 embed-certs-615980 kubelet[3854]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 16 03:58:54 embed-certs-615980 kubelet[3854]: E0116 03:58:54.450645    3854 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-fc7tx" podUID="14a38c13-7a9e-4548-9654-c568ede29e0f"
	
	
	==> storage-provisioner [4cdf5e29ffaf0a16aac76782f118de80dd83ad4f7f8c86a00d36f2ca5059e03a] <==
	I0116 03:49:57.183870       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0116 03:49:57.212307       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0116 03:49:57.212411       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0116 03:49:57.228568       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0116 03:49:57.230108       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-615980_67d79c8d-fe8f-4708-af27-dd948672dc91!
	I0116 03:49:57.238842       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"96a2d4c1-0420-4551-81c2-61a9af9a83b8", APIVersion:"v1", ResourceVersion:"453", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-615980_67d79c8d-fe8f-4708-af27-dd948672dc91 became leader
	I0116 03:49:57.332323       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-615980_67d79c8d-fe8f-4708-af27-dd948672dc91!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-615980 -n embed-certs-615980
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-615980 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-fc7tx
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-615980 describe pod metrics-server-57f55c9bc5-fc7tx
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-615980 describe pod metrics-server-57f55c9bc5-fc7tx: exit status 1 (80.450755ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-fc7tx" not found

** /stderr **
helpers_test.go:279: kubectl --context embed-certs-615980 describe pod metrics-server-57f55c9bc5-fc7tx: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (543.53s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.48s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0116 03:51:49.160598  475478 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/functional-193417/client.crt: no such file or directory
E0116 03:52:19.245954  475478 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/addons-690916/client.crt: no such file or directory
E0116 03:53:42.295725  475478 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/addons-690916/client.crt: no such file or directory
E0116 03:54:18.183588  475478 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/ingress-addon-legacy-873808/client.crt: no such file or directory
E0116 03:56:49.161465  475478 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/functional-193417/client.crt: no such file or directory
E0116 03:57:19.246610  475478 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/addons-690916/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-696770 -n old-k8s-version-696770
start_stop_delete_test.go:274: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-01-16 04:00:04.448056011 +0000 UTC m=+5148.386278619
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
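The wait that times out here can be reproduced by hand against the same profile; a minimal sketch using kubectl instead of the test's Go client, with the context name taken from the log above (if the dashboard addon never deployed, the first command returns no resources, which would be consistent with the 9m0s timeout):

	# list any dashboard pods the test would have matched
	kubectl --context old-k8s-version-696770 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
	# wait for them to become Ready, mirroring the test's 9m window
	kubectl --context old-k8s-version-696770 -n kubernetes-dashboard wait --for=condition=Ready pod -l k8s-app=kubernetes-dashboard --timeout=9m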
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-696770 -n old-k8s-version-696770
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-696770 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-696770 logs -n 25: (1.777225879s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p no-preload-666547                                   | no-preload-666547            | jenkins | v1.32.0 | 16 Jan 24 03:33 UTC | 16 Jan 24 03:35 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| ssh     | cert-options-977008 ssh                                | cert-options-977008          | jenkins | v1.32.0 | 16 Jan 24 03:34 UTC | 16 Jan 24 03:34 UTC |
	|         | openssl x509 -text -noout -in                          |                              |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                  |                              |         |         |                     |                     |
	| ssh     | -p cert-options-977008 -- sudo                         | cert-options-977008          | jenkins | v1.32.0 | 16 Jan 24 03:34 UTC | 16 Jan 24 03:34 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                              |         |         |                     |                     |
	| delete  | -p cert-options-977008                                 | cert-options-977008          | jenkins | v1.32.0 | 16 Jan 24 03:34 UTC | 16 Jan 24 03:34 UTC |
	| start   | -p embed-certs-615980                                  | embed-certs-615980           | jenkins | v1.32.0 | 16 Jan 24 03:34 UTC | 16 Jan 24 03:35 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-690771                              | cert-expiration-690771       | jenkins | v1.32.0 | 16 Jan 24 03:35 UTC | 16 Jan 24 03:36 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-615980            | embed-certs-615980           | jenkins | v1.32.0 | 16 Jan 24 03:35 UTC | 16 Jan 24 03:35 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-615980                                  | embed-certs-615980           | jenkins | v1.32.0 | 16 Jan 24 03:35 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-666547             | no-preload-666547            | jenkins | v1.32.0 | 16 Jan 24 03:35 UTC | 16 Jan 24 03:35 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-666547                                   | no-preload-666547            | jenkins | v1.32.0 | 16 Jan 24 03:35 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-696770        | old-k8s-version-696770       | jenkins | v1.32.0 | 16 Jan 24 03:36 UTC | 16 Jan 24 03:36 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-696770                              | old-k8s-version-696770       | jenkins | v1.32.0 | 16 Jan 24 03:36 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-690771                              | cert-expiration-690771       | jenkins | v1.32.0 | 16 Jan 24 03:36 UTC | 16 Jan 24 03:36 UTC |
	| delete  | -p                                                     | disable-driver-mounts-673948 | jenkins | v1.32.0 | 16 Jan 24 03:36 UTC | 16 Jan 24 03:36 UTC |
	|         | disable-driver-mounts-673948                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-434445 | jenkins | v1.32.0 | 16 Jan 24 03:36 UTC | 16 Jan 24 03:37 UTC |
	|         | default-k8s-diff-port-434445                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-434445  | default-k8s-diff-port-434445 | jenkins | v1.32.0 | 16 Jan 24 03:37 UTC | 16 Jan 24 03:37 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-434445 | jenkins | v1.32.0 | 16 Jan 24 03:37 UTC |                     |
	|         | default-k8s-diff-port-434445                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-615980                 | embed-certs-615980           | jenkins | v1.32.0 | 16 Jan 24 03:38 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-666547                  | no-preload-666547            | jenkins | v1.32.0 | 16 Jan 24 03:38 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-615980                                  | embed-certs-615980           | jenkins | v1.32.0 | 16 Jan 24 03:38 UTC | 16 Jan 24 03:49 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| start   | -p no-preload-666547                                   | no-preload-666547            | jenkins | v1.32.0 | 16 Jan 24 03:38 UTC | 16 Jan 24 03:48 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-696770             | old-k8s-version-696770       | jenkins | v1.32.0 | 16 Jan 24 03:38 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-696770                              | old-k8s-version-696770       | jenkins | v1.32.0 | 16 Jan 24 03:38 UTC | 16 Jan 24 03:51 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-434445       | default-k8s-diff-port-434445 | jenkins | v1.32.0 | 16 Jan 24 03:40 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-434445 | jenkins | v1.32.0 | 16 Jan 24 03:40 UTC | 16 Jan 24 03:49 UTC |
	|         | default-k8s-diff-port-434445                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/16 03:40:16
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0116 03:40:16.605622  507889 out.go:296] Setting OutFile to fd 1 ...
	I0116 03:40:16.605883  507889 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 03:40:16.605892  507889 out.go:309] Setting ErrFile to fd 2...
	I0116 03:40:16.605897  507889 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 03:40:16.606102  507889 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17965-468241/.minikube/bin
	I0116 03:40:16.606721  507889 out.go:303] Setting JSON to false
	I0116 03:40:16.607781  507889 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":15769,"bootTime":1705360648,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1048-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0116 03:40:16.607865  507889 start.go:138] virtualization: kvm guest
	I0116 03:40:16.610269  507889 out.go:177] * [default-k8s-diff-port-434445] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0116 03:40:16.611862  507889 out.go:177]   - MINIKUBE_LOCATION=17965
	I0116 03:40:16.611954  507889 notify.go:220] Checking for updates...
	I0116 03:40:16.613306  507889 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0116 03:40:16.615094  507889 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17965-468241/kubeconfig
	I0116 03:40:16.617044  507889 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17965-468241/.minikube
	I0116 03:40:16.618932  507889 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0116 03:40:16.621159  507889 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0116 03:40:16.623616  507889 config.go:182] Loaded profile config "default-k8s-diff-port-434445": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 03:40:16.624273  507889 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:40:16.624363  507889 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:40:16.640065  507889 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45633
	I0116 03:40:16.640642  507889 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:40:16.641273  507889 main.go:141] libmachine: Using API Version  1
	I0116 03:40:16.641301  507889 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:40:16.641696  507889 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:40:16.641901  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .DriverName
	I0116 03:40:16.642227  507889 driver.go:392] Setting default libvirt URI to qemu:///system
	I0116 03:40:16.642599  507889 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:40:16.642684  507889 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:40:16.658198  507889 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41877
	I0116 03:40:16.658657  507889 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:40:16.659207  507889 main.go:141] libmachine: Using API Version  1
	I0116 03:40:16.659233  507889 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:40:16.659588  507889 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:40:16.659844  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .DriverName
	I0116 03:40:16.698770  507889 out.go:177] * Using the kvm2 driver based on existing profile
	I0116 03:40:16.700307  507889 start.go:298] selected driver: kvm2
	I0116 03:40:16.700330  507889 start.go:902] validating driver "kvm2" against &{Name:default-k8s-diff-port-434445 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-434445 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.50.236 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 03:40:16.700478  507889 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0116 03:40:16.701296  507889 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0116 03:40:16.701389  507889 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17965-468241/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0116 03:40:16.717988  507889 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0116 03:40:16.718426  507889 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0116 03:40:16.718515  507889 cni.go:84] Creating CNI manager for ""
	I0116 03:40:16.718532  507889 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 03:40:16.718547  507889 start_flags.go:321] config:
	{Name:default-k8s-diff-port-434445 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-434445 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.50.236 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 03:40:16.718765  507889 iso.go:125] acquiring lock: {Name:mk2f3231f0eeeb23816ac363851489181398d1c6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0116 03:40:16.721292  507889 out.go:177] * Starting control plane node default-k8s-diff-port-434445 in cluster default-k8s-diff-port-434445
	I0116 03:40:16.722858  507889 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0116 03:40:16.722928  507889 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17965-468241/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0116 03:40:16.722942  507889 cache.go:56] Caching tarball of preloaded images
	I0116 03:40:16.723044  507889 preload.go:174] Found /home/jenkins/minikube-integration/17965-468241/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0116 03:40:16.723057  507889 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0116 03:40:16.723243  507889 profile.go:148] Saving config to /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/default-k8s-diff-port-434445/config.json ...
	I0116 03:40:16.723502  507889 start.go:365] acquiring machines lock for default-k8s-diff-port-434445: {Name:mk901e5fae8c1a578d989b520053c709bdbdcf06 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0116 03:40:22.140399  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:40:25.212385  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:40:31.292386  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:40:34.364375  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:40:40.444398  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:40:43.516372  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:40:49.596388  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:40:52.668394  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:40:58.748342  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:41:01.820436  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:41:07.900338  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:41:10.972410  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:41:17.052384  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:41:20.124427  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:41:26.204371  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:41:29.276361  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:41:35.356391  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:41:38.428383  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:41:44.508324  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:41:47.580377  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:41:53.660360  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:41:56.732377  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:42:02.812345  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:42:05.884406  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:42:11.964398  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:42:15.036469  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:42:21.116391  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:42:24.188397  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:42:30.268400  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:42:33.340416  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:42:39.420405  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:42:42.492396  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:42:48.572396  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:42:51.644367  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:42:57.724419  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:43:00.796427  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:43:03.800669  507339 start.go:369] acquired machines lock for "no-preload-666547" in 4m33.073406767s
	I0116 03:43:03.800732  507339 start.go:96] Skipping create...Using existing machine configuration
	I0116 03:43:03.800744  507339 fix.go:54] fixHost starting: 
	I0116 03:43:03.801330  507339 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:43:03.801381  507339 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:43:03.817309  507339 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46189
	I0116 03:43:03.817841  507339 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:43:03.818376  507339 main.go:141] libmachine: Using API Version  1
	I0116 03:43:03.818403  507339 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:43:03.818801  507339 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:43:03.819049  507339 main.go:141] libmachine: (no-preload-666547) Calling .DriverName
	I0116 03:43:03.819206  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetState
	I0116 03:43:03.821006  507339 fix.go:102] recreateIfNeeded on no-preload-666547: state=Stopped err=<nil>
	I0116 03:43:03.821031  507339 main.go:141] libmachine: (no-preload-666547) Calling .DriverName
	W0116 03:43:03.821210  507339 fix.go:128] unexpected machine state, will restart: <nil>
	I0116 03:43:03.823341  507339 out.go:177] * Restarting existing kvm2 VM for "no-preload-666547" ...
	I0116 03:43:03.824887  507339 main.go:141] libmachine: (no-preload-666547) Calling .Start
	I0116 03:43:03.825070  507339 main.go:141] libmachine: (no-preload-666547) Ensuring networks are active...
	I0116 03:43:03.825806  507339 main.go:141] libmachine: (no-preload-666547) Ensuring network default is active
	I0116 03:43:03.826151  507339 main.go:141] libmachine: (no-preload-666547) Ensuring network mk-no-preload-666547 is active
	I0116 03:43:03.826549  507339 main.go:141] libmachine: (no-preload-666547) Getting domain xml...
	I0116 03:43:03.827209  507339 main.go:141] libmachine: (no-preload-666547) Creating domain...
	I0116 03:43:04.166757  507339 main.go:141] libmachine: (no-preload-666547) Waiting to get IP...
	I0116 03:43:04.167846  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:04.168294  507339 main.go:141] libmachine: (no-preload-666547) DBG | unable to find current IP address of domain no-preload-666547 in network mk-no-preload-666547
	I0116 03:43:04.168400  507339 main.go:141] libmachine: (no-preload-666547) DBG | I0116 03:43:04.168281  508330 retry.go:31] will retry after 236.684347ms: waiting for machine to come up
	I0116 03:43:04.407068  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:04.407590  507339 main.go:141] libmachine: (no-preload-666547) DBG | unable to find current IP address of domain no-preload-666547 in network mk-no-preload-666547
	I0116 03:43:04.407626  507339 main.go:141] libmachine: (no-preload-666547) DBG | I0116 03:43:04.407520  508330 retry.go:31] will retry after 273.512454ms: waiting for machine to come up
	I0116 03:43:04.683173  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:04.683724  507339 main.go:141] libmachine: (no-preload-666547) DBG | unable to find current IP address of domain no-preload-666547 in network mk-no-preload-666547
	I0116 03:43:04.683759  507339 main.go:141] libmachine: (no-preload-666547) DBG | I0116 03:43:04.683652  508330 retry.go:31] will retry after 404.396132ms: waiting for machine to come up
	I0116 03:43:05.089306  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:05.089659  507339 main.go:141] libmachine: (no-preload-666547) DBG | unable to find current IP address of domain no-preload-666547 in network mk-no-preload-666547
	I0116 03:43:05.089687  507339 main.go:141] libmachine: (no-preload-666547) DBG | I0116 03:43:05.089612  508330 retry.go:31] will retry after 373.291662ms: waiting for machine to come up
	I0116 03:43:05.464216  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:05.464745  507339 main.go:141] libmachine: (no-preload-666547) DBG | unable to find current IP address of domain no-preload-666547 in network mk-no-preload-666547
	I0116 03:43:05.464772  507339 main.go:141] libmachine: (no-preload-666547) DBG | I0116 03:43:05.464696  508330 retry.go:31] will retry after 509.048348ms: waiting for machine to come up
	I0116 03:43:03.798483  507257 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0116 03:43:03.798553  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHHostname
	I0116 03:43:03.800507  507257 machine.go:91] provisioned docker machine in 4m37.39429533s
	I0116 03:43:03.800559  507257 fix.go:56] fixHost completed within 4m37.41769564s
	I0116 03:43:03.800568  507257 start.go:83] releasing machines lock for "embed-certs-615980", held for 4m37.417718822s
	W0116 03:43:03.800599  507257 start.go:694] error starting host: provision: host is not running
	W0116 03:43:03.800747  507257 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0116 03:43:03.800759  507257 start.go:709] Will try again in 5 seconds ...
	I0116 03:43:05.975369  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:05.975831  507339 main.go:141] libmachine: (no-preload-666547) DBG | unable to find current IP address of domain no-preload-666547 in network mk-no-preload-666547
	I0116 03:43:05.975864  507339 main.go:141] libmachine: (no-preload-666547) DBG | I0116 03:43:05.975776  508330 retry.go:31] will retry after 631.077965ms: waiting for machine to come up
	I0116 03:43:06.608722  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:06.609133  507339 main.go:141] libmachine: (no-preload-666547) DBG | unable to find current IP address of domain no-preload-666547 in network mk-no-preload-666547
	I0116 03:43:06.609162  507339 main.go:141] libmachine: (no-preload-666547) DBG | I0116 03:43:06.609074  508330 retry.go:31] will retry after 1.047586363s: waiting for machine to come up
	I0116 03:43:07.658264  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:07.658645  507339 main.go:141] libmachine: (no-preload-666547) DBG | unable to find current IP address of domain no-preload-666547 in network mk-no-preload-666547
	I0116 03:43:07.658696  507339 main.go:141] libmachine: (no-preload-666547) DBG | I0116 03:43:07.658591  508330 retry.go:31] will retry after 1.038644854s: waiting for machine to come up
	I0116 03:43:08.698946  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:08.699384  507339 main.go:141] libmachine: (no-preload-666547) DBG | unable to find current IP address of domain no-preload-666547 in network mk-no-preload-666547
	I0116 03:43:08.699411  507339 main.go:141] libmachine: (no-preload-666547) DBG | I0116 03:43:08.699347  508330 retry.go:31] will retry after 1.362032973s: waiting for machine to come up
	I0116 03:43:10.063269  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:10.063764  507339 main.go:141] libmachine: (no-preload-666547) DBG | unable to find current IP address of domain no-preload-666547 in network mk-no-preload-666547
	I0116 03:43:10.063792  507339 main.go:141] libmachine: (no-preload-666547) DBG | I0116 03:43:10.063714  508330 retry.go:31] will retry after 1.432317286s: waiting for machine to come up
	I0116 03:43:08.802803  507257 start.go:365] acquiring machines lock for embed-certs-615980: {Name:mk901e5fae8c1a578d989b520053c709bdbdcf06 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0116 03:43:11.498235  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:11.498714  507339 main.go:141] libmachine: (no-preload-666547) DBG | unable to find current IP address of domain no-preload-666547 in network mk-no-preload-666547
	I0116 03:43:11.498748  507339 main.go:141] libmachine: (no-preload-666547) DBG | I0116 03:43:11.498650  508330 retry.go:31] will retry after 2.490630326s: waiting for machine to come up
	I0116 03:43:13.991256  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:13.991717  507339 main.go:141] libmachine: (no-preload-666547) DBG | unable to find current IP address of domain no-preload-666547 in network mk-no-preload-666547
	I0116 03:43:13.991752  507339 main.go:141] libmachine: (no-preload-666547) DBG | I0116 03:43:13.991662  508330 retry.go:31] will retry after 3.569049736s: waiting for machine to come up
	I0116 03:43:17.565524  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:17.565893  507339 main.go:141] libmachine: (no-preload-666547) DBG | unable to find current IP address of domain no-preload-666547 in network mk-no-preload-666547
	I0116 03:43:17.565916  507339 main.go:141] libmachine: (no-preload-666547) DBG | I0116 03:43:17.565850  508330 retry.go:31] will retry after 2.875259098s: waiting for machine to come up
	I0116 03:43:20.443998  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:20.444472  507339 main.go:141] libmachine: (no-preload-666547) DBG | unable to find current IP address of domain no-preload-666547 in network mk-no-preload-666547
	I0116 03:43:20.444495  507339 main.go:141] libmachine: (no-preload-666547) DBG | I0116 03:43:20.444438  508330 retry.go:31] will retry after 4.319647454s: waiting for machine to come up
	I0116 03:43:24.765311  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:24.765836  507339 main.go:141] libmachine: (no-preload-666547) Found IP for machine: 192.168.39.103
	I0116 03:43:24.765862  507339 main.go:141] libmachine: (no-preload-666547) Reserving static IP address...
	I0116 03:43:24.765879  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has current primary IP address 192.168.39.103 and MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:24.766413  507339 main.go:141] libmachine: (no-preload-666547) DBG | found host DHCP lease matching {name: "no-preload-666547", mac: "52:54:00:4e:5f:03", ip: "192.168.39.103"} in network mk-no-preload-666547: {Iface:virbr2 ExpiryTime:2024-01-16 04:43:15 +0000 UTC Type:0 Mac:52:54:00:4e:5f:03 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:no-preload-666547 Clientid:01:52:54:00:4e:5f:03}
	I0116 03:43:24.766543  507339 main.go:141] libmachine: (no-preload-666547) Reserved static IP address: 192.168.39.103
	I0116 03:43:24.766575  507339 main.go:141] libmachine: (no-preload-666547) DBG | skip adding static IP to network mk-no-preload-666547 - found existing host DHCP lease matching {name: "no-preload-666547", mac: "52:54:00:4e:5f:03", ip: "192.168.39.103"}
	I0116 03:43:24.766593  507339 main.go:141] libmachine: (no-preload-666547) DBG | Getting to WaitForSSH function...
	I0116 03:43:24.766607  507339 main.go:141] libmachine: (no-preload-666547) Waiting for SSH to be available...
	I0116 03:43:24.768801  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:24.769145  507339 main.go:141] libmachine: (no-preload-666547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:5f:03", ip: ""} in network mk-no-preload-666547: {Iface:virbr2 ExpiryTime:2024-01-16 04:43:15 +0000 UTC Type:0 Mac:52:54:00:4e:5f:03 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:no-preload-666547 Clientid:01:52:54:00:4e:5f:03}
	I0116 03:43:24.769180  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined IP address 192.168.39.103 and MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:24.769392  507339 main.go:141] libmachine: (no-preload-666547) DBG | Using SSH client type: external
	I0116 03:43:24.769446  507339 main.go:141] libmachine: (no-preload-666547) DBG | Using SSH private key: /home/jenkins/minikube-integration/17965-468241/.minikube/machines/no-preload-666547/id_rsa (-rw-------)
	I0116 03:43:24.769490  507339 main.go:141] libmachine: (no-preload-666547) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.103 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17965-468241/.minikube/machines/no-preload-666547/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0116 03:43:24.769539  507339 main.go:141] libmachine: (no-preload-666547) DBG | About to run SSH command:
	I0116 03:43:24.769557  507339 main.go:141] libmachine: (no-preload-666547) DBG | exit 0
	I0116 03:43:24.860928  507339 main.go:141] libmachine: (no-preload-666547) DBG | SSH cmd err, output: <nil>: 
	I0116 03:43:24.861324  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetConfigRaw
	I0116 03:43:24.862217  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetIP
	I0116 03:43:24.865100  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:24.865468  507339 main.go:141] libmachine: (no-preload-666547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:5f:03", ip: ""} in network mk-no-preload-666547: {Iface:virbr2 ExpiryTime:2024-01-16 04:43:15 +0000 UTC Type:0 Mac:52:54:00:4e:5f:03 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:no-preload-666547 Clientid:01:52:54:00:4e:5f:03}
	I0116 03:43:24.865503  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined IP address 192.168.39.103 and MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:24.865804  507339 profile.go:148] Saving config to /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/no-preload-666547/config.json ...
	I0116 03:43:24.866064  507339 machine.go:88] provisioning docker machine ...
	I0116 03:43:24.866091  507339 main.go:141] libmachine: (no-preload-666547) Calling .DriverName
	I0116 03:43:24.866374  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetMachineName
	I0116 03:43:24.866590  507339 buildroot.go:166] provisioning hostname "no-preload-666547"
	I0116 03:43:24.866613  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetMachineName
	I0116 03:43:24.866795  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHHostname
	I0116 03:43:24.869231  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:24.869587  507339 main.go:141] libmachine: (no-preload-666547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:5f:03", ip: ""} in network mk-no-preload-666547: {Iface:virbr2 ExpiryTime:2024-01-16 04:43:15 +0000 UTC Type:0 Mac:52:54:00:4e:5f:03 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:no-preload-666547 Clientid:01:52:54:00:4e:5f:03}
	I0116 03:43:24.869623  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined IP address 192.168.39.103 and MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:24.869778  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHPort
	I0116 03:43:24.870002  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHKeyPath
	I0116 03:43:24.870168  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHKeyPath
	I0116 03:43:24.870304  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHUsername
	I0116 03:43:24.870455  507339 main.go:141] libmachine: Using SSH client type: native
	I0116 03:43:24.870929  507339 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.103 22 <nil> <nil>}
	I0116 03:43:24.870949  507339 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-666547 && echo "no-preload-666547" | sudo tee /etc/hostname
	I0116 03:43:25.005390  507339 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-666547
	
	I0116 03:43:25.005425  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHHostname
	I0116 03:43:25.008410  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:25.008801  507339 main.go:141] libmachine: (no-preload-666547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:5f:03", ip: ""} in network mk-no-preload-666547: {Iface:virbr2 ExpiryTime:2024-01-16 04:43:15 +0000 UTC Type:0 Mac:52:54:00:4e:5f:03 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:no-preload-666547 Clientid:01:52:54:00:4e:5f:03}
	I0116 03:43:25.008836  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined IP address 192.168.39.103 and MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:25.009007  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHPort
	I0116 03:43:25.009269  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHKeyPath
	I0116 03:43:25.009432  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHKeyPath
	I0116 03:43:25.009561  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHUsername
	I0116 03:43:25.009722  507339 main.go:141] libmachine: Using SSH client type: native
	I0116 03:43:25.010051  507339 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.103 22 <nil> <nil>}
	I0116 03:43:25.010071  507339 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-666547' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-666547/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-666547' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0116 03:43:25.142889  507339 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0116 03:43:25.142928  507339 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17965-468241/.minikube CaCertPath:/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17965-468241/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17965-468241/.minikube}
	I0116 03:43:25.142950  507339 buildroot.go:174] setting up certificates
	I0116 03:43:25.142963  507339 provision.go:83] configureAuth start
	I0116 03:43:25.142973  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetMachineName
	I0116 03:43:25.143294  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetIP
	I0116 03:43:25.146355  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:25.146746  507339 main.go:141] libmachine: (no-preload-666547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:5f:03", ip: ""} in network mk-no-preload-666547: {Iface:virbr2 ExpiryTime:2024-01-16 04:43:15 +0000 UTC Type:0 Mac:52:54:00:4e:5f:03 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:no-preload-666547 Clientid:01:52:54:00:4e:5f:03}
	I0116 03:43:25.146767  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined IP address 192.168.39.103 and MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:25.147063  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHHostname
	I0116 03:43:25.149867  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:25.150231  507339 main.go:141] libmachine: (no-preload-666547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:5f:03", ip: ""} in network mk-no-preload-666547: {Iface:virbr2 ExpiryTime:2024-01-16 04:43:15 +0000 UTC Type:0 Mac:52:54:00:4e:5f:03 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:no-preload-666547 Clientid:01:52:54:00:4e:5f:03}
	I0116 03:43:25.150260  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined IP address 192.168.39.103 and MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:25.150448  507339 provision.go:138] copyHostCerts
	I0116 03:43:25.150531  507339 exec_runner.go:144] found /home/jenkins/minikube-integration/17965-468241/.minikube/cert.pem, removing ...
	I0116 03:43:25.150543  507339 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17965-468241/.minikube/cert.pem
	I0116 03:43:25.150618  507339 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17965-468241/.minikube/cert.pem (1123 bytes)
	I0116 03:43:25.150719  507339 exec_runner.go:144] found /home/jenkins/minikube-integration/17965-468241/.minikube/key.pem, removing ...
	I0116 03:43:25.150729  507339 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17965-468241/.minikube/key.pem
	I0116 03:43:25.150755  507339 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17965-468241/.minikube/key.pem (1679 bytes)
	I0116 03:43:25.150815  507339 exec_runner.go:144] found /home/jenkins/minikube-integration/17965-468241/.minikube/ca.pem, removing ...
	I0116 03:43:25.150823  507339 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17965-468241/.minikube/ca.pem
	I0116 03:43:25.150843  507339 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17965-468241/.minikube/ca.pem (1078 bytes)
	I0116 03:43:25.150888  507339 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17965-468241/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca-key.pem org=jenkins.no-preload-666547 san=[192.168.39.103 192.168.39.103 localhost 127.0.0.1 minikube no-preload-666547]
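The SAN list above (the node IP, localhost, 127.0.0.1, minikube, and the node name) is what ends up in machines/server.pem. A rough openssl equivalent of this signing step against the same CA files would be (a sketch only; minikube generates the certificate in-process rather than shelling out):

	openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem -out server.csr -subj "/O=jenkins.no-preload-666547"
	openssl x509 -req -in server.csr -days 365 -CA certs/ca.pem -CAkey certs/ca-key.pem -CAcreateserial -out server.pem \
	  -extfile <(printf "subjectAltName=IP:192.168.39.103,DNS:localhost,IP:127.0.0.1,DNS:minikube,DNS:no-preload-666547")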
	I0116 03:43:25.417982  507339 provision.go:172] copyRemoteCerts
	I0116 03:43:25.418059  507339 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0116 03:43:25.418088  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHHostname
	I0116 03:43:25.420908  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:25.421196  507339 main.go:141] libmachine: (no-preload-666547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:5f:03", ip: ""} in network mk-no-preload-666547: {Iface:virbr2 ExpiryTime:2024-01-16 04:43:15 +0000 UTC Type:0 Mac:52:54:00:4e:5f:03 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:no-preload-666547 Clientid:01:52:54:00:4e:5f:03}
	I0116 03:43:25.421235  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined IP address 192.168.39.103 and MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:25.421372  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHPort
	I0116 03:43:25.421609  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHKeyPath
	I0116 03:43:25.421782  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHUsername
	I0116 03:43:25.421952  507339 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/no-preload-666547/id_rsa Username:docker}
	I0116 03:43:25.509876  507339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0116 03:43:25.534885  507339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0116 03:43:25.562593  507339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0116 03:43:25.590106  507339 provision.go:86] duration metric: configureAuth took 447.124389ms
	I0116 03:43:25.590145  507339 buildroot.go:189] setting minikube options for container-runtime
	I0116 03:43:25.590386  507339 config.go:182] Loaded profile config "no-preload-666547": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0116 03:43:25.590475  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHHostname
	I0116 03:43:25.593695  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:25.594125  507339 main.go:141] libmachine: (no-preload-666547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:5f:03", ip: ""} in network mk-no-preload-666547: {Iface:virbr2 ExpiryTime:2024-01-16 04:43:15 +0000 UTC Type:0 Mac:52:54:00:4e:5f:03 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:no-preload-666547 Clientid:01:52:54:00:4e:5f:03}
	I0116 03:43:25.594182  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined IP address 192.168.39.103 and MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:25.594407  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHPort
	I0116 03:43:25.594661  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHKeyPath
	I0116 03:43:25.594851  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHKeyPath
	I0116 03:43:25.595124  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHUsername
	I0116 03:43:25.595362  507339 main.go:141] libmachine: Using SSH client type: native
	I0116 03:43:25.595735  507339 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.103 22 <nil> <nil>}
	I0116 03:43:25.595753  507339 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0116 03:43:26.177541  507510 start.go:369] acquired machines lock for "old-k8s-version-696770" in 4m36.503560035s
	I0116 03:43:26.177612  507510 start.go:96] Skipping create...Using existing machine configuration
	I0116 03:43:26.177621  507510 fix.go:54] fixHost starting: 
	I0116 03:43:26.178073  507510 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:43:26.178115  507510 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:43:26.194930  507510 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35119
	I0116 03:43:26.195420  507510 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:43:26.195898  507510 main.go:141] libmachine: Using API Version  1
	I0116 03:43:26.195925  507510 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:43:26.196303  507510 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:43:26.196517  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .DriverName
	I0116 03:43:26.196797  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetState
	I0116 03:43:26.198728  507510 fix.go:102] recreateIfNeeded on old-k8s-version-696770: state=Stopped err=<nil>
	I0116 03:43:26.198759  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .DriverName
	W0116 03:43:26.198959  507510 fix.go:128] unexpected machine state, will restart: <nil>
	I0116 03:43:26.201929  507510 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-696770" ...
	I0116 03:43:25.916953  507339 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0116 03:43:25.916987  507339 machine.go:91] provisioned docker machine in 1.05090319s
	I0116 03:43:25.917013  507339 start.go:300] post-start starting for "no-preload-666547" (driver="kvm2")
	I0116 03:43:25.917045  507339 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0116 03:43:25.917070  507339 main.go:141] libmachine: (no-preload-666547) Calling .DriverName
	I0116 03:43:25.917472  507339 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0116 03:43:25.917510  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHHostname
	I0116 03:43:25.920700  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:25.921097  507339 main.go:141] libmachine: (no-preload-666547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:5f:03", ip: ""} in network mk-no-preload-666547: {Iface:virbr2 ExpiryTime:2024-01-16 04:43:15 +0000 UTC Type:0 Mac:52:54:00:4e:5f:03 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:no-preload-666547 Clientid:01:52:54:00:4e:5f:03}
	I0116 03:43:25.921132  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined IP address 192.168.39.103 and MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:25.921386  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHPort
	I0116 03:43:25.921663  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHKeyPath
	I0116 03:43:25.921877  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHUsername
	I0116 03:43:25.922086  507339 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/no-preload-666547/id_rsa Username:docker}
	I0116 03:43:26.011987  507339 ssh_runner.go:195] Run: cat /etc/os-release
	I0116 03:43:26.016777  507339 info.go:137] Remote host: Buildroot 2021.02.12
	I0116 03:43:26.016813  507339 filesync.go:126] Scanning /home/jenkins/minikube-integration/17965-468241/.minikube/addons for local assets ...
	I0116 03:43:26.016889  507339 filesync.go:126] Scanning /home/jenkins/minikube-integration/17965-468241/.minikube/files for local assets ...
	I0116 03:43:26.016985  507339 filesync.go:149] local asset: /home/jenkins/minikube-integration/17965-468241/.minikube/files/etc/ssl/certs/4754782.pem -> 4754782.pem in /etc/ssl/certs
	I0116 03:43:26.017109  507339 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0116 03:43:26.027303  507339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/files/etc/ssl/certs/4754782.pem --> /etc/ssl/certs/4754782.pem (1708 bytes)
	I0116 03:43:26.051806  507339 start.go:303] post-start completed in 134.758948ms
	I0116 03:43:26.051849  507339 fix.go:56] fixHost completed within 22.25110408s
	I0116 03:43:26.051881  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHHostname
	I0116 03:43:26.055165  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:26.055568  507339 main.go:141] libmachine: (no-preload-666547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:5f:03", ip: ""} in network mk-no-preload-666547: {Iface:virbr2 ExpiryTime:2024-01-16 04:43:15 +0000 UTC Type:0 Mac:52:54:00:4e:5f:03 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:no-preload-666547 Clientid:01:52:54:00:4e:5f:03}
	I0116 03:43:26.055605  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined IP address 192.168.39.103 and MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:26.055763  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHPort
	I0116 03:43:26.055983  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHKeyPath
	I0116 03:43:26.056222  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHKeyPath
	I0116 03:43:26.056407  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHUsername
	I0116 03:43:26.056579  507339 main.go:141] libmachine: Using SSH client type: native
	I0116 03:43:26.056930  507339 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.103 22 <nil> <nil>}
	I0116 03:43:26.056948  507339 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0116 03:43:26.177329  507339 main.go:141] libmachine: SSH cmd err, output: <nil>: 1705376606.122912048
	
	I0116 03:43:26.177360  507339 fix.go:206] guest clock: 1705376606.122912048
	I0116 03:43:26.177367  507339 fix.go:219] Guest: 2024-01-16 03:43:26.122912048 +0000 UTC Remote: 2024-01-16 03:43:26.051855053 +0000 UTC m=+295.486361610 (delta=71.056995ms)
	I0116 03:43:26.177424  507339 fix.go:190] guest clock delta is within tolerance: 71.056995ms
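The clock check above samples the guest clock over SSH (the "date +%s.%N" command just run) and compares it with the host clock recorded when the command returns; the ~71ms delta is well inside the tolerance, so no resync is attempted. A standalone sketch of the same comparison (hypothetical key path and arithmetic):

	host=$(date +%s.%N)
	guest=$(ssh -i ~/.minikube/machines/no-preload-666547/id_rsa docker@192.168.39.103 'date +%s.%N')
	awk -v h="$host" -v g="$guest" 'BEGIN { d = h - g; if (d < 0) d = -d; printf "clock delta: %.3fs\n", d }'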
	I0116 03:43:26.177430  507339 start.go:83] releasing machines lock for "no-preload-666547", held for 22.376720152s
	I0116 03:43:26.177461  507339 main.go:141] libmachine: (no-preload-666547) Calling .DriverName
	I0116 03:43:26.177761  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetIP
	I0116 03:43:26.180783  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:26.181087  507339 main.go:141] libmachine: (no-preload-666547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:5f:03", ip: ""} in network mk-no-preload-666547: {Iface:virbr2 ExpiryTime:2024-01-16 04:43:15 +0000 UTC Type:0 Mac:52:54:00:4e:5f:03 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:no-preload-666547 Clientid:01:52:54:00:4e:5f:03}
	I0116 03:43:26.181117  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined IP address 192.168.39.103 and MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:26.181281  507339 main.go:141] libmachine: (no-preload-666547) Calling .DriverName
	I0116 03:43:26.181876  507339 main.go:141] libmachine: (no-preload-666547) Calling .DriverName
	I0116 03:43:26.182068  507339 main.go:141] libmachine: (no-preload-666547) Calling .DriverName
	I0116 03:43:26.182154  507339 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0116 03:43:26.182203  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHHostname
	I0116 03:43:26.182337  507339 ssh_runner.go:195] Run: cat /version.json
	I0116 03:43:26.182366  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHHostname
	I0116 03:43:26.185253  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:26.185403  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:26.185625  507339 main.go:141] libmachine: (no-preload-666547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:5f:03", ip: ""} in network mk-no-preload-666547: {Iface:virbr2 ExpiryTime:2024-01-16 04:43:15 +0000 UTC Type:0 Mac:52:54:00:4e:5f:03 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:no-preload-666547 Clientid:01:52:54:00:4e:5f:03}
	I0116 03:43:26.185655  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined IP address 192.168.39.103 and MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:26.185807  507339 main.go:141] libmachine: (no-preload-666547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:5f:03", ip: ""} in network mk-no-preload-666547: {Iface:virbr2 ExpiryTime:2024-01-16 04:43:15 +0000 UTC Type:0 Mac:52:54:00:4e:5f:03 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:no-preload-666547 Clientid:01:52:54:00:4e:5f:03}
	I0116 03:43:26.185816  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHPort
	I0116 03:43:26.185855  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined IP address 192.168.39.103 and MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:26.185966  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHPort
	I0116 03:43:26.186041  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHKeyPath
	I0116 03:43:26.186137  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHKeyPath
	I0116 03:43:26.186220  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHUsername
	I0116 03:43:26.186306  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHUsername
	I0116 03:43:26.186383  507339 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/no-preload-666547/id_rsa Username:docker}
	I0116 03:43:26.186428  507339 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/no-preload-666547/id_rsa Username:docker}
	I0116 03:43:26.312441  507339 ssh_runner.go:195] Run: systemctl --version
	I0116 03:43:26.319016  507339 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0116 03:43:26.469427  507339 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0116 03:43:26.475759  507339 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0116 03:43:26.475896  507339 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0116 03:43:26.491920  507339 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0116 03:43:26.491952  507339 start.go:475] detecting cgroup driver to use...
	I0116 03:43:26.492112  507339 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0116 03:43:26.508122  507339 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0116 03:43:26.523664  507339 docker.go:217] disabling cri-docker service (if available) ...
	I0116 03:43:26.523754  507339 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0116 03:43:26.540173  507339 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0116 03:43:26.557370  507339 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0116 03:43:26.685134  507339 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0116 03:43:26.806555  507339 docker.go:233] disabling docker service ...
	I0116 03:43:26.806640  507339 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0116 03:43:26.821910  507339 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0116 03:43:26.836619  507339 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0116 03:43:26.950601  507339 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0116 03:43:27.077586  507339 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0116 03:43:27.091892  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0116 03:43:27.111772  507339 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0116 03:43:27.111856  507339 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 03:43:27.122183  507339 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0116 03:43:27.122261  507339 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 03:43:27.132861  507339 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 03:43:27.144003  507339 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
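After the three sed edits above, the relevant keys in /etc/crio/crio.conf.d/02-crio.conf should read as follows (hypothetical grep of the result; the values are exactly the ones substituted above):

	grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
	# pause_image = "registry.k8s.io/pause:3.9"
	# cgroup_manager = "cgroupfs"
	# conmon_cgroup = "pod"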
	I0116 03:43:27.154747  507339 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0116 03:43:27.166236  507339 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0116 03:43:27.175337  507339 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0116 03:43:27.175410  507339 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0116 03:43:27.190891  507339 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
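The sequence above is the usual bridge-netfilter fallback: the sysctl key only exists once the br_netfilter module is loaded, so the failed stat is tolerated, the module is probed, and IPv4 forwarding is switched on. Done by hand it amounts to (hypothetical):

	sudo modprobe br_netfilter
	sysctl net.bridge.bridge-nf-call-iptables   # resolvable now that the module is loaded
	echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward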
	I0116 03:43:27.201216  507339 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0116 03:43:27.322701  507339 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0116 03:43:27.504197  507339 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0116 03:43:27.504292  507339 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0116 03:43:27.509879  507339 start.go:543] Will wait 60s for crictl version
	I0116 03:43:27.509972  507339 ssh_runner.go:195] Run: which crictl
	I0116 03:43:27.514555  507339 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0116 03:43:27.556338  507339 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0116 03:43:27.556444  507339 ssh_runner.go:195] Run: crio --version
	I0116 03:43:27.615814  507339 ssh_runner.go:195] Run: crio --version
	I0116 03:43:27.666262  507339 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.24.1 ...
	I0116 03:43:26.203694  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .Start
	I0116 03:43:26.203950  507510 main.go:141] libmachine: (old-k8s-version-696770) Ensuring networks are active...
	I0116 03:43:26.204831  507510 main.go:141] libmachine: (old-k8s-version-696770) Ensuring network default is active
	I0116 03:43:26.205251  507510 main.go:141] libmachine: (old-k8s-version-696770) Ensuring network mk-old-k8s-version-696770 is active
	I0116 03:43:26.205763  507510 main.go:141] libmachine: (old-k8s-version-696770) Getting domain xml...
	I0116 03:43:26.206485  507510 main.go:141] libmachine: (old-k8s-version-696770) Creating domain...
	I0116 03:43:26.558284  507510 main.go:141] libmachine: (old-k8s-version-696770) Waiting to get IP...
	I0116 03:43:26.559270  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:26.559701  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | unable to find current IP address of domain old-k8s-version-696770 in network mk-old-k8s-version-696770
	I0116 03:43:26.559793  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | I0116 03:43:26.559692  508427 retry.go:31] will retry after 243.799089ms: waiting for machine to come up
	I0116 03:43:26.805411  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:26.805914  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | unable to find current IP address of domain old-k8s-version-696770 in network mk-old-k8s-version-696770
	I0116 03:43:26.805948  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | I0116 03:43:26.805846  508427 retry.go:31] will retry after 346.727587ms: waiting for machine to come up
	I0116 03:43:27.154528  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:27.155074  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | unable to find current IP address of domain old-k8s-version-696770 in network mk-old-k8s-version-696770
	I0116 03:43:27.155107  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | I0116 03:43:27.155023  508427 retry.go:31] will retry after 357.633471ms: waiting for machine to come up
	I0116 03:43:27.514870  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:27.515490  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | unable to find current IP address of domain old-k8s-version-696770 in network mk-old-k8s-version-696770
	I0116 03:43:27.515517  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | I0116 03:43:27.515452  508427 retry.go:31] will retry after 582.001218ms: waiting for machine to come up
	I0116 03:43:28.099271  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:28.099783  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | unable to find current IP address of domain old-k8s-version-696770 in network mk-old-k8s-version-696770
	I0116 03:43:28.099817  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | I0116 03:43:28.099735  508427 retry.go:31] will retry after 747.661188ms: waiting for machine to come up
	I0116 03:43:28.849318  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:28.849836  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | unable to find current IP address of domain old-k8s-version-696770 in network mk-old-k8s-version-696770
	I0116 03:43:28.849872  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | I0116 03:43:28.849799  508427 retry.go:31] will retry after 953.610286ms: waiting for machine to come up
	I0116 03:43:27.667889  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetIP
	I0116 03:43:27.671385  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:27.671804  507339 main.go:141] libmachine: (no-preload-666547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:5f:03", ip: ""} in network mk-no-preload-666547: {Iface:virbr2 ExpiryTime:2024-01-16 04:43:15 +0000 UTC Type:0 Mac:52:54:00:4e:5f:03 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:no-preload-666547 Clientid:01:52:54:00:4e:5f:03}
	I0116 03:43:27.671840  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined IP address 192.168.39.103 and MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:27.672113  507339 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0116 03:43:27.676693  507339 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 03:43:27.690701  507339 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0116 03:43:27.690748  507339 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 03:43:27.731189  507339 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I0116 03:43:27.731219  507339 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.2 registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 registry.k8s.io/kube-scheduler:v1.29.0-rc.2 registry.k8s.io/kube-proxy:v1.29.0-rc.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
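Since no preload tarball exists for v1.29.0-rc.2 (the "assuming images are not preloaded" line above), each of the eight images listed is checked against what the runtime already holds; a hypothetical way to see that list on the guest (jq assumed available):

	sudo crictl images --output json | jq -r '.images[].repoTags[]' | sort
	# every required tag missing here is loaded from the host-side image cache instead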
	I0116 03:43:27.731321  507339 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 03:43:27.731358  507339 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I0116 03:43:27.731370  507339 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0116 03:43:27.731404  507339 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0116 03:43:27.731441  507339 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0116 03:43:27.731352  507339 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0116 03:43:27.731322  507339 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0116 03:43:27.731322  507339 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0116 03:43:27.733105  507339 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0116 03:43:27.733119  507339 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 03:43:27.733171  507339 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I0116 03:43:27.733171  507339 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0116 03:43:27.733110  507339 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0116 03:43:27.733118  507339 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0116 03:43:27.733113  507339 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0116 03:43:27.733270  507339 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0116 03:43:27.900005  507339 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I0116 03:43:27.901232  507339 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0116 03:43:27.903964  507339 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0116 03:43:27.907543  507339 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0116 03:43:27.908417  507339 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0116 03:43:27.909137  507339 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0116 03:43:27.953586  507339 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0116 03:43:28.024252  507339 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I0116 03:43:28.024310  507339 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I0116 03:43:28.024366  507339 ssh_runner.go:195] Run: which crictl
	I0116 03:43:28.042716  507339 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 03:43:28.078379  507339 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" does not exist at hash "d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d" in container runtime
	I0116 03:43:28.078438  507339 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0116 03:43:28.078503  507339 ssh_runner.go:195] Run: which crictl
	I0116 03:43:28.179590  507339 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" does not exist at hash "4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210" in container runtime
	I0116 03:43:28.179612  507339 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" does not exist at hash "bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f" in container runtime
	I0116 03:43:28.179661  507339 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0116 03:43:28.179661  507339 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0116 03:43:28.179720  507339 ssh_runner.go:195] Run: which crictl
	I0116 03:43:28.179722  507339 ssh_runner.go:195] Run: which crictl
	I0116 03:43:28.179729  507339 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0116 03:43:28.179750  507339 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0116 03:43:28.179785  507339 ssh_runner.go:195] Run: which crictl
	I0116 03:43:28.179804  507339 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.2" does not exist at hash "cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834" in container runtime
	I0116 03:43:28.179865  507339 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0116 03:43:28.179906  507339 ssh_runner.go:195] Run: which crictl
	I0116 03:43:28.179812  507339 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I0116 03:43:28.179950  507339 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0116 03:43:28.179977  507339 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 03:43:28.180011  507339 ssh_runner.go:195] Run: which crictl
	I0116 03:43:28.180009  507339 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0116 03:43:28.196999  507339 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0116 03:43:28.197021  507339 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0116 03:43:28.197157  507339 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0116 03:43:28.305002  507339 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0116 03:43:28.305117  507339 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17965-468241/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I0116 03:43:28.305044  507339 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 03:43:28.305231  507339 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.10-0
	I0116 03:43:28.317016  507339 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17965-468241/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2
	I0116 03:43:28.317149  507339 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0116 03:43:28.346291  507339 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17965-468241/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2
	I0116 03:43:28.346393  507339 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17965-468241/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2
	I0116 03:43:28.346434  507339 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0116 03:43:28.346518  507339 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0116 03:43:28.346547  507339 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17965-468241/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0116 03:43:28.346598  507339 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.10-0 (exists)
	I0116 03:43:28.346618  507339 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.10-0
	I0116 03:43:28.346631  507339 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0116 03:43:28.346650  507339 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
	I0116 03:43:28.425129  507339 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17965-468241/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0116 03:43:28.425217  507339 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2 (exists)
	I0116 03:43:28.425319  507339 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0116 03:43:28.425317  507339 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2 (exists)
	I0116 03:43:28.425377  507339 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17965-468241/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2
	I0116 03:43:28.425391  507339 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2 (exists)
	I0116 03:43:28.425441  507339 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0116 03:43:29.805277  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:29.805642  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | unable to find current IP address of domain old-k8s-version-696770 in network mk-old-k8s-version-696770
	I0116 03:43:29.805677  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | I0116 03:43:29.805586  508427 retry.go:31] will retry after 734.396993ms: waiting for machine to come up
	I0116 03:43:30.541337  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:30.541794  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | unable to find current IP address of domain old-k8s-version-696770 in network mk-old-k8s-version-696770
	I0116 03:43:30.541828  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | I0116 03:43:30.541741  508427 retry.go:31] will retry after 1.035836118s: waiting for machine to come up
	I0116 03:43:31.579576  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:31.580093  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | unable to find current IP address of domain old-k8s-version-696770 in network mk-old-k8s-version-696770
	I0116 03:43:31.580118  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | I0116 03:43:31.580070  508427 retry.go:31] will retry after 1.723172168s: waiting for machine to come up
	I0116 03:43:33.305247  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:33.305726  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | unable to find current IP address of domain old-k8s-version-696770 in network mk-old-k8s-version-696770
	I0116 03:43:33.305759  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | I0116 03:43:33.305669  508427 retry.go:31] will retry after 1.465747661s: waiting for machine to come up
	I0116 03:43:32.396858  507339 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1: (4.050189724s)
	I0116 03:43:32.396913  507339 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0116 03:43:32.396956  507339 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (3.971489155s)
	I0116 03:43:32.397006  507339 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2 (exists)
	I0116 03:43:32.397028  507339 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (3.971686012s)
	I0116 03:43:32.397043  507339 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0116 03:43:32.397050  507339 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0: (4.050383438s)
	I0116 03:43:32.397063  507339 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17965-468241/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 from cache
	I0116 03:43:32.397093  507339 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0116 03:43:32.397172  507339 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0116 03:43:35.381615  507339 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (2.98440652s)
	I0116 03:43:35.381660  507339 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17965-468241/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 from cache
	I0116 03:43:35.381699  507339 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0116 03:43:35.381759  507339 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0116 03:43:34.773552  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:34.774149  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | unable to find current IP address of domain old-k8s-version-696770 in network mk-old-k8s-version-696770
	I0116 03:43:34.774182  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | I0116 03:43:34.774084  508427 retry.go:31] will retry after 1.94747868s: waiting for machine to come up
	I0116 03:43:36.722855  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:36.723416  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | unable to find current IP address of domain old-k8s-version-696770 in network mk-old-k8s-version-696770
	I0116 03:43:36.723448  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | I0116 03:43:36.723365  508427 retry.go:31] will retry after 2.550966562s: waiting for machine to come up
	I0116 03:43:39.276082  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:39.276671  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | unable to find current IP address of domain old-k8s-version-696770 in network mk-old-k8s-version-696770
	I0116 03:43:39.276710  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | I0116 03:43:39.276608  508427 retry.go:31] will retry after 3.317854993s: waiting for machine to come up
	I0116 03:43:38.162725  507339 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (2.780935577s)
	I0116 03:43:38.162760  507339 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17965-468241/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 from cache
	I0116 03:43:38.162792  507339 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0116 03:43:38.162843  507339 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0116 03:43:39.527575  507339 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (1.36469752s)
	I0116 03:43:39.527612  507339 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17965-468241/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 from cache
	I0116 03:43:39.527639  507339 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0116 03:43:39.527696  507339 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0116 03:43:42.595994  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:42.596424  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | unable to find current IP address of domain old-k8s-version-696770 in network mk-old-k8s-version-696770
	I0116 03:43:42.596458  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | I0116 03:43:42.596364  508427 retry.go:31] will retry after 4.913808783s: waiting for machine to come up
	I0116 03:43:41.690968  507339 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.16323953s)
	I0116 03:43:41.691007  507339 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17965-468241/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0116 03:43:41.691045  507339 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0116 03:43:41.691100  507339 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0116 03:43:43.849988  507339 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (2.158855886s)
	I0116 03:43:43.850023  507339 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17965-468241/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 from cache
	I0116 03:43:43.850052  507339 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0116 03:43:43.850107  507339 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0116 03:43:44.597660  507339 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17965-468241/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0116 03:43:44.597710  507339 cache_images.go:123] Successfully loaded all cached images
	I0116 03:43:44.597715  507339 cache_images.go:92] LoadImages completed in 16.866481277s
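Each "Transferred and loaded ... from cache" line above is the tail end of the same per-image cycle: the cached tarball under /var/lib/minikube/images is stat'ed (and copied over from the host cache first if absent), then handed to podman. For one image the guest-side part is roughly (hypothetical):

	sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
	sudo podman images --format '{{.Repository}}:{{.Tag}}' | grep etcd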
	I0116 03:43:44.597788  507339 ssh_runner.go:195] Run: crio config
	I0116 03:43:44.658055  507339 cni.go:84] Creating CNI manager for ""
	I0116 03:43:44.658081  507339 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 03:43:44.658104  507339 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0116 03:43:44.658124  507339 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.103 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-666547 NodeName:no-preload-666547 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.103"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.103 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0116 03:43:44.658290  507339 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.103
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-666547"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.103
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.103"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0116 03:43:44.658371  507339 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-666547 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.103
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-666547 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0116 03:43:44.658431  507339 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0116 03:43:44.668859  507339 binaries.go:44] Found k8s binaries, skipping transfer
	I0116 03:43:44.668934  507339 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0116 03:43:44.678543  507339 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (382 bytes)
	I0116 03:43:44.694998  507339 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0116 03:43:44.711256  507339 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2109 bytes)
	I0116 03:43:44.728203  507339 ssh_runner.go:195] Run: grep 192.168.39.103	control-plane.minikube.internal$ /etc/hosts
	I0116 03:43:44.732219  507339 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.103	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 03:43:44.744687  507339 certs.go:56] Setting up /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/no-preload-666547 for IP: 192.168.39.103
	I0116 03:43:44.744730  507339 certs.go:190] acquiring lock for shared ca certs: {Name:mkab5aeea3e0ed69f3592dc8f418e07c6075b130 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:43:44.744957  507339 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17965-468241/.minikube/ca.key
	I0116 03:43:44.745014  507339 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17965-468241/.minikube/proxy-client-ca.key
	I0116 03:43:44.745133  507339 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/no-preload-666547/client.key
	I0116 03:43:44.745226  507339 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/no-preload-666547/apiserver.key.f0189397
	I0116 03:43:44.745293  507339 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/no-preload-666547/proxy-client.key
	I0116 03:43:44.745431  507339 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/475478.pem (1338 bytes)
	W0116 03:43:44.745471  507339 certs.go:433] ignoring /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/475478_empty.pem, impossibly tiny 0 bytes
	I0116 03:43:44.745488  507339 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca-key.pem (1679 bytes)
	I0116 03:43:44.745541  507339 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem (1078 bytes)
	I0116 03:43:44.745582  507339 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/cert.pem (1123 bytes)
	I0116 03:43:44.745620  507339 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/key.pem (1679 bytes)
	I0116 03:43:44.745687  507339 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17965-468241/.minikube/files/etc/ssl/certs/4754782.pem (1708 bytes)
	I0116 03:43:44.746558  507339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/no-preload-666547/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0116 03:43:44.770889  507339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/no-preload-666547/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0116 03:43:44.795150  507339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/no-preload-666547/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0116 03:43:44.818047  507339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/no-preload-666547/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0116 03:43:44.842003  507339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0116 03:43:44.866125  507339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0116 03:43:44.890235  507339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0116 03:43:44.913732  507339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0116 03:43:44.937249  507339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/certs/475478.pem --> /usr/share/ca-certificates/475478.pem (1338 bytes)
	I0116 03:43:44.961628  507339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/files/etc/ssl/certs/4754782.pem --> /usr/share/ca-certificates/4754782.pem (1708 bytes)
	I0116 03:43:44.986672  507339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0116 03:43:45.010735  507339 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0116 03:43:45.028537  507339 ssh_runner.go:195] Run: openssl version
	I0116 03:43:45.034910  507339 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/475478.pem && ln -fs /usr/share/ca-certificates/475478.pem /etc/ssl/certs/475478.pem"
	I0116 03:43:45.046034  507339 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/475478.pem
	I0116 03:43:45.050965  507339 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 16 02:44 /usr/share/ca-certificates/475478.pem
	I0116 03:43:45.051059  507339 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/475478.pem
	I0116 03:43:45.057465  507339 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/475478.pem /etc/ssl/certs/51391683.0"
	I0116 03:43:45.068400  507339 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4754782.pem && ln -fs /usr/share/ca-certificates/4754782.pem /etc/ssl/certs/4754782.pem"
	I0116 03:43:45.079619  507339 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4754782.pem
	I0116 03:43:45.084545  507339 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 16 02:44 /usr/share/ca-certificates/4754782.pem
	I0116 03:43:45.084622  507339 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4754782.pem
	I0116 03:43:45.090638  507339 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4754782.pem /etc/ssl/certs/3ec20f2e.0"
	I0116 03:43:45.101658  507339 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0116 03:43:45.113091  507339 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0116 03:43:45.118085  507339 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 16 02:35 /usr/share/ca-certificates/minikubeCA.pem
	I0116 03:43:45.118153  507339 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0116 03:43:45.124100  507339 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
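	Note: the sequence above hashes each CA certificate with `openssl x509 -hash -noout` and symlinks it into /etc/ssl/certs as <hash>.0 so the system trust store can resolve it. A small Go sketch of that pattern follows; it is illustrative only (not minikube's code), assumes the `openssl` binary is on PATH, and uses placeholder paths:

	// cahash.go - compute the OpenSSL subject hash of a CA certificate and
	// link it as <hash>.0, mirroring the `ln -fs` step in the log above.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	func linkCACert(certPath, certsDir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return fmt.Errorf("hashing %s: %w", certPath, err)
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join(certsDir, hash+".0")
		_ = os.Remove(link) // replace a stale link if one exists (mirrors `ln -fs`)
		return os.Symlink(certPath, link)
	}

	func main() {
		// Illustrative paths; the log uses /usr/share/ca-certificates and /etc/ssl/certs.
		if err := os.MkdirAll("/tmp/certs", 0755); err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem", "/tmp/certs"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}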
	I0116 03:43:45.135338  507339 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0116 03:43:45.140230  507339 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0116 03:43:45.146566  507339 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0116 03:43:45.152839  507339 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0116 03:43:45.158917  507339 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0116 03:43:45.164984  507339 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0116 03:43:45.171049  507339 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
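	Note: the `openssl x509 -checkend 86400` runs above verify that each control-plane certificate remains valid for at least another 24 hours. The same check can be expressed with Go's standard library; the sketch below is illustrative only and the certificate path is a placeholder:

	// checkend.go - report whether a PEM certificate will have expired
	// 24 hours from now, the equivalent of `openssl x509 -checkend 86400`.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func expiresWithin(certPath string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(certPath)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("%s: no PEM block found", certPath)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		// Placeholder path; the log checks several certs under /var/lib/minikube/certs.
		expiring, err := expiresWithin("/tmp/apiserver.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		fmt.Println("expires within 24h:", expiring)
	}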
	I0116 03:43:45.177547  507339 kubeadm.go:404] StartCluster: {Name:no-preload-666547 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-666547 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.103 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 03:43:45.177657  507339 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0116 03:43:45.177719  507339 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0116 03:43:45.221757  507339 cri.go:89] found id: ""
	I0116 03:43:45.221848  507339 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0116 03:43:45.233811  507339 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0116 03:43:45.233838  507339 kubeadm.go:636] restartCluster start
	I0116 03:43:45.233906  507339 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0116 03:43:45.244810  507339 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:43:45.245999  507339 kubeconfig.go:92] found "no-preload-666547" server: "https://192.168.39.103:8443"
	I0116 03:43:45.248711  507339 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0116 03:43:45.260979  507339 api_server.go:166] Checking apiserver status ...
	I0116 03:43:45.261066  507339 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:43:45.276682  507339 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:43:48.709239  507889 start.go:369] acquired machines lock for "default-k8s-diff-port-434445" in 3m31.985691976s
	I0116 03:43:48.709311  507889 start.go:96] Skipping create...Using existing machine configuration
	I0116 03:43:48.709333  507889 fix.go:54] fixHost starting: 
	I0116 03:43:48.709815  507889 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:43:48.709867  507889 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:43:48.726637  507889 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45373
	I0116 03:43:48.727122  507889 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:43:48.727702  507889 main.go:141] libmachine: Using API Version  1
	I0116 03:43:48.727737  507889 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:43:48.728104  507889 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:43:48.728310  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .DriverName
	I0116 03:43:48.728475  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetState
	I0116 03:43:48.730338  507889 fix.go:102] recreateIfNeeded on default-k8s-diff-port-434445: state=Stopped err=<nil>
	I0116 03:43:48.730361  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .DriverName
	W0116 03:43:48.730545  507889 fix.go:128] unexpected machine state, will restart: <nil>
	I0116 03:43:48.733848  507889 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-434445" ...
	I0116 03:43:47.512288  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:47.512755  507510 main.go:141] libmachine: (old-k8s-version-696770) Found IP for machine: 192.168.61.167
	I0116 03:43:47.512793  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has current primary IP address 192.168.61.167 and MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:47.512804  507510 main.go:141] libmachine: (old-k8s-version-696770) Reserving static IP address...
	I0116 03:43:47.513157  507510 main.go:141] libmachine: (old-k8s-version-696770) Reserved static IP address: 192.168.61.167
	I0116 03:43:47.513194  507510 main.go:141] libmachine: (old-k8s-version-696770) Waiting for SSH to be available...
	I0116 03:43:47.513218  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | found host DHCP lease matching {name: "old-k8s-version-696770", mac: "52:54:00:37:20:1a", ip: "192.168.61.167"} in network mk-old-k8s-version-696770: {Iface:virbr3 ExpiryTime:2024-01-16 04:43:38 +0000 UTC Type:0 Mac:52:54:00:37:20:1a Iaid: IPaddr:192.168.61.167 Prefix:24 Hostname:old-k8s-version-696770 Clientid:01:52:54:00:37:20:1a}
	I0116 03:43:47.513242  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | skip adding static IP to network mk-old-k8s-version-696770 - found existing host DHCP lease matching {name: "old-k8s-version-696770", mac: "52:54:00:37:20:1a", ip: "192.168.61.167"}
	I0116 03:43:47.513259  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | Getting to WaitForSSH function...
	I0116 03:43:47.515438  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:47.515887  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:20:1a", ip: ""} in network mk-old-k8s-version-696770: {Iface:virbr3 ExpiryTime:2024-01-16 04:43:38 +0000 UTC Type:0 Mac:52:54:00:37:20:1a Iaid: IPaddr:192.168.61.167 Prefix:24 Hostname:old-k8s-version-696770 Clientid:01:52:54:00:37:20:1a}
	I0116 03:43:47.515928  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined IP address 192.168.61.167 and MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:47.516089  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | Using SSH client type: external
	I0116 03:43:47.516124  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | Using SSH private key: /home/jenkins/minikube-integration/17965-468241/.minikube/machines/old-k8s-version-696770/id_rsa (-rw-------)
	I0116 03:43:47.516160  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.167 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17965-468241/.minikube/machines/old-k8s-version-696770/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0116 03:43:47.516182  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | About to run SSH command:
	I0116 03:43:47.516203  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | exit 0
	I0116 03:43:47.608193  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | SSH cmd err, output: <nil>: 
	I0116 03:43:47.608599  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetConfigRaw
	I0116 03:43:47.609195  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetIP
	I0116 03:43:47.611633  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:47.612018  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:20:1a", ip: ""} in network mk-old-k8s-version-696770: {Iface:virbr3 ExpiryTime:2024-01-16 04:43:38 +0000 UTC Type:0 Mac:52:54:00:37:20:1a Iaid: IPaddr:192.168.61.167 Prefix:24 Hostname:old-k8s-version-696770 Clientid:01:52:54:00:37:20:1a}
	I0116 03:43:47.612068  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined IP address 192.168.61.167 and MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:47.612355  507510 profile.go:148] Saving config to /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/old-k8s-version-696770/config.json ...
	I0116 03:43:47.612601  507510 machine.go:88] provisioning docker machine ...
	I0116 03:43:47.612628  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .DriverName
	I0116 03:43:47.612872  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetMachineName
	I0116 03:43:47.613047  507510 buildroot.go:166] provisioning hostname "old-k8s-version-696770"
	I0116 03:43:47.613068  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetMachineName
	I0116 03:43:47.613195  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHHostname
	I0116 03:43:47.615457  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:47.615901  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:20:1a", ip: ""} in network mk-old-k8s-version-696770: {Iface:virbr3 ExpiryTime:2024-01-16 04:43:38 +0000 UTC Type:0 Mac:52:54:00:37:20:1a Iaid: IPaddr:192.168.61.167 Prefix:24 Hostname:old-k8s-version-696770 Clientid:01:52:54:00:37:20:1a}
	I0116 03:43:47.615928  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined IP address 192.168.61.167 and MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:47.616111  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHPort
	I0116 03:43:47.616292  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHKeyPath
	I0116 03:43:47.616489  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHKeyPath
	I0116 03:43:47.616687  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHUsername
	I0116 03:43:47.616889  507510 main.go:141] libmachine: Using SSH client type: native
	I0116 03:43:47.617280  507510 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.61.167 22 <nil> <nil>}
	I0116 03:43:47.617297  507510 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-696770 && echo "old-k8s-version-696770" | sudo tee /etc/hostname
	I0116 03:43:47.745448  507510 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-696770
	
	I0116 03:43:47.745482  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHHostname
	I0116 03:43:47.748812  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:47.749135  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:20:1a", ip: ""} in network mk-old-k8s-version-696770: {Iface:virbr3 ExpiryTime:2024-01-16 04:43:38 +0000 UTC Type:0 Mac:52:54:00:37:20:1a Iaid: IPaddr:192.168.61.167 Prefix:24 Hostname:old-k8s-version-696770 Clientid:01:52:54:00:37:20:1a}
	I0116 03:43:47.749171  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined IP address 192.168.61.167 and MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:47.749296  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHPort
	I0116 03:43:47.749525  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHKeyPath
	I0116 03:43:47.749715  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHKeyPath
	I0116 03:43:47.749872  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHUsername
	I0116 03:43:47.750019  507510 main.go:141] libmachine: Using SSH client type: native
	I0116 03:43:47.750339  507510 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.61.167 22 <nil> <nil>}
	I0116 03:43:47.750357  507510 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-696770' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-696770/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-696770' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0116 03:43:47.876917  507510 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0116 03:43:47.876957  507510 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17965-468241/.minikube CaCertPath:/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17965-468241/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17965-468241/.minikube}
	I0116 03:43:47.877011  507510 buildroot.go:174] setting up certificates
	I0116 03:43:47.877026  507510 provision.go:83] configureAuth start
	I0116 03:43:47.877041  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetMachineName
	I0116 03:43:47.877378  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetIP
	I0116 03:43:47.880453  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:47.880836  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:20:1a", ip: ""} in network mk-old-k8s-version-696770: {Iface:virbr3 ExpiryTime:2024-01-16 04:43:38 +0000 UTC Type:0 Mac:52:54:00:37:20:1a Iaid: IPaddr:192.168.61.167 Prefix:24 Hostname:old-k8s-version-696770 Clientid:01:52:54:00:37:20:1a}
	I0116 03:43:47.880869  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined IP address 192.168.61.167 and MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:47.881010  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHHostname
	I0116 03:43:47.883053  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:47.883415  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:20:1a", ip: ""} in network mk-old-k8s-version-696770: {Iface:virbr3 ExpiryTime:2024-01-16 04:43:38 +0000 UTC Type:0 Mac:52:54:00:37:20:1a Iaid: IPaddr:192.168.61.167 Prefix:24 Hostname:old-k8s-version-696770 Clientid:01:52:54:00:37:20:1a}
	I0116 03:43:47.883448  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined IP address 192.168.61.167 and MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:47.883635  507510 provision.go:138] copyHostCerts
	I0116 03:43:47.883706  507510 exec_runner.go:144] found /home/jenkins/minikube-integration/17965-468241/.minikube/ca.pem, removing ...
	I0116 03:43:47.883717  507510 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17965-468241/.minikube/ca.pem
	I0116 03:43:47.883778  507510 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17965-468241/.minikube/ca.pem (1078 bytes)
	I0116 03:43:47.883864  507510 exec_runner.go:144] found /home/jenkins/minikube-integration/17965-468241/.minikube/cert.pem, removing ...
	I0116 03:43:47.883871  507510 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17965-468241/.minikube/cert.pem
	I0116 03:43:47.883893  507510 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17965-468241/.minikube/cert.pem (1123 bytes)
	I0116 03:43:47.883943  507510 exec_runner.go:144] found /home/jenkins/minikube-integration/17965-468241/.minikube/key.pem, removing ...
	I0116 03:43:47.883950  507510 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17965-468241/.minikube/key.pem
	I0116 03:43:47.883965  507510 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17965-468241/.minikube/key.pem (1679 bytes)
	I0116 03:43:47.884010  507510 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17965-468241/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-696770 san=[192.168.61.167 192.168.61.167 localhost 127.0.0.1 minikube old-k8s-version-696770]
	I0116 03:43:47.946258  507510 provision.go:172] copyRemoteCerts
	I0116 03:43:47.946327  507510 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0116 03:43:47.946354  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHHostname
	I0116 03:43:47.949417  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:47.949750  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:20:1a", ip: ""} in network mk-old-k8s-version-696770: {Iface:virbr3 ExpiryTime:2024-01-16 04:43:38 +0000 UTC Type:0 Mac:52:54:00:37:20:1a Iaid: IPaddr:192.168.61.167 Prefix:24 Hostname:old-k8s-version-696770 Clientid:01:52:54:00:37:20:1a}
	I0116 03:43:47.949784  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined IP address 192.168.61.167 and MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:47.949941  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHPort
	I0116 03:43:47.950180  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHKeyPath
	I0116 03:43:47.950333  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHUsername
	I0116 03:43:47.950478  507510 sshutil.go:53] new ssh client: &{IP:192.168.61.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/old-k8s-version-696770/id_rsa Username:docker}
	I0116 03:43:48.042564  507510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0116 03:43:48.066519  507510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0116 03:43:48.090127  507510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0116 03:43:48.113387  507510 provision.go:86] duration metric: configureAuth took 236.343393ms
	I0116 03:43:48.113428  507510 buildroot.go:189] setting minikube options for container-runtime
	I0116 03:43:48.113662  507510 config.go:182] Loaded profile config "old-k8s-version-696770": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0116 03:43:48.113758  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHHostname
	I0116 03:43:48.116735  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:48.117144  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:20:1a", ip: ""} in network mk-old-k8s-version-696770: {Iface:virbr3 ExpiryTime:2024-01-16 04:43:38 +0000 UTC Type:0 Mac:52:54:00:37:20:1a Iaid: IPaddr:192.168.61.167 Prefix:24 Hostname:old-k8s-version-696770 Clientid:01:52:54:00:37:20:1a}
	I0116 03:43:48.117187  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined IP address 192.168.61.167 and MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:48.117328  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHPort
	I0116 03:43:48.117529  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHKeyPath
	I0116 03:43:48.117725  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHKeyPath
	I0116 03:43:48.117892  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHUsername
	I0116 03:43:48.118118  507510 main.go:141] libmachine: Using SSH client type: native
	I0116 03:43:48.118427  507510 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.61.167 22 <nil> <nil>}
	I0116 03:43:48.118450  507510 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0116 03:43:48.458094  507510 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0116 03:43:48.458129  507510 machine.go:91] provisioned docker machine in 845.51167ms
	I0116 03:43:48.458141  507510 start.go:300] post-start starting for "old-k8s-version-696770" (driver="kvm2")
	I0116 03:43:48.458153  507510 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0116 03:43:48.458172  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .DriverName
	I0116 03:43:48.458616  507510 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0116 03:43:48.458650  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHHostname
	I0116 03:43:48.461476  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:48.461858  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:20:1a", ip: ""} in network mk-old-k8s-version-696770: {Iface:virbr3 ExpiryTime:2024-01-16 04:43:38 +0000 UTC Type:0 Mac:52:54:00:37:20:1a Iaid: IPaddr:192.168.61.167 Prefix:24 Hostname:old-k8s-version-696770 Clientid:01:52:54:00:37:20:1a}
	I0116 03:43:48.461908  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined IP address 192.168.61.167 and MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:48.462029  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHPort
	I0116 03:43:48.462272  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHKeyPath
	I0116 03:43:48.462460  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHUsername
	I0116 03:43:48.462643  507510 sshutil.go:53] new ssh client: &{IP:192.168.61.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/old-k8s-version-696770/id_rsa Username:docker}
	I0116 03:43:48.550436  507510 ssh_runner.go:195] Run: cat /etc/os-release
	I0116 03:43:48.555225  507510 info.go:137] Remote host: Buildroot 2021.02.12
	I0116 03:43:48.555261  507510 filesync.go:126] Scanning /home/jenkins/minikube-integration/17965-468241/.minikube/addons for local assets ...
	I0116 03:43:48.555349  507510 filesync.go:126] Scanning /home/jenkins/minikube-integration/17965-468241/.minikube/files for local assets ...
	I0116 03:43:48.555434  507510 filesync.go:149] local asset: /home/jenkins/minikube-integration/17965-468241/.minikube/files/etc/ssl/certs/4754782.pem -> 4754782.pem in /etc/ssl/certs
	I0116 03:43:48.555560  507510 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0116 03:43:48.565598  507510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/files/etc/ssl/certs/4754782.pem --> /etc/ssl/certs/4754782.pem (1708 bytes)
	I0116 03:43:48.588611  507510 start.go:303] post-start completed in 130.45305ms
	I0116 03:43:48.588642  507510 fix.go:56] fixHost completed within 22.411021213s
	I0116 03:43:48.588675  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHHostname
	I0116 03:43:48.591220  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:48.591582  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:20:1a", ip: ""} in network mk-old-k8s-version-696770: {Iface:virbr3 ExpiryTime:2024-01-16 04:43:38 +0000 UTC Type:0 Mac:52:54:00:37:20:1a Iaid: IPaddr:192.168.61.167 Prefix:24 Hostname:old-k8s-version-696770 Clientid:01:52:54:00:37:20:1a}
	I0116 03:43:48.591618  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined IP address 192.168.61.167 and MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:48.591779  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHPort
	I0116 03:43:48.592014  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHKeyPath
	I0116 03:43:48.592216  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHKeyPath
	I0116 03:43:48.592412  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHUsername
	I0116 03:43:48.592567  507510 main.go:141] libmachine: Using SSH client type: native
	I0116 03:43:48.592933  507510 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.61.167 22 <nil> <nil>}
	I0116 03:43:48.592950  507510 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0116 03:43:48.709079  507510 main.go:141] libmachine: SSH cmd err, output: <nil>: 1705376628.651647278
	
	I0116 03:43:48.709103  507510 fix.go:206] guest clock: 1705376628.651647278
	I0116 03:43:48.709111  507510 fix.go:219] Guest: 2024-01-16 03:43:48.651647278 +0000 UTC Remote: 2024-01-16 03:43:48.588648172 +0000 UTC m=+299.078902394 (delta=62.999106ms)
	I0116 03:43:48.709134  507510 fix.go:190] guest clock delta is within tolerance: 62.999106ms
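	Note: fix.go above compares the guest clock against the host clock and accepts the drift because it is within tolerance. A minimal Go sketch of that comparison, illustrative only; the tolerance and sample times are placeholders:

	// clockdelta.go - compare a guest timestamp with the host's and flag drift
	// beyond a threshold, as in the tolerance check logged above.
	package main

	import (
		"fmt"
		"time"
	)

	func clockWithinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
		delta := guest.Sub(host)
		if delta < 0 {
			delta = -delta
		}
		return delta, delta <= tolerance
	}

	func main() {
		host := time.Now()
		guest := host.Add(63 * time.Millisecond) // roughly the 62.999106ms delta seen in the log
		delta, ok := clockWithinTolerance(guest, host, 2*time.Second)
		fmt.Printf("delta=%v within tolerance=%v\n", delta, ok)
	}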
	I0116 03:43:48.709140  507510 start.go:83] releasing machines lock for "old-k8s-version-696770", held for 22.531556099s
	I0116 03:43:48.709169  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .DriverName
	I0116 03:43:48.709519  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetIP
	I0116 03:43:48.712438  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:48.712770  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:20:1a", ip: ""} in network mk-old-k8s-version-696770: {Iface:virbr3 ExpiryTime:2024-01-16 04:43:38 +0000 UTC Type:0 Mac:52:54:00:37:20:1a Iaid: IPaddr:192.168.61.167 Prefix:24 Hostname:old-k8s-version-696770 Clientid:01:52:54:00:37:20:1a}
	I0116 03:43:48.712825  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined IP address 192.168.61.167 and MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:48.712921  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .DriverName
	I0116 03:43:48.713501  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .DriverName
	I0116 03:43:48.713677  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .DriverName
	I0116 03:43:48.713768  507510 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0116 03:43:48.713816  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHHostname
	I0116 03:43:48.713920  507510 ssh_runner.go:195] Run: cat /version.json
	I0116 03:43:48.713951  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHHostname
	I0116 03:43:48.716415  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:48.716697  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:48.716820  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:20:1a", ip: ""} in network mk-old-k8s-version-696770: {Iface:virbr3 ExpiryTime:2024-01-16 04:43:38 +0000 UTC Type:0 Mac:52:54:00:37:20:1a Iaid: IPaddr:192.168.61.167 Prefix:24 Hostname:old-k8s-version-696770 Clientid:01:52:54:00:37:20:1a}
	I0116 03:43:48.716846  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined IP address 192.168.61.167 and MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:48.716995  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHPort
	I0116 03:43:48.717093  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:20:1a", ip: ""} in network mk-old-k8s-version-696770: {Iface:virbr3 ExpiryTime:2024-01-16 04:43:38 +0000 UTC Type:0 Mac:52:54:00:37:20:1a Iaid: IPaddr:192.168.61.167 Prefix:24 Hostname:old-k8s-version-696770 Clientid:01:52:54:00:37:20:1a}
	I0116 03:43:48.717123  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined IP address 192.168.61.167 and MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:48.717394  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHPort
	I0116 03:43:48.717402  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHKeyPath
	I0116 03:43:48.717638  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHKeyPath
	I0116 03:43:48.717650  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHUsername
	I0116 03:43:48.717791  507510 sshutil.go:53] new ssh client: &{IP:192.168.61.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/old-k8s-version-696770/id_rsa Username:docker}
	I0116 03:43:48.717824  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHUsername
	I0116 03:43:48.717956  507510 sshutil.go:53] new ssh client: &{IP:192.168.61.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/old-k8s-version-696770/id_rsa Username:docker}
	I0116 03:43:48.838506  507510 ssh_runner.go:195] Run: systemctl --version
	I0116 03:43:48.845152  507510 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0116 03:43:49.001791  507510 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0116 03:43:49.008474  507510 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0116 03:43:49.008558  507510 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0116 03:43:49.024030  507510 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0116 03:43:49.024087  507510 start.go:475] detecting cgroup driver to use...
	I0116 03:43:49.024164  507510 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0116 03:43:49.038853  507510 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0116 03:43:49.056228  507510 docker.go:217] disabling cri-docker service (if available) ...
	I0116 03:43:49.056308  507510 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0116 03:43:49.071266  507510 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0116 03:43:49.085793  507510 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0116 03:43:49.211294  507510 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0116 03:43:49.338893  507510 docker.go:233] disabling docker service ...
	I0116 03:43:49.338971  507510 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0116 03:43:49.354423  507510 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0116 03:43:49.367355  507510 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0116 03:43:49.483277  507510 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0116 03:43:49.593977  507510 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0116 03:43:49.607374  507510 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0116 03:43:49.626781  507510 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I0116 03:43:49.626846  507510 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 03:43:49.637809  507510 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0116 03:43:49.637892  507510 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 03:43:49.648162  507510 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 03:43:49.658305  507510 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 03:43:49.669557  507510 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0116 03:43:49.680190  507510 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0116 03:43:49.689125  507510 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0116 03:43:49.689199  507510 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0116 03:43:49.703247  507510 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0116 03:43:49.713826  507510 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0116 03:43:49.829677  507510 ssh_runner.go:195] Run: sudo systemctl restart crio
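	Note: the block above reconfigures CRI-O by rewriting `pause_image` and `cgroup_manager` in the 02-crio.conf drop-in with `sed -i`, then restarts the service. The Go sketch below shows the same line-rewrite pattern; it is illustrative only (not minikube's implementation) and the file path in main is a placeholder:

	// crioconf.go - replace the pause_image and cgroup_manager settings in a
	// crio drop-in file, the same effect as the `sed -i` one-liners in the log.
	package main

	import (
		"fmt"
		"os"
		"regexp"
	)

	var (
		pauseRe  = regexp.MustCompile(`(?m)^.*pause_image = .*$`)
		cgroupRe = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
	)

	func rewriteCrioConf(path, pauseImage, cgroupManager string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		out := pauseRe.ReplaceAll(data, []byte(fmt.Sprintf("pause_image = %q", pauseImage)))
		out = cgroupRe.ReplaceAll(out, []byte(fmt.Sprintf("cgroup_manager = %q", cgroupManager)))
		return os.WriteFile(path, out, 0644)
	}

	func main() {
		// Placeholder path; the log edits /etc/crio/crio.conf.d/02-crio.conf on the VM.
		if err := rewriteCrioConf("/tmp/02-crio.conf", "registry.k8s.io/pause:3.1", "cgroupfs"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}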
	I0116 03:43:50.009393  507510 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0116 03:43:50.009489  507510 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0116 03:43:50.016443  507510 start.go:543] Will wait 60s for crictl version
	I0116 03:43:50.016521  507510 ssh_runner.go:195] Run: which crictl
	I0116 03:43:50.020560  507510 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0116 03:43:50.056652  507510 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0116 03:43:50.056733  507510 ssh_runner.go:195] Run: crio --version
	I0116 03:43:50.104202  507510 ssh_runner.go:195] Run: crio --version
	I0116 03:43:50.150215  507510 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
	I0116 03:43:45.761989  507339 api_server.go:166] Checking apiserver status ...
	I0116 03:43:45.762077  507339 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:43:45.776377  507339 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:43:46.262107  507339 api_server.go:166] Checking apiserver status ...
	I0116 03:43:46.262205  507339 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:43:46.274748  507339 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:43:46.761344  507339 api_server.go:166] Checking apiserver status ...
	I0116 03:43:46.761434  507339 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:43:46.773509  507339 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:43:47.261093  507339 api_server.go:166] Checking apiserver status ...
	I0116 03:43:47.261222  507339 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:43:47.272584  507339 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:43:47.761119  507339 api_server.go:166] Checking apiserver status ...
	I0116 03:43:47.761204  507339 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:43:47.773674  507339 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:43:48.261288  507339 api_server.go:166] Checking apiserver status ...
	I0116 03:43:48.261448  507339 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:43:48.273461  507339 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:43:48.762071  507339 api_server.go:166] Checking apiserver status ...
	I0116 03:43:48.762205  507339 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:43:48.778093  507339 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:43:49.261032  507339 api_server.go:166] Checking apiserver status ...
	I0116 03:43:49.261139  507339 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:43:49.273090  507339 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:43:49.761233  507339 api_server.go:166] Checking apiserver status ...
	I0116 03:43:49.761348  507339 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:43:49.773529  507339 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:43:50.261720  507339 api_server.go:166] Checking apiserver status ...
	I0116 03:43:50.261822  507339 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:43:50.277403  507339 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:43:48.735627  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .Start
	I0116 03:43:48.735865  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Ensuring networks are active...
	I0116 03:43:48.736708  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Ensuring network default is active
	I0116 03:43:48.737105  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Ensuring network mk-default-k8s-diff-port-434445 is active
	I0116 03:43:48.737445  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Getting domain xml...
	I0116 03:43:48.738086  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Creating domain...
	I0116 03:43:49.085479  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Waiting to get IP...
	I0116 03:43:49.086513  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:43:49.086907  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | unable to find current IP address of domain default-k8s-diff-port-434445 in network mk-default-k8s-diff-port-434445
	I0116 03:43:49.086993  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | I0116 03:43:49.086879  508579 retry.go:31] will retry after 251.682416ms: waiting for machine to come up
	I0116 03:43:49.340560  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:43:49.341196  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | unable to find current IP address of domain default-k8s-diff-port-434445 in network mk-default-k8s-diff-port-434445
	I0116 03:43:49.341235  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | I0116 03:43:49.341140  508579 retry.go:31] will retry after 288.322607ms: waiting for machine to come up
	I0116 03:43:49.630920  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:43:49.631449  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | unable to find current IP address of domain default-k8s-diff-port-434445 in network mk-default-k8s-diff-port-434445
	I0116 03:43:49.631478  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | I0116 03:43:49.631404  508579 retry.go:31] will retry after 305.730946ms: waiting for machine to come up
	I0116 03:43:49.938846  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:43:49.939357  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | unable to find current IP address of domain default-k8s-diff-port-434445 in network mk-default-k8s-diff-port-434445
	I0116 03:43:49.939381  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | I0116 03:43:49.939307  508579 retry.go:31] will retry after 431.952943ms: waiting for machine to come up
	I0116 03:43:50.372921  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:43:50.373426  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | unable to find current IP address of domain default-k8s-diff-port-434445 in network mk-default-k8s-diff-port-434445
	I0116 03:43:50.373453  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | I0116 03:43:50.373368  508579 retry.go:31] will retry after 557.336026ms: waiting for machine to come up
	I0116 03:43:50.932300  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:43:50.932902  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | unable to find current IP address of domain default-k8s-diff-port-434445 in network mk-default-k8s-diff-port-434445
	I0116 03:43:50.932933  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | I0116 03:43:50.932837  508579 retry.go:31] will retry after 652.034162ms: waiting for machine to come up
	I0116 03:43:51.586765  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:43:51.587332  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | unable to find current IP address of domain default-k8s-diff-port-434445 in network mk-default-k8s-diff-port-434445
	I0116 03:43:51.587365  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | I0116 03:43:51.587290  508579 retry.go:31] will retry after 1.078418867s: waiting for machine to come up
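	Note: the retry.go lines above ("will retry after 251ms ... 1.078s") poll for the VM's DHCP lease with growing, jittered delays. A minimal Go sketch of that backoff pattern, illustrative only; the stand-in check function and the interval growth factor are assumptions, not minikube's exact policy:

	// retryip.go - poll a condition with growing, slightly jittered delays
	// until it succeeds or the attempt budget runs out.
	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	func retryWithBackoff(attempts int, initial time.Duration, fn func() error) error {
		delay := initial
		var err error
		for i := 0; i < attempts; i++ {
			if err = fn(); err == nil {
				return nil
			}
			// Grow the delay and add a little jitter, similar to the intervals in the log.
			sleep := delay + time.Duration(rand.Int63n(int64(delay/4)+1))
			fmt.Printf("will retry after %v: %v\n", sleep, err)
			time.Sleep(sleep)
			delay = delay * 3 / 2
		}
		return err
	}

	func main() {
		tries := 0
		err := retryWithBackoff(7, 250*time.Millisecond, func() error {
			tries++
			if tries < 4 {
				return errors.New("waiting for machine to come up")
			}
			return nil
		})
		fmt.Println("done:", err)
	}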
	I0116 03:43:50.151763  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetIP
	I0116 03:43:50.154861  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:50.155283  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:20:1a", ip: ""} in network mk-old-k8s-version-696770: {Iface:virbr3 ExpiryTime:2024-01-16 04:43:38 +0000 UTC Type:0 Mac:52:54:00:37:20:1a Iaid: IPaddr:192.168.61.167 Prefix:24 Hostname:old-k8s-version-696770 Clientid:01:52:54:00:37:20:1a}
	I0116 03:43:50.155331  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined IP address 192.168.61.167 and MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:50.155536  507510 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0116 03:43:50.160159  507510 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 03:43:50.173354  507510 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0116 03:43:50.173416  507510 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 03:43:50.227220  507510 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0116 03:43:50.227308  507510 ssh_runner.go:195] Run: which lz4
	I0116 03:43:50.231565  507510 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0116 03:43:50.236133  507510 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0116 03:43:50.236169  507510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I0116 03:43:52.243584  507510 crio.go:444] Took 2.012049 seconds to copy over tarball
	I0116 03:43:52.243686  507510 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
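	Note: the preload handling above first `stat`s /preloaded.tar.lz4, copies the tarball over only because it is missing, and then extracts it with `tar -I lz4`. A Go sketch of that check-then-copy-then-extract flow, illustrative only; all paths are placeholders and a `tar` binary with lz4 support is assumed:

	// preload.go - only copy the preload tarball if the destination is missing,
	// then hand it to tar for extraction, as in the log above.
	package main

	import (
		"fmt"
		"io"
		"os"
		"os/exec"
	)

	func ensurePreload(src, dst, extractDir string) error {
		if _, err := os.Stat(dst); err == nil {
			return nil // already present, skip the transfer (mirrors the stat check in the log)
		}
		in, err := os.Open(src)
		if err != nil {
			return err
		}
		defer in.Close()
		out, err := os.Create(dst)
		if err != nil {
			return err
		}
		if _, err := io.Copy(out, in); err != nil {
			out.Close()
			return err
		}
		if err := out.Close(); err != nil {
			return err
		}
		// Equivalent of `tar --xattrs -I lz4 -C <dir> -xf <tarball>` from the log.
		return exec.Command("tar", "-I", "lz4", "-C", extractDir, "-xf", dst).Run()
	}

	func main() {
		if err := ensurePreload("/tmp/preloaded-src.tar.lz4", "/tmp/preloaded.tar.lz4", "/tmp/extract"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}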
	I0116 03:43:50.761232  507339 api_server.go:166] Checking apiserver status ...
	I0116 03:43:50.761323  507339 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:43:50.777877  507339 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:43:51.261357  507339 api_server.go:166] Checking apiserver status ...
	I0116 03:43:51.261444  507339 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:43:51.280624  507339 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:43:51.761117  507339 api_server.go:166] Checking apiserver status ...
	I0116 03:43:51.761225  507339 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:43:51.775076  507339 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:43:52.261857  507339 api_server.go:166] Checking apiserver status ...
	I0116 03:43:52.261948  507339 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:43:52.279844  507339 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:43:52.761400  507339 api_server.go:166] Checking apiserver status ...
	I0116 03:43:52.761493  507339 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:43:52.773869  507339 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:43:53.261155  507339 api_server.go:166] Checking apiserver status ...
	I0116 03:43:53.261263  507339 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:43:53.273774  507339 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:43:53.761370  507339 api_server.go:166] Checking apiserver status ...
	I0116 03:43:53.761500  507339 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:43:53.773900  507339 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:43:54.262012  507339 api_server.go:166] Checking apiserver status ...
	I0116 03:43:54.262134  507339 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:43:54.277928  507339 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:43:54.761492  507339 api_server.go:166] Checking apiserver status ...
	I0116 03:43:54.761642  507339 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:43:54.774531  507339 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:43:55.261302  507339 api_server.go:166] Checking apiserver status ...
	I0116 03:43:55.261395  507339 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:43:55.274178  507339 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:43:55.274226  507339 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0116 03:43:55.274272  507339 kubeadm.go:1135] stopping kube-system containers ...
	I0116 03:43:55.274293  507339 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0116 03:43:55.274360  507339 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0116 03:43:55.321847  507339 cri.go:89] found id: ""
	I0116 03:43:55.321943  507339 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0116 03:43:55.339190  507339 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0116 03:43:55.348548  507339 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0116 03:43:55.348637  507339 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0116 03:43:55.358316  507339 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0116 03:43:55.358345  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:43:55.492932  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:43:52.667882  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:43:52.668380  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | unable to find current IP address of domain default-k8s-diff-port-434445 in network mk-default-k8s-diff-port-434445
	I0116 03:43:52.668415  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | I0116 03:43:52.668311  508579 retry.go:31] will retry after 1.052441827s: waiting for machine to come up
	I0116 03:43:53.722859  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:43:53.723473  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | unable to find current IP address of domain default-k8s-diff-port-434445 in network mk-default-k8s-diff-port-434445
	I0116 03:43:53.723503  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | I0116 03:43:53.723429  508579 retry.go:31] will retry after 1.233090848s: waiting for machine to come up
	I0116 03:43:54.958519  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:43:54.958990  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | unable to find current IP address of domain default-k8s-diff-port-434445 in network mk-default-k8s-diff-port-434445
	I0116 03:43:54.959014  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | I0116 03:43:54.958934  508579 retry.go:31] will retry after 2.038449182s: waiting for machine to come up
	I0116 03:43:55.109598  507510 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.865872133s)
	I0116 03:43:55.109637  507510 crio.go:451] Took 2.866019 seconds to extract the tarball
	I0116 03:43:55.109652  507510 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0116 03:43:55.150763  507510 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 03:43:55.206497  507510 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0116 03:43:55.206525  507510 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0116 03:43:55.206597  507510 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 03:43:55.206619  507510 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0116 03:43:55.206660  507510 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0116 03:43:55.206682  507510 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0116 03:43:55.206601  507510 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0116 03:43:55.206622  507510 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0116 03:43:55.206790  507510 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0116 03:43:55.206801  507510 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0116 03:43:55.208228  507510 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 03:43:55.208246  507510 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0116 03:43:55.208245  507510 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0116 03:43:55.208247  507510 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0116 03:43:55.208291  507510 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0116 03:43:55.208295  507510 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0116 03:43:55.208291  507510 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0116 03:43:55.208610  507510 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0116 03:43:55.364082  507510 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0116 03:43:55.364096  507510 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0116 03:43:55.367820  507510 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0116 03:43:55.371639  507510 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0116 03:43:55.379423  507510 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0116 03:43:55.383569  507510 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0116 03:43:55.385854  507510 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0116 03:43:55.522241  507510 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 03:43:55.539971  507510 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0116 03:43:55.540031  507510 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0116 03:43:55.540113  507510 ssh_runner.go:195] Run: which crictl
	I0116 03:43:55.542332  507510 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0116 03:43:55.542389  507510 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0116 03:43:55.542441  507510 ssh_runner.go:195] Run: which crictl
	I0116 03:43:55.565552  507510 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0116 03:43:55.565679  507510 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0116 03:43:55.565761  507510 ssh_runner.go:195] Run: which crictl
	I0116 03:43:55.583839  507510 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0116 03:43:55.583890  507510 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0116 03:43:55.583942  507510 ssh_runner.go:195] Run: which crictl
	I0116 03:43:55.583847  507510 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0116 03:43:55.584023  507510 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0116 03:43:55.584073  507510 ssh_runner.go:195] Run: which crictl
	I0116 03:43:55.596487  507510 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0116 03:43:55.596555  507510 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I0116 03:43:55.596619  507510 ssh_runner.go:195] Run: which crictl
	I0116 03:43:55.605042  507510 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0116 03:43:55.605105  507510 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I0116 03:43:55.605162  507510 ssh_runner.go:195] Run: which crictl
	I0116 03:43:55.740186  507510 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0116 03:43:55.740225  507510 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I0116 03:43:55.740283  507510 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0116 03:43:55.740334  507510 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0116 03:43:55.740384  507510 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I0116 03:43:55.740441  507510 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I0116 03:43:55.740450  507510 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I0116 03:43:55.900542  507510 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17965-468241/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0116 03:43:55.906506  507510 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17965-468241/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0116 03:43:55.914158  507510 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17965-468241/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0116 03:43:55.914171  507510 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17965-468241/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0116 03:43:55.926953  507510 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17965-468241/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0116 03:43:55.927034  507510 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17965-468241/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0116 03:43:55.927137  507510 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17965-468241/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0116 03:43:55.927186  507510 cache_images.go:92] LoadImages completed in 720.646435ms
	W0116 03:43:55.927280  507510 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17965-468241/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0: no such file or directory
	I0116 03:43:55.927362  507510 ssh_runner.go:195] Run: crio config
	I0116 03:43:55.989408  507510 cni.go:84] Creating CNI manager for ""
	I0116 03:43:55.989440  507510 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 03:43:55.989468  507510 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0116 03:43:55.989495  507510 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.167 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-696770 NodeName:old-k8s-version-696770 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.167"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.167 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0116 03:43:55.989657  507510 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.167
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-696770"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.167
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.167"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-696770
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.61.167:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0116 03:43:55.989757  507510 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-696770 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.167
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-696770 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0116 03:43:55.989819  507510 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0116 03:43:55.999676  507510 binaries.go:44] Found k8s binaries, skipping transfer
	I0116 03:43:55.999766  507510 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0116 03:43:56.009179  507510 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0116 03:43:56.028479  507510 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0116 03:43:56.045979  507510 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2181 bytes)
	I0116 03:43:56.067179  507510 ssh_runner.go:195] Run: grep 192.168.61.167	control-plane.minikube.internal$ /etc/hosts
	I0116 03:43:56.071532  507510 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.167	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 03:43:56.085960  507510 certs.go:56] Setting up /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/old-k8s-version-696770 for IP: 192.168.61.167
	I0116 03:43:56.086006  507510 certs.go:190] acquiring lock for shared ca certs: {Name:mkab5aeea3e0ed69f3592dc8f418e07c6075b130 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:43:56.086216  507510 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17965-468241/.minikube/ca.key
	I0116 03:43:56.086293  507510 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17965-468241/.minikube/proxy-client-ca.key
	I0116 03:43:56.086385  507510 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/old-k8s-version-696770/client.key
	I0116 03:43:56.086447  507510 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/old-k8s-version-696770/apiserver.key.1a2d2382
	I0116 03:43:56.086480  507510 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/old-k8s-version-696770/proxy-client.key
	I0116 03:43:56.086668  507510 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/475478.pem (1338 bytes)
	W0116 03:43:56.086711  507510 certs.go:433] ignoring /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/475478_empty.pem, impossibly tiny 0 bytes
	I0116 03:43:56.086721  507510 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca-key.pem (1679 bytes)
	I0116 03:43:56.086746  507510 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem (1078 bytes)
	I0116 03:43:56.086772  507510 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/cert.pem (1123 bytes)
	I0116 03:43:56.086795  507510 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/key.pem (1679 bytes)
	I0116 03:43:56.086833  507510 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17965-468241/.minikube/files/etc/ssl/certs/4754782.pem (1708 bytes)
	I0116 03:43:56.087557  507510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/old-k8s-version-696770/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0116 03:43:56.118148  507510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/old-k8s-version-696770/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0116 03:43:56.146632  507510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/old-k8s-version-696770/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0116 03:43:56.177146  507510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/old-k8s-version-696770/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0116 03:43:56.208800  507510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0116 03:43:56.237097  507510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0116 03:43:56.264559  507510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0116 03:43:56.294383  507510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0116 03:43:56.323966  507510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/certs/475478.pem --> /usr/share/ca-certificates/475478.pem (1338 bytes)
	I0116 03:43:56.350120  507510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/files/etc/ssl/certs/4754782.pem --> /usr/share/ca-certificates/4754782.pem (1708 bytes)
	I0116 03:43:56.379523  507510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0116 03:43:56.406312  507510 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0116 03:43:56.426149  507510 ssh_runner.go:195] Run: openssl version
	I0116 03:43:56.432150  507510 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0116 03:43:56.443200  507510 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0116 03:43:56.448268  507510 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 16 02:35 /usr/share/ca-certificates/minikubeCA.pem
	I0116 03:43:56.448343  507510 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0116 03:43:56.454227  507510 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0116 03:43:56.464467  507510 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/475478.pem && ln -fs /usr/share/ca-certificates/475478.pem /etc/ssl/certs/475478.pem"
	I0116 03:43:56.474769  507510 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/475478.pem
	I0116 03:43:56.480143  507510 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 16 02:44 /usr/share/ca-certificates/475478.pem
	I0116 03:43:56.480228  507510 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/475478.pem
	I0116 03:43:56.487996  507510 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/475478.pem /etc/ssl/certs/51391683.0"
	I0116 03:43:56.501097  507510 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4754782.pem && ln -fs /usr/share/ca-certificates/4754782.pem /etc/ssl/certs/4754782.pem"
	I0116 03:43:56.513266  507510 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4754782.pem
	I0116 03:43:56.518806  507510 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 16 02:44 /usr/share/ca-certificates/4754782.pem
	I0116 03:43:56.518891  507510 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4754782.pem
	I0116 03:43:56.527891  507510 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4754782.pem /etc/ssl/certs/3ec20f2e.0"
	I0116 03:43:56.538719  507510 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0116 03:43:56.544298  507510 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0116 03:43:56.551048  507510 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0116 03:43:56.557847  507510 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0116 03:43:56.567757  507510 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0116 03:43:56.575977  507510 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0116 03:43:56.584514  507510 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0116 03:43:56.593191  507510 kubeadm.go:404] StartCluster: {Name:old-k8s-version-696770 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-696770 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.167 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 03:43:56.593333  507510 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0116 03:43:56.593408  507510 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0116 03:43:56.653791  507510 cri.go:89] found id: ""
	I0116 03:43:56.653899  507510 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0116 03:43:56.667037  507510 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0116 03:43:56.667078  507510 kubeadm.go:636] restartCluster start
	I0116 03:43:56.667164  507510 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0116 03:43:56.679734  507510 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:43:56.681241  507510 kubeconfig.go:92] found "old-k8s-version-696770" server: "https://192.168.61.167:8443"
	I0116 03:43:56.683942  507510 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0116 03:43:56.696409  507510 api_server.go:166] Checking apiserver status ...
	I0116 03:43:56.696507  507510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:43:56.713120  507510 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:43:57.196652  507510 api_server.go:166] Checking apiserver status ...
	I0116 03:43:57.196826  507510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:43:57.213992  507510 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:43:57.697096  507510 api_server.go:166] Checking apiserver status ...
	I0116 03:43:57.697197  507510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:43:57.709671  507510 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:43:58.197291  507510 api_server.go:166] Checking apiserver status ...
	I0116 03:43:58.197401  507510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:43:58.214351  507510 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:43:58.696893  507510 api_server.go:166] Checking apiserver status ...
	I0116 03:43:58.697036  507510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:43:58.714549  507510 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:43:59.197173  507510 api_server.go:166] Checking apiserver status ...
	I0116 03:43:59.197304  507510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:43:59.213885  507510 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:43:56.773238  507339 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.280261968s)
	I0116 03:43:56.773267  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:43:57.046716  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:43:57.123831  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:43:57.221179  507339 api_server.go:52] waiting for apiserver process to appear ...
	I0116 03:43:57.221300  507339 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:43:57.721940  507339 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:43:58.222437  507339 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:43:58.722256  507339 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:43:59.222191  507339 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:43:59.721451  507339 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:43:59.753520  507339 api_server.go:72] duration metric: took 2.532341035s to wait for apiserver process to appear ...
	I0116 03:43:59.753556  507339 api_server.go:88] waiting for apiserver healthz status ...
	I0116 03:43:59.753601  507339 api_server.go:253] Checking apiserver healthz at https://192.168.39.103:8443/healthz ...
	I0116 03:43:59.754176  507339 api_server.go:269] stopped: https://192.168.39.103:8443/healthz: Get "https://192.168.39.103:8443/healthz": dial tcp 192.168.39.103:8443: connect: connection refused
	I0116 03:44:00.253773  507339 api_server.go:253] Checking apiserver healthz at https://192.168.39.103:8443/healthz ...
	I0116 03:43:57.000501  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:43:57.070966  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | unable to find current IP address of domain default-k8s-diff-port-434445 in network mk-default-k8s-diff-port-434445
	I0116 03:43:57.071015  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | I0116 03:43:57.000987  508579 retry.go:31] will retry after 1.963105502s: waiting for machine to come up
	I0116 03:43:58.966528  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:43:58.967131  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | unable to find current IP address of domain default-k8s-diff-port-434445 in network mk-default-k8s-diff-port-434445
	I0116 03:43:58.967173  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | I0116 03:43:58.967069  508579 retry.go:31] will retry after 2.871455928s: waiting for machine to come up
	I0116 03:43:59.697215  507510 api_server.go:166] Checking apiserver status ...
	I0116 03:43:59.697303  507510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:43:59.713992  507510 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:00.196535  507510 api_server.go:166] Checking apiserver status ...
	I0116 03:44:00.196649  507510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:00.212663  507510 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:00.697276  507510 api_server.go:166] Checking apiserver status ...
	I0116 03:44:00.697390  507510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:00.714622  507510 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:01.197125  507510 api_server.go:166] Checking apiserver status ...
	I0116 03:44:01.197242  507510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:01.214976  507510 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:01.696506  507510 api_server.go:166] Checking apiserver status ...
	I0116 03:44:01.696612  507510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:01.708204  507510 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:02.197402  507510 api_server.go:166] Checking apiserver status ...
	I0116 03:44:02.197519  507510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:02.211062  507510 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:02.697230  507510 api_server.go:166] Checking apiserver status ...
	I0116 03:44:02.697358  507510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:02.710340  507510 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:03.196949  507510 api_server.go:166] Checking apiserver status ...
	I0116 03:44:03.197047  507510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:03.213169  507510 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:03.696657  507510 api_server.go:166] Checking apiserver status ...
	I0116 03:44:03.696793  507510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:03.709422  507510 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:04.196970  507510 api_server.go:166] Checking apiserver status ...
	I0116 03:44:04.197083  507510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:04.209280  507510 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:03.473725  507339 api_server.go:279] https://192.168.39.103:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0116 03:44:03.473764  507339 api_server.go:103] status: https://192.168.39.103:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0116 03:44:03.473784  507339 api_server.go:253] Checking apiserver healthz at https://192.168.39.103:8443/healthz ...
	I0116 03:44:03.531825  507339 api_server.go:279] https://192.168.39.103:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0116 03:44:03.531873  507339 api_server.go:103] status: https://192.168.39.103:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0116 03:44:03.754148  507339 api_server.go:253] Checking apiserver healthz at https://192.168.39.103:8443/healthz ...
	I0116 03:44:03.759138  507339 api_server.go:279] https://192.168.39.103:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0116 03:44:03.759171  507339 api_server.go:103] status: https://192.168.39.103:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0116 03:44:04.254321  507339 api_server.go:253] Checking apiserver healthz at https://192.168.39.103:8443/healthz ...
	I0116 03:44:04.259317  507339 api_server.go:279] https://192.168.39.103:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0116 03:44:04.259350  507339 api_server.go:103] status: https://192.168.39.103:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0116 03:44:04.753890  507339 api_server.go:253] Checking apiserver healthz at https://192.168.39.103:8443/healthz ...
	I0116 03:44:04.759714  507339 api_server.go:279] https://192.168.39.103:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0116 03:44:04.759747  507339 api_server.go:103] status: https://192.168.39.103:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0116 03:44:05.254582  507339 api_server.go:253] Checking apiserver healthz at https://192.168.39.103:8443/healthz ...
	I0116 03:44:05.264904  507339 api_server.go:279] https://192.168.39.103:8443/healthz returned 200:
	ok
	I0116 03:44:05.283700  507339 api_server.go:141] control plane version: v1.29.0-rc.2
	I0116 03:44:05.283737  507339 api_server.go:131] duration metric: took 5.53017208s to wait for apiserver health ...
	I0116 03:44:05.283749  507339 cni.go:84] Creating CNI manager for ""
	I0116 03:44:05.283757  507339 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 03:44:05.285715  507339 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0116 03:44:05.287393  507339 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0116 03:44:05.327883  507339 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0116 03:44:05.371856  507339 system_pods.go:43] waiting for kube-system pods to appear ...
	I0116 03:44:05.382614  507339 system_pods.go:59] 8 kube-system pods found
	I0116 03:44:05.382656  507339 system_pods.go:61] "coredns-76f75df574-lr95b" [15dc0b11-f7ec-4729-bbfa-79b9649fbad6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0116 03:44:05.382666  507339 system_pods.go:61] "etcd-no-preload-666547" [f98dcc57-5869-4a35-b443-2e6c9beddd88] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0116 03:44:05.382682  507339 system_pods.go:61] "kube-apiserver-no-preload-666547" [3aceae9d-224a-4e8c-a02d-ad4199d8d558] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0116 03:44:05.382699  507339 system_pods.go:61] "kube-controller-manager-no-preload-666547" [af39c89c-3b41-4d6f-b075-16de04a3ecc0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0116 03:44:05.382706  507339 system_pods.go:61] "kube-proxy-dcmrn" [1e91c96f-cbc5-424d-a09e-06e34bf7a2e2] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0116 03:44:05.382714  507339 system_pods.go:61] "kube-scheduler-no-preload-666547" [0f2aa713-aebc-4858-a348-32169021235e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0116 03:44:05.382723  507339 system_pods.go:61] "metrics-server-57f55c9bc5-78vfj" [dbd2d3b2-ec0f-4253-8549-7c4299522c37] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:44:05.382735  507339 system_pods.go:61] "storage-provisioner" [f4e1ba45-217d-41d5-b583-2f60044879bc] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0116 03:44:05.382749  507339 system_pods.go:74] duration metric: took 10.858851ms to wait for pod list to return data ...
	I0116 03:44:05.382760  507339 node_conditions.go:102] verifying NodePressure condition ...
	I0116 03:44:05.391050  507339 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 03:44:05.391112  507339 node_conditions.go:123] node cpu capacity is 2
	I0116 03:44:05.391128  507339 node_conditions.go:105] duration metric: took 8.361426ms to run NodePressure ...
	I0116 03:44:05.391152  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:44:01.840907  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:01.841317  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | unable to find current IP address of domain default-k8s-diff-port-434445 in network mk-default-k8s-diff-port-434445
	I0116 03:44:01.841361  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | I0116 03:44:01.841259  508579 retry.go:31] will retry after 3.769759015s: waiting for machine to come up
	I0116 03:44:05.613594  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:05.614119  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | unable to find current IP address of domain default-k8s-diff-port-434445 in network mk-default-k8s-diff-port-434445
	I0116 03:44:05.614149  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | I0116 03:44:05.614062  508579 retry.go:31] will retry after 3.5833584s: waiting for machine to come up
	I0116 03:44:05.740205  507339 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0116 03:44:05.745269  507339 kubeadm.go:787] kubelet initialised
	I0116 03:44:05.745297  507339 kubeadm.go:788] duration metric: took 5.059802ms waiting for restarted kubelet to initialise ...
	I0116 03:44:05.745306  507339 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 03:44:05.751403  507339 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-lr95b" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:05.761740  507339 pod_ready.go:97] node "no-preload-666547" hosting pod "coredns-76f75df574-lr95b" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-666547" has status "Ready":"False"
	I0116 03:44:05.761784  507339 pod_ready.go:81] duration metric: took 10.344994ms waiting for pod "coredns-76f75df574-lr95b" in "kube-system" namespace to be "Ready" ...
	E0116 03:44:05.761796  507339 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-666547" hosting pod "coredns-76f75df574-lr95b" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-666547" has status "Ready":"False"
	I0116 03:44:05.761812  507339 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-666547" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:05.767627  507339 pod_ready.go:97] node "no-preload-666547" hosting pod "etcd-no-preload-666547" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-666547" has status "Ready":"False"
	I0116 03:44:05.767657  507339 pod_ready.go:81] duration metric: took 5.831478ms waiting for pod "etcd-no-preload-666547" in "kube-system" namespace to be "Ready" ...
	E0116 03:44:05.767669  507339 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-666547" hosting pod "etcd-no-preload-666547" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-666547" has status "Ready":"False"
	I0116 03:44:05.767677  507339 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-666547" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:05.772833  507339 pod_ready.go:97] node "no-preload-666547" hosting pod "kube-apiserver-no-preload-666547" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-666547" has status "Ready":"False"
	I0116 03:44:05.772863  507339 pod_ready.go:81] duration metric: took 5.17797ms waiting for pod "kube-apiserver-no-preload-666547" in "kube-system" namespace to be "Ready" ...
	E0116 03:44:05.772876  507339 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-666547" hosting pod "kube-apiserver-no-preload-666547" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-666547" has status "Ready":"False"
	I0116 03:44:05.772884  507339 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-666547" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:05.779234  507339 pod_ready.go:97] node "no-preload-666547" hosting pod "kube-controller-manager-no-preload-666547" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-666547" has status "Ready":"False"
	I0116 03:44:05.779259  507339 pod_ready.go:81] duration metric: took 6.362264ms waiting for pod "kube-controller-manager-no-preload-666547" in "kube-system" namespace to be "Ready" ...
	E0116 03:44:05.779270  507339 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-666547" hosting pod "kube-controller-manager-no-preload-666547" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-666547" has status "Ready":"False"
	I0116 03:44:05.779277  507339 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-dcmrn" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:06.175807  507339 pod_ready.go:97] node "no-preload-666547" hosting pod "kube-proxy-dcmrn" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-666547" has status "Ready":"False"
	I0116 03:44:06.175846  507339 pod_ready.go:81] duration metric: took 396.551923ms waiting for pod "kube-proxy-dcmrn" in "kube-system" namespace to be "Ready" ...
	E0116 03:44:06.175859  507339 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-666547" hosting pod "kube-proxy-dcmrn" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-666547" has status "Ready":"False"
	I0116 03:44:06.175867  507339 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-666547" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:06.580068  507339 pod_ready.go:97] node "no-preload-666547" hosting pod "kube-scheduler-no-preload-666547" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-666547" has status "Ready":"False"
	I0116 03:44:06.580102  507339 pod_ready.go:81] duration metric: took 404.226447ms waiting for pod "kube-scheduler-no-preload-666547" in "kube-system" namespace to be "Ready" ...
	E0116 03:44:06.580119  507339 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-666547" hosting pod "kube-scheduler-no-preload-666547" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-666547" has status "Ready":"False"
	I0116 03:44:06.580128  507339 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:06.976542  507339 pod_ready.go:97] node "no-preload-666547" hosting pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-666547" has status "Ready":"False"
	I0116 03:44:06.976573  507339 pod_ready.go:81] duration metric: took 396.432925ms waiting for pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace to be "Ready" ...
	E0116 03:44:06.976590  507339 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-666547" hosting pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-666547" has status "Ready":"False"
	I0116 03:44:06.976596  507339 pod_ready.go:38] duration metric: took 1.231281598s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 03:44:06.976621  507339 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0116 03:44:06.988884  507339 ops.go:34] apiserver oom_adj: -16
	I0116 03:44:06.988916  507339 kubeadm.go:640] restartCluster took 21.755069193s
	I0116 03:44:06.988940  507339 kubeadm.go:406] StartCluster complete in 21.811388098s
	I0116 03:44:06.988970  507339 settings.go:142] acquiring lock: {Name:mk3ded99c06d285b174e76622bb0741c8dd2d8fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:44:06.989066  507339 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17965-468241/kubeconfig
	I0116 03:44:06.990912  507339 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17965-468241/kubeconfig: {Name:mkac3c63bbcba9b4fa1bea3480797757d703fb9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:44:06.991191  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0116 03:44:06.991241  507339 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0116 03:44:06.991341  507339 addons.go:69] Setting storage-provisioner=true in profile "no-preload-666547"
	I0116 03:44:06.991362  507339 addons.go:234] Setting addon storage-provisioner=true in "no-preload-666547"
	I0116 03:44:06.991364  507339 addons.go:69] Setting default-storageclass=true in profile "no-preload-666547"
	W0116 03:44:06.991370  507339 addons.go:243] addon storage-provisioner should already be in state true
	I0116 03:44:06.991388  507339 addons.go:69] Setting metrics-server=true in profile "no-preload-666547"
	I0116 03:44:06.991397  507339 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-666547"
	I0116 03:44:06.991404  507339 addons.go:234] Setting addon metrics-server=true in "no-preload-666547"
	W0116 03:44:06.991412  507339 addons.go:243] addon metrics-server should already be in state true
	I0116 03:44:06.991438  507339 host.go:66] Checking if "no-preload-666547" exists ...
	I0116 03:44:06.991451  507339 host.go:66] Checking if "no-preload-666547" exists ...
	I0116 03:44:06.991460  507339 config.go:182] Loaded profile config "no-preload-666547": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0116 03:44:06.991855  507339 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:44:06.991855  507339 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:44:06.991893  507339 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:44:06.991858  507339 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:44:06.991940  507339 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:44:06.991976  507339 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:44:06.998037  507339 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-666547" context rescaled to 1 replicas
	I0116 03:44:06.998086  507339 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.103 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0116 03:44:07.000312  507339 out.go:177] * Verifying Kubernetes components...
	I0116 03:44:07.001889  507339 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 03:44:07.009057  507339 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41459
	I0116 03:44:07.009097  507339 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38693
	I0116 03:44:07.009596  507339 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:44:07.009735  507339 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:44:07.010178  507339 main.go:141] libmachine: Using API Version  1
	I0116 03:44:07.010195  507339 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:44:07.010368  507339 main.go:141] libmachine: Using API Version  1
	I0116 03:44:07.010392  507339 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:44:07.010412  507339 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33905
	I0116 03:44:07.010763  507339 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:44:07.010822  507339 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:44:07.010829  507339 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:44:07.010945  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetState
	I0116 03:44:07.011314  507339 main.go:141] libmachine: Using API Version  1
	I0116 03:44:07.011346  507339 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:44:07.011955  507339 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:44:07.011956  507339 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:44:07.012052  507339 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:44:07.012511  507339 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:44:07.012547  507339 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:44:07.015214  507339 addons.go:234] Setting addon default-storageclass=true in "no-preload-666547"
	W0116 03:44:07.015237  507339 addons.go:243] addon default-storageclass should already be in state true
	I0116 03:44:07.015269  507339 host.go:66] Checking if "no-preload-666547" exists ...
	I0116 03:44:07.015718  507339 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:44:07.015772  507339 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:44:07.029747  507339 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32807
	I0116 03:44:07.029990  507339 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35757
	I0116 03:44:07.030392  507339 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:44:07.030448  507339 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:44:07.030948  507339 main.go:141] libmachine: Using API Version  1
	I0116 03:44:07.030970  507339 main.go:141] libmachine: Using API Version  1
	I0116 03:44:07.030986  507339 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:44:07.031046  507339 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:44:07.031393  507339 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:44:07.031443  507339 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:44:07.031603  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetState
	I0116 03:44:07.031660  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetState
	I0116 03:44:07.033898  507339 main.go:141] libmachine: (no-preload-666547) Calling .DriverName
	I0116 03:44:07.033990  507339 main.go:141] libmachine: (no-preload-666547) Calling .DriverName
	I0116 03:44:07.036581  507339 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0116 03:44:07.034407  507339 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38687
	I0116 03:44:07.038382  507339 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0116 03:44:07.038420  507339 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0116 03:44:07.038444  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHHostname
	I0116 03:44:07.038499  507339 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 03:44:07.040190  507339 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0116 03:44:07.040211  507339 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0116 03:44:07.040232  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHHostname
	I0116 03:44:07.039010  507339 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:44:07.040908  507339 main.go:141] libmachine: Using API Version  1
	I0116 03:44:07.040931  507339 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:44:07.041538  507339 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:44:07.042268  507339 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:44:07.042319  507339 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:44:07.043270  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:44:07.043665  507339 main.go:141] libmachine: (no-preload-666547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:5f:03", ip: ""} in network mk-no-preload-666547: {Iface:virbr2 ExpiryTime:2024-01-16 04:43:15 +0000 UTC Type:0 Mac:52:54:00:4e:5f:03 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:no-preload-666547 Clientid:01:52:54:00:4e:5f:03}
	I0116 03:44:07.043697  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined IP address 192.168.39.103 and MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:44:07.043730  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:44:07.043966  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHPort
	I0116 03:44:07.044196  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHKeyPath
	I0116 03:44:07.044381  507339 main.go:141] libmachine: (no-preload-666547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:5f:03", ip: ""} in network mk-no-preload-666547: {Iface:virbr2 ExpiryTime:2024-01-16 04:43:15 +0000 UTC Type:0 Mac:52:54:00:4e:5f:03 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:no-preload-666547 Clientid:01:52:54:00:4e:5f:03}
	I0116 03:44:07.044422  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined IP address 192.168.39.103 and MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:44:07.044456  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHUsername
	I0116 03:44:07.044566  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHPort
	I0116 03:44:07.044691  507339 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/no-preload-666547/id_rsa Username:docker}
	I0116 03:44:07.044716  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHKeyPath
	I0116 03:44:07.044878  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHUsername
	I0116 03:44:07.045028  507339 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/no-preload-666547/id_rsa Username:docker}
	I0116 03:44:07.084507  507339 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46191
	I0116 03:44:07.085014  507339 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:44:07.085601  507339 main.go:141] libmachine: Using API Version  1
	I0116 03:44:07.085636  507339 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:44:07.086005  507339 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:44:07.086202  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetState
	I0116 03:44:07.088199  507339 main.go:141] libmachine: (no-preload-666547) Calling .DriverName
	I0116 03:44:07.088513  507339 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0116 03:44:07.088532  507339 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0116 03:44:07.088555  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHHostname
	I0116 03:44:07.092194  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:44:07.092719  507339 main.go:141] libmachine: (no-preload-666547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:5f:03", ip: ""} in network mk-no-preload-666547: {Iface:virbr2 ExpiryTime:2024-01-16 04:43:15 +0000 UTC Type:0 Mac:52:54:00:4e:5f:03 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:no-preload-666547 Clientid:01:52:54:00:4e:5f:03}
	I0116 03:44:07.092745  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined IP address 192.168.39.103 and MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:44:07.092953  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHPort
	I0116 03:44:07.093219  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHKeyPath
	I0116 03:44:07.093384  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHUsername
	I0116 03:44:07.093590  507339 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/no-preload-666547/id_rsa Username:docker}
	I0116 03:44:07.196191  507339 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0116 03:44:07.196219  507339 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0116 03:44:07.201036  507339 node_ready.go:35] waiting up to 6m0s for node "no-preload-666547" to be "Ready" ...
	I0116 03:44:07.201055  507339 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0116 03:44:07.222924  507339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0116 03:44:07.224548  507339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0116 03:44:07.237091  507339 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0116 03:44:07.237119  507339 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0116 03:44:07.289312  507339 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0116 03:44:07.289342  507339 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0116 03:44:07.334708  507339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0116 03:44:07.583740  507339 main.go:141] libmachine: Making call to close driver server
	I0116 03:44:07.583773  507339 main.go:141] libmachine: (no-preload-666547) Calling .Close
	I0116 03:44:07.584079  507339 main.go:141] libmachine: (no-preload-666547) DBG | Closing plugin on server side
	I0116 03:44:07.584135  507339 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:44:07.584146  507339 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:44:07.584155  507339 main.go:141] libmachine: Making call to close driver server
	I0116 03:44:07.584170  507339 main.go:141] libmachine: (no-preload-666547) Calling .Close
	I0116 03:44:07.584405  507339 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:44:07.584423  507339 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:44:07.592304  507339 main.go:141] libmachine: Making call to close driver server
	I0116 03:44:07.592332  507339 main.go:141] libmachine: (no-preload-666547) Calling .Close
	I0116 03:44:07.592608  507339 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:44:07.592656  507339 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:44:07.592663  507339 main.go:141] libmachine: (no-preload-666547) DBG | Closing plugin on server side
	I0116 03:44:08.290558  507339 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.065965685s)
	I0116 03:44:08.290643  507339 main.go:141] libmachine: Making call to close driver server
	I0116 03:44:08.290665  507339 main.go:141] libmachine: (no-preload-666547) Calling .Close
	I0116 03:44:08.291042  507339 main.go:141] libmachine: (no-preload-666547) DBG | Closing plugin on server side
	I0116 03:44:08.291103  507339 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:44:08.291121  507339 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:44:08.291136  507339 main.go:141] libmachine: Making call to close driver server
	I0116 03:44:08.291147  507339 main.go:141] libmachine: (no-preload-666547) Calling .Close
	I0116 03:44:08.291380  507339 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:44:08.291396  507339 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:44:08.291416  507339 main.go:141] libmachine: (no-preload-666547) DBG | Closing plugin on server side
	I0116 03:44:08.468146  507339 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.133348135s)
	I0116 03:44:08.468223  507339 main.go:141] libmachine: Making call to close driver server
	I0116 03:44:08.468248  507339 main.go:141] libmachine: (no-preload-666547) Calling .Close
	I0116 03:44:08.470360  507339 main.go:141] libmachine: (no-preload-666547) DBG | Closing plugin on server side
	I0116 03:44:08.470367  507339 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:44:08.470397  507339 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:44:08.470412  507339 main.go:141] libmachine: Making call to close driver server
	I0116 03:44:08.470423  507339 main.go:141] libmachine: (no-preload-666547) Calling .Close
	I0116 03:44:08.470734  507339 main.go:141] libmachine: (no-preload-666547) DBG | Closing plugin on server side
	I0116 03:44:08.470749  507339 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:44:08.470764  507339 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:44:08.470776  507339 addons.go:470] Verifying addon metrics-server=true in "no-preload-666547"
	I0116 03:44:08.473092  507339 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0116 03:44:04.697359  507510 api_server.go:166] Checking apiserver status ...
	I0116 03:44:04.697510  507510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:04.714690  507510 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:05.197225  507510 api_server.go:166] Checking apiserver status ...
	I0116 03:44:05.197333  507510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:05.213923  507510 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:05.696541  507510 api_server.go:166] Checking apiserver status ...
	I0116 03:44:05.696632  507510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:05.713744  507510 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:06.197249  507510 api_server.go:166] Checking apiserver status ...
	I0116 03:44:06.197369  507510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:06.209148  507510 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:06.696967  507510 api_server.go:166] Checking apiserver status ...
	I0116 03:44:06.697083  507510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:06.709624  507510 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:06.709656  507510 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0116 03:44:06.709665  507510 kubeadm.go:1135] stopping kube-system containers ...
	I0116 03:44:06.709676  507510 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0116 03:44:06.709736  507510 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0116 03:44:06.753286  507510 cri.go:89] found id: ""
	I0116 03:44:06.753370  507510 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0116 03:44:06.769990  507510 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0116 03:44:06.781090  507510 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0116 03:44:06.781168  507510 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0116 03:44:06.790936  507510 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0116 03:44:06.790971  507510 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:44:06.915790  507510 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:44:08.112494  507510 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.196668404s)
	I0116 03:44:08.112528  507510 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:44:08.328365  507510 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:44:08.435410  507510 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:44:08.576950  507510 api_server.go:52] waiting for apiserver process to appear ...
	I0116 03:44:08.577077  507510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:44:09.077263  507510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:44:08.474544  507339 addons.go:505] enable addons completed in 1.483307386s: enabled=[default-storageclass storage-provisioner metrics-server]
	I0116 03:44:09.206584  507339 node_ready.go:58] node "no-preload-666547" has status "Ready":"False"
	I0116 03:44:10.997580  507257 start.go:369] acquired machines lock for "embed-certs-615980" in 1m2.194717115s
	I0116 03:44:10.997669  507257 start.go:96] Skipping create...Using existing machine configuration
	I0116 03:44:10.997681  507257 fix.go:54] fixHost starting: 
	I0116 03:44:10.998101  507257 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:44:10.998135  507257 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:44:11.017060  507257 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38343
	I0116 03:44:11.017687  507257 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:44:11.018295  507257 main.go:141] libmachine: Using API Version  1
	I0116 03:44:11.018326  507257 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:44:11.018673  507257 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:44:11.018879  507257 main.go:141] libmachine: (embed-certs-615980) Calling .DriverName
	I0116 03:44:11.019056  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetState
	I0116 03:44:11.021360  507257 fix.go:102] recreateIfNeeded on embed-certs-615980: state=Stopped err=<nil>
	I0116 03:44:11.021396  507257 main.go:141] libmachine: (embed-certs-615980) Calling .DriverName
	W0116 03:44:11.021577  507257 fix.go:128] unexpected machine state, will restart: <nil>
	I0116 03:44:11.023462  507257 out.go:177] * Restarting existing kvm2 VM for "embed-certs-615980" ...
	I0116 03:44:11.025158  507257 main.go:141] libmachine: (embed-certs-615980) Calling .Start
	I0116 03:44:11.025397  507257 main.go:141] libmachine: (embed-certs-615980) Ensuring networks are active...
	I0116 03:44:11.026354  507257 main.go:141] libmachine: (embed-certs-615980) Ensuring network default is active
	I0116 03:44:11.026830  507257 main.go:141] libmachine: (embed-certs-615980) Ensuring network mk-embed-certs-615980 is active
	I0116 03:44:11.027263  507257 main.go:141] libmachine: (embed-certs-615980) Getting domain xml...
	I0116 03:44:11.028182  507257 main.go:141] libmachine: (embed-certs-615980) Creating domain...
	I0116 03:44:09.198824  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:09.199284  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Found IP for machine: 192.168.50.236
	I0116 03:44:09.199318  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Reserving static IP address...
	I0116 03:44:09.199348  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has current primary IP address 192.168.50.236 and MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:09.199756  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-434445", mac: "52:54:00:78:ea:d5", ip: "192.168.50.236"} in network mk-default-k8s-diff-port-434445: {Iface:virbr1 ExpiryTime:2024-01-16 04:44:01 +0000 UTC Type:0 Mac:52:54:00:78:ea:d5 Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:default-k8s-diff-port-434445 Clientid:01:52:54:00:78:ea:d5}
	I0116 03:44:09.199781  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | skip adding static IP to network mk-default-k8s-diff-port-434445 - found existing host DHCP lease matching {name: "default-k8s-diff-port-434445", mac: "52:54:00:78:ea:d5", ip: "192.168.50.236"}
	I0116 03:44:09.199794  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Reserved static IP address: 192.168.50.236
	I0116 03:44:09.199808  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Waiting for SSH to be available...
	I0116 03:44:09.199832  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | Getting to WaitForSSH function...
	I0116 03:44:09.202093  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:09.202494  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ea:d5", ip: ""} in network mk-default-k8s-diff-port-434445: {Iface:virbr1 ExpiryTime:2024-01-16 04:44:01 +0000 UTC Type:0 Mac:52:54:00:78:ea:d5 Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:default-k8s-diff-port-434445 Clientid:01:52:54:00:78:ea:d5}
	I0116 03:44:09.202529  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined IP address 192.168.50.236 and MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:09.202664  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | Using SSH client type: external
	I0116 03:44:09.202690  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | Using SSH private key: /home/jenkins/minikube-integration/17965-468241/.minikube/machines/default-k8s-diff-port-434445/id_rsa (-rw-------)
	I0116 03:44:09.202723  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.236 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17965-468241/.minikube/machines/default-k8s-diff-port-434445/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0116 03:44:09.202746  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | About to run SSH command:
	I0116 03:44:09.202763  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | exit 0
	I0116 03:44:09.302425  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | SSH cmd err, output: <nil>: 
	I0116 03:44:09.302867  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetConfigRaw
	I0116 03:44:09.303666  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetIP
	I0116 03:44:09.306482  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:09.306884  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ea:d5", ip: ""} in network mk-default-k8s-diff-port-434445: {Iface:virbr1 ExpiryTime:2024-01-16 04:44:01 +0000 UTC Type:0 Mac:52:54:00:78:ea:d5 Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:default-k8s-diff-port-434445 Clientid:01:52:54:00:78:ea:d5}
	I0116 03:44:09.306920  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined IP address 192.168.50.236 and MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:09.307189  507889 profile.go:148] Saving config to /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/default-k8s-diff-port-434445/config.json ...
	I0116 03:44:09.307418  507889 machine.go:88] provisioning docker machine ...
	I0116 03:44:09.307437  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .DriverName
	I0116 03:44:09.307673  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetMachineName
	I0116 03:44:09.307865  507889 buildroot.go:166] provisioning hostname "default-k8s-diff-port-434445"
	I0116 03:44:09.307886  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetMachineName
	I0116 03:44:09.308073  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHHostname
	I0116 03:44:09.310375  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:09.310726  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ea:d5", ip: ""} in network mk-default-k8s-diff-port-434445: {Iface:virbr1 ExpiryTime:2024-01-16 04:44:01 +0000 UTC Type:0 Mac:52:54:00:78:ea:d5 Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:default-k8s-diff-port-434445 Clientid:01:52:54:00:78:ea:d5}
	I0116 03:44:09.310765  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined IP address 192.168.50.236 and MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:09.310920  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHPort
	I0116 03:44:09.311111  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHKeyPath
	I0116 03:44:09.311231  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHKeyPath
	I0116 03:44:09.311384  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHUsername
	I0116 03:44:09.311528  507889 main.go:141] libmachine: Using SSH client type: native
	I0116 03:44:09.311932  507889 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.50.236 22 <nil> <nil>}
	I0116 03:44:09.311949  507889 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-434445 && echo "default-k8s-diff-port-434445" | sudo tee /etc/hostname
	I0116 03:44:09.469340  507889 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-434445
	
	I0116 03:44:09.469384  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHHostname
	I0116 03:44:09.472788  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:09.473108  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ea:d5", ip: ""} in network mk-default-k8s-diff-port-434445: {Iface:virbr1 ExpiryTime:2024-01-16 04:44:01 +0000 UTC Type:0 Mac:52:54:00:78:ea:d5 Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:default-k8s-diff-port-434445 Clientid:01:52:54:00:78:ea:d5}
	I0116 03:44:09.473166  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined IP address 192.168.50.236 and MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:09.473353  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHPort
	I0116 03:44:09.473571  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHKeyPath
	I0116 03:44:09.473768  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHKeyPath
	I0116 03:44:09.473963  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHUsername
	I0116 03:44:09.474171  507889 main.go:141] libmachine: Using SSH client type: native
	I0116 03:44:09.474626  507889 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.50.236 22 <nil> <nil>}
	I0116 03:44:09.474657  507889 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-434445' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-434445/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-434445' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0116 03:44:09.622177  507889 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0116 03:44:09.622223  507889 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17965-468241/.minikube CaCertPath:/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17965-468241/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17965-468241/.minikube}
	I0116 03:44:09.622253  507889 buildroot.go:174] setting up certificates
	I0116 03:44:09.622267  507889 provision.go:83] configureAuth start
	I0116 03:44:09.622280  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetMachineName
	I0116 03:44:09.622649  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetIP
	I0116 03:44:09.625970  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:09.626394  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ea:d5", ip: ""} in network mk-default-k8s-diff-port-434445: {Iface:virbr1 ExpiryTime:2024-01-16 04:44:01 +0000 UTC Type:0 Mac:52:54:00:78:ea:d5 Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:default-k8s-diff-port-434445 Clientid:01:52:54:00:78:ea:d5}
	I0116 03:44:09.626429  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined IP address 192.168.50.236 and MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:09.626603  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHHostname
	I0116 03:44:09.629623  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:09.630022  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ea:d5", ip: ""} in network mk-default-k8s-diff-port-434445: {Iface:virbr1 ExpiryTime:2024-01-16 04:44:01 +0000 UTC Type:0 Mac:52:54:00:78:ea:d5 Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:default-k8s-diff-port-434445 Clientid:01:52:54:00:78:ea:d5}
	I0116 03:44:09.630052  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined IP address 192.168.50.236 and MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:09.630263  507889 provision.go:138] copyHostCerts
	I0116 03:44:09.630354  507889 exec_runner.go:144] found /home/jenkins/minikube-integration/17965-468241/.minikube/ca.pem, removing ...
	I0116 03:44:09.630370  507889 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17965-468241/.minikube/ca.pem
	I0116 03:44:09.630449  507889 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17965-468241/.minikube/ca.pem (1078 bytes)
	I0116 03:44:09.630603  507889 exec_runner.go:144] found /home/jenkins/minikube-integration/17965-468241/.minikube/cert.pem, removing ...
	I0116 03:44:09.630626  507889 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17965-468241/.minikube/cert.pem
	I0116 03:44:09.630661  507889 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17965-468241/.minikube/cert.pem (1123 bytes)
	I0116 03:44:09.630760  507889 exec_runner.go:144] found /home/jenkins/minikube-integration/17965-468241/.minikube/key.pem, removing ...
	I0116 03:44:09.630775  507889 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17965-468241/.minikube/key.pem
	I0116 03:44:09.630805  507889 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17965-468241/.minikube/key.pem (1679 bytes)
	I0116 03:44:09.630891  507889 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17965-468241/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-434445 san=[192.168.50.236 192.168.50.236 localhost 127.0.0.1 minikube default-k8s-diff-port-434445]
	I0116 03:44:10.127058  507889 provision.go:172] copyRemoteCerts
	I0116 03:44:10.127138  507889 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0116 03:44:10.127175  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHHostname
	I0116 03:44:10.130572  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:10.131095  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ea:d5", ip: ""} in network mk-default-k8s-diff-port-434445: {Iface:virbr1 ExpiryTime:2024-01-16 04:44:01 +0000 UTC Type:0 Mac:52:54:00:78:ea:d5 Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:default-k8s-diff-port-434445 Clientid:01:52:54:00:78:ea:d5}
	I0116 03:44:10.131133  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined IP address 192.168.50.236 and MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:10.131313  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHPort
	I0116 03:44:10.131590  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHKeyPath
	I0116 03:44:10.131825  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHUsername
	I0116 03:44:10.132001  507889 sshutil.go:53] new ssh client: &{IP:192.168.50.236 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/default-k8s-diff-port-434445/id_rsa Username:docker}
	I0116 03:44:10.238263  507889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0116 03:44:10.269567  507889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0116 03:44:10.295065  507889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0116 03:44:10.323347  507889 provision.go:86] duration metric: configureAuth took 701.062063ms
	I0116 03:44:10.323391  507889 buildroot.go:189] setting minikube options for container-runtime
	I0116 03:44:10.323667  507889 config.go:182] Loaded profile config "default-k8s-diff-port-434445": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 03:44:10.323774  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHHostname
	I0116 03:44:10.326825  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:10.327222  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ea:d5", ip: ""} in network mk-default-k8s-diff-port-434445: {Iface:virbr1 ExpiryTime:2024-01-16 04:44:01 +0000 UTC Type:0 Mac:52:54:00:78:ea:d5 Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:default-k8s-diff-port-434445 Clientid:01:52:54:00:78:ea:d5}
	I0116 03:44:10.327266  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined IP address 192.168.50.236 and MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:10.327423  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHPort
	I0116 03:44:10.327682  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHKeyPath
	I0116 03:44:10.327883  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHKeyPath
	I0116 03:44:10.328077  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHUsername
	I0116 03:44:10.328269  507889 main.go:141] libmachine: Using SSH client type: native
	I0116 03:44:10.328743  507889 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.50.236 22 <nil> <nil>}
	I0116 03:44:10.328778  507889 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0116 03:44:10.700188  507889 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0116 03:44:10.700221  507889 machine.go:91] provisioned docker machine in 1.392790129s
	I0116 03:44:10.700232  507889 start.go:300] post-start starting for "default-k8s-diff-port-434445" (driver="kvm2")
	I0116 03:44:10.700244  507889 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0116 03:44:10.700261  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .DriverName
	I0116 03:44:10.700745  507889 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0116 03:44:10.700786  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHHostname
	I0116 03:44:10.704466  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:10.705001  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ea:d5", ip: ""} in network mk-default-k8s-diff-port-434445: {Iface:virbr1 ExpiryTime:2024-01-16 04:44:01 +0000 UTC Type:0 Mac:52:54:00:78:ea:d5 Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:default-k8s-diff-port-434445 Clientid:01:52:54:00:78:ea:d5}
	I0116 03:44:10.705045  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined IP address 192.168.50.236 and MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:10.705278  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHPort
	I0116 03:44:10.705503  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHKeyPath
	I0116 03:44:10.705735  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHUsername
	I0116 03:44:10.705912  507889 sshutil.go:53] new ssh client: &{IP:192.168.50.236 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/default-k8s-diff-port-434445/id_rsa Username:docker}
	I0116 03:44:10.807625  507889 ssh_runner.go:195] Run: cat /etc/os-release
	I0116 03:44:10.813392  507889 info.go:137] Remote host: Buildroot 2021.02.12
	I0116 03:44:10.813428  507889 filesync.go:126] Scanning /home/jenkins/minikube-integration/17965-468241/.minikube/addons for local assets ...
	I0116 03:44:10.813519  507889 filesync.go:126] Scanning /home/jenkins/minikube-integration/17965-468241/.minikube/files for local assets ...
	I0116 03:44:10.813596  507889 filesync.go:149] local asset: /home/jenkins/minikube-integration/17965-468241/.minikube/files/etc/ssl/certs/4754782.pem -> 4754782.pem in /etc/ssl/certs
	I0116 03:44:10.813687  507889 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0116 03:44:10.824028  507889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/files/etc/ssl/certs/4754782.pem --> /etc/ssl/certs/4754782.pem (1708 bytes)
	I0116 03:44:10.853453  507889 start.go:303] post-start completed in 153.201453ms
	I0116 03:44:10.853493  507889 fix.go:56] fixHost completed within 22.144172966s
	I0116 03:44:10.853543  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHHostname
	I0116 03:44:10.856529  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:10.856907  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ea:d5", ip: ""} in network mk-default-k8s-diff-port-434445: {Iface:virbr1 ExpiryTime:2024-01-16 04:44:01 +0000 UTC Type:0 Mac:52:54:00:78:ea:d5 Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:default-k8s-diff-port-434445 Clientid:01:52:54:00:78:ea:d5}
	I0116 03:44:10.856967  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined IP address 192.168.50.236 and MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:10.857185  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHPort
	I0116 03:44:10.857438  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHKeyPath
	I0116 03:44:10.857636  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHKeyPath
	I0116 03:44:10.857790  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHUsername
	I0116 03:44:10.857974  507889 main.go:141] libmachine: Using SSH client type: native
	I0116 03:44:10.858502  507889 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.50.236 22 <nil> <nil>}
	I0116 03:44:10.858528  507889 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0116 03:44:10.997398  507889 main.go:141] libmachine: SSH cmd err, output: <nil>: 1705376650.933903671
	
	I0116 03:44:10.997426  507889 fix.go:206] guest clock: 1705376650.933903671
	I0116 03:44:10.997436  507889 fix.go:219] Guest: 2024-01-16 03:44:10.933903671 +0000 UTC Remote: 2024-01-16 03:44:10.853498317 +0000 UTC m=+234.302480786 (delta=80.405354ms)
	I0116 03:44:10.997464  507889 fix.go:190] guest clock delta is within tolerance: 80.405354ms
	I0116 03:44:10.997471  507889 start.go:83] releasing machines lock for "default-k8s-diff-port-434445", held for 22.288188395s
	I0116 03:44:10.997517  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .DriverName
	I0116 03:44:10.997857  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetIP
	I0116 03:44:11.001410  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:11.001814  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ea:d5", ip: ""} in network mk-default-k8s-diff-port-434445: {Iface:virbr1 ExpiryTime:2024-01-16 04:44:01 +0000 UTC Type:0 Mac:52:54:00:78:ea:d5 Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:default-k8s-diff-port-434445 Clientid:01:52:54:00:78:ea:d5}
	I0116 03:44:11.001864  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined IP address 192.168.50.236 and MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:11.002016  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .DriverName
	I0116 03:44:11.002649  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .DriverName
	I0116 03:44:11.002923  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .DriverName
	I0116 03:44:11.003015  507889 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0116 03:44:11.003068  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHHostname
	I0116 03:44:11.003258  507889 ssh_runner.go:195] Run: cat /version.json
	I0116 03:44:11.003294  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHHostname
	I0116 03:44:11.006383  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:11.006699  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:11.006800  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ea:d5", ip: ""} in network mk-default-k8s-diff-port-434445: {Iface:virbr1 ExpiryTime:2024-01-16 04:44:01 +0000 UTC Type:0 Mac:52:54:00:78:ea:d5 Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:default-k8s-diff-port-434445 Clientid:01:52:54:00:78:ea:d5}
	I0116 03:44:11.006850  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined IP address 192.168.50.236 and MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:11.007123  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHPort
	I0116 03:44:11.007230  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ea:d5", ip: ""} in network mk-default-k8s-diff-port-434445: {Iface:virbr1 ExpiryTime:2024-01-16 04:44:01 +0000 UTC Type:0 Mac:52:54:00:78:ea:d5 Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:default-k8s-diff-port-434445 Clientid:01:52:54:00:78:ea:d5}
	I0116 03:44:11.007330  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined IP address 192.168.50.236 and MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:11.007353  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHKeyPath
	I0116 03:44:11.007378  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHPort
	I0116 03:44:11.007585  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHUsername
	I0116 03:44:11.007597  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHKeyPath
	I0116 03:44:11.007737  507889 sshutil.go:53] new ssh client: &{IP:192.168.50.236 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/default-k8s-diff-port-434445/id_rsa Username:docker}
	I0116 03:44:11.007795  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHUsername
	I0116 03:44:11.007980  507889 sshutil.go:53] new ssh client: &{IP:192.168.50.236 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/default-k8s-diff-port-434445/id_rsa Username:docker}
	I0116 03:44:11.139882  507889 ssh_runner.go:195] Run: systemctl --version
	I0116 03:44:11.147082  507889 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0116 03:44:11.317582  507889 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0116 03:44:11.324567  507889 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0116 03:44:11.324656  507889 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0116 03:44:11.348193  507889 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0116 03:44:11.348225  507889 start.go:475] detecting cgroup driver to use...
	I0116 03:44:11.348319  507889 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0116 03:44:11.367049  507889 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0116 03:44:11.386632  507889 docker.go:217] disabling cri-docker service (if available) ...
	I0116 03:44:11.386713  507889 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0116 03:44:11.409551  507889 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0116 03:44:11.424599  507889 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0116 03:44:11.586480  507889 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0116 03:44:11.733770  507889 docker.go:233] disabling docker service ...
	I0116 03:44:11.733855  507889 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0116 03:44:11.751184  507889 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0116 03:44:11.766970  507889 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0116 03:44:11.903645  507889 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0116 03:44:12.017100  507889 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0116 03:44:12.031725  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0116 03:44:12.052091  507889 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0116 03:44:12.052179  507889 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 03:44:12.063115  507889 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0116 03:44:12.063219  507889 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 03:44:12.073109  507889 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 03:44:12.083438  507889 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 03:44:12.095783  507889 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0116 03:44:12.107816  507889 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0116 03:44:12.117997  507889 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0116 03:44:12.118077  507889 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0116 03:44:12.132997  507889 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0116 03:44:12.145200  507889 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0116 03:44:12.266786  507889 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0116 03:44:12.460779  507889 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0116 03:44:12.460892  507889 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0116 03:44:12.469200  507889 start.go:543] Will wait 60s for crictl version
	I0116 03:44:12.469305  507889 ssh_runner.go:195] Run: which crictl
	I0116 03:44:12.473761  507889 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0116 03:44:12.536262  507889 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0116 03:44:12.536382  507889 ssh_runner.go:195] Run: crio --version
	I0116 03:44:12.593212  507889 ssh_runner.go:195] Run: crio --version
	I0116 03:44:12.650197  507889 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
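
The block above reconfigures CRI-O (pause image, cgroupfs cgroup manager), restarts the service, and then waits up to 60s for /var/run/crio/crio.sock before querying crictl. A minimal Go sketch of that wait step, assuming a local socket path and the 60s budget taken from the log (the helper is hypothetical, not minikube's code):

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls for the CRI socket until it appears or the deadline passes.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil // socket is present; crictl/runtime version checks can proceed
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("CRI socket is ready")
}
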
	I0116 03:44:09.577389  507510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:44:10.077774  507510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:44:10.578076  507510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:44:10.613091  507510 api_server.go:72] duration metric: took 2.036140794s to wait for apiserver process to appear ...
	I0116 03:44:10.613124  507510 api_server.go:88] waiting for apiserver healthz status ...
	I0116 03:44:10.613148  507510 api_server.go:253] Checking apiserver healthz at https://192.168.61.167:8443/healthz ...
	I0116 03:44:11.706731  507339 node_ready.go:58] node "no-preload-666547" has status "Ready":"False"
	I0116 03:44:13.713926  507339 node_ready.go:49] node "no-preload-666547" has status "Ready":"True"
	I0116 03:44:13.713958  507339 node_ready.go:38] duration metric: took 6.512893933s waiting for node "no-preload-666547" to be "Ready" ...
	I0116 03:44:13.713972  507339 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 03:44:13.727930  507339 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-lr95b" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:14.740352  507339 pod_ready.go:92] pod "coredns-76f75df574-lr95b" in "kube-system" namespace has status "Ready":"True"
	I0116 03:44:14.740392  507339 pod_ready.go:81] duration metric: took 1.012371035s waiting for pod "coredns-76f75df574-lr95b" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:14.740408  507339 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-666547" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:11.442223  507257 main.go:141] libmachine: (embed-certs-615980) Waiting to get IP...
	I0116 03:44:11.443346  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:11.443787  507257 main.go:141] libmachine: (embed-certs-615980) DBG | unable to find current IP address of domain embed-certs-615980 in network mk-embed-certs-615980
	I0116 03:44:11.443851  507257 main.go:141] libmachine: (embed-certs-615980) DBG | I0116 03:44:11.443761  508731 retry.go:31] will retry after 306.7144ms: waiting for machine to come up
	I0116 03:44:11.752574  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:11.753186  507257 main.go:141] libmachine: (embed-certs-615980) DBG | unable to find current IP address of domain embed-certs-615980 in network mk-embed-certs-615980
	I0116 03:44:11.753217  507257 main.go:141] libmachine: (embed-certs-615980) DBG | I0116 03:44:11.753126  508731 retry.go:31] will retry after 270.011585ms: waiting for machine to come up
	I0116 03:44:12.024942  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:12.025507  507257 main.go:141] libmachine: (embed-certs-615980) DBG | unable to find current IP address of domain embed-certs-615980 in network mk-embed-certs-615980
	I0116 03:44:12.025548  507257 main.go:141] libmachine: (embed-certs-615980) DBG | I0116 03:44:12.025433  508731 retry.go:31] will retry after 328.680313ms: waiting for machine to come up
	I0116 03:44:12.355989  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:12.356557  507257 main.go:141] libmachine: (embed-certs-615980) DBG | unable to find current IP address of domain embed-certs-615980 in network mk-embed-certs-615980
	I0116 03:44:12.356582  507257 main.go:141] libmachine: (embed-certs-615980) DBG | I0116 03:44:12.356493  508731 retry.go:31] will retry after 598.194786ms: waiting for machine to come up
	I0116 03:44:12.956170  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:12.956754  507257 main.go:141] libmachine: (embed-certs-615980) DBG | unable to find current IP address of domain embed-certs-615980 in network mk-embed-certs-615980
	I0116 03:44:12.956782  507257 main.go:141] libmachine: (embed-certs-615980) DBG | I0116 03:44:12.956673  508731 retry.go:31] will retry after 713.891978ms: waiting for machine to come up
	I0116 03:44:13.672728  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:13.673741  507257 main.go:141] libmachine: (embed-certs-615980) DBG | unable to find current IP address of domain embed-certs-615980 in network mk-embed-certs-615980
	I0116 03:44:13.673772  507257 main.go:141] libmachine: (embed-certs-615980) DBG | I0116 03:44:13.673636  508731 retry.go:31] will retry after 789.579297ms: waiting for machine to come up
	I0116 03:44:14.464913  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:14.465532  507257 main.go:141] libmachine: (embed-certs-615980) DBG | unable to find current IP address of domain embed-certs-615980 in network mk-embed-certs-615980
	I0116 03:44:14.465567  507257 main.go:141] libmachine: (embed-certs-615980) DBG | I0116 03:44:14.465446  508731 retry.go:31] will retry after 744.319122ms: waiting for machine to come up
	I0116 03:44:15.211748  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:15.212356  507257 main.go:141] libmachine: (embed-certs-615980) DBG | unable to find current IP address of domain embed-certs-615980 in network mk-embed-certs-615980
	I0116 03:44:15.212389  507257 main.go:141] libmachine: (embed-certs-615980) DBG | I0116 03:44:15.212282  508731 retry.go:31] will retry after 1.231175582s: waiting for machine to come up
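
The retry.go:31 lines above show the wait-for-IP loop for embed-certs-615980: each failed lookup schedules another attempt after a growing, slightly randomized delay. A rough Go sketch of that pattern, where checkIP is a hypothetical stand-in for the libvirt DHCP-lease lookup minikube actually performs:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// checkIP is a placeholder: in minikube this inspects the host DHCP leases for the VM's MAC address.
func checkIP() (string, error) {
	return "", errors.New("unable to find current IP address")
}

func main() {
	backoff := 300 * time.Millisecond
	for attempt := 1; attempt <= 10; attempt++ {
		if ip, err := checkIP(); err == nil {
			fmt.Println("machine is up at", ip)
			return
		}
		// Add some jitter and grow the delay, as the log's retry intervals do.
		wait := backoff + time.Duration(rand.Int63n(int64(backoff/2)))
		fmt.Printf("attempt %d: will retry after %s: waiting for machine to come up\n", attempt, wait)
		time.Sleep(wait)
		backoff = backoff * 3 / 2
	}
	fmt.Println("gave up waiting for an IP")
}
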
	I0116 03:44:12.652092  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetIP
	I0116 03:44:12.655815  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:12.656308  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ea:d5", ip: ""} in network mk-default-k8s-diff-port-434445: {Iface:virbr1 ExpiryTime:2024-01-16 04:44:01 +0000 UTC Type:0 Mac:52:54:00:78:ea:d5 Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:default-k8s-diff-port-434445 Clientid:01:52:54:00:78:ea:d5}
	I0116 03:44:12.656383  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined IP address 192.168.50.236 and MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:12.656790  507889 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0116 03:44:12.661880  507889 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 03:44:12.677695  507889 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0116 03:44:12.677794  507889 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 03:44:12.731676  507889 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0116 03:44:12.731794  507889 ssh_runner.go:195] Run: which lz4
	I0116 03:44:12.736614  507889 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0116 03:44:12.741554  507889 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0116 03:44:12.741595  507889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0116 03:44:15.047223  507889 crio.go:444] Took 2.310653 seconds to copy over tarball
	I0116 03:44:15.047386  507889 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
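
Above, the runner first stats /preloaded.tar.lz4 on the guest, copies the preload tarball over because it is missing, and then unpacks it into /var with lz4. A hedged local sketch of that check-then-extract step, using the same tar flags as the logged command (minikube runs these over SSH rather than locally):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const tarball = "/preloaded.tar.lz4"
	if _, err := os.Stat(tarball); err != nil {
		fmt.Println("tarball missing; it would be copied over first:", err)
		return
	}
	// Same flags as the logged command: preserve security xattrs, decompress
	// with lz4, and unpack under /var where the container image store lives.
	cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Fprintln(os.Stderr, "extract failed:", err)
		os.Exit(1)
	}
	fmt.Println("preloaded images extracted")
}
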
	I0116 03:44:15.614559  507510 api_server.go:269] stopped: https://192.168.61.167:8443/healthz: Get "https://192.168.61.167:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0116 03:44:15.614617  507510 api_server.go:253] Checking apiserver healthz at https://192.168.61.167:8443/healthz ...
	I0116 03:44:16.992197  507510 api_server.go:279] https://192.168.61.167:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0116 03:44:16.992236  507510 api_server.go:103] status: https://192.168.61.167:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0116 03:44:16.992255  507510 api_server.go:253] Checking apiserver healthz at https://192.168.61.167:8443/healthz ...
	I0116 03:44:17.098327  507510 api_server.go:279] https://192.168.61.167:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0116 03:44:17.098365  507510 api_server.go:103] status: https://192.168.61.167:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0116 03:44:17.113518  507510 api_server.go:253] Checking apiserver healthz at https://192.168.61.167:8443/healthz ...
	I0116 03:44:17.133276  507510 api_server.go:279] https://192.168.61.167:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	W0116 03:44:17.133308  507510 api_server.go:103] status: https://192.168.61.167:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	I0116 03:44:17.613843  507510 api_server.go:253] Checking apiserver healthz at https://192.168.61.167:8443/healthz ...
	I0116 03:44:17.621074  507510 api_server.go:279] https://192.168.61.167:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0116 03:44:17.621131  507510 api_server.go:103] status: https://192.168.61.167:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0116 03:44:18.113648  507510 api_server.go:253] Checking apiserver healthz at https://192.168.61.167:8443/healthz ...
	I0116 03:44:18.936452  507510 api_server.go:279] https://192.168.61.167:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0116 03:44:18.936492  507510 api_server.go:103] status: https://192.168.61.167:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0116 03:44:18.936521  507510 api_server.go:253] Checking apiserver healthz at https://192.168.61.167:8443/healthz ...
	I0116 03:44:19.466220  507510 api_server.go:279] https://192.168.61.167:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0116 03:44:19.466259  507510 api_server.go:103] status: https://192.168.61.167:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0116 03:44:19.466278  507510 api_server.go:253] Checking apiserver healthz at https://192.168.61.167:8443/healthz ...
	I0116 03:44:16.750170  507339 pod_ready.go:102] pod "etcd-no-preload-666547" in "kube-system" namespace has status "Ready":"False"
	I0116 03:44:19.438168  507339 pod_ready.go:92] pod "etcd-no-preload-666547" in "kube-system" namespace has status "Ready":"True"
	I0116 03:44:19.438207  507339 pod_ready.go:81] duration metric: took 4.697789344s waiting for pod "etcd-no-preload-666547" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:19.438224  507339 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-666547" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:19.445845  507339 pod_ready.go:92] pod "kube-apiserver-no-preload-666547" in "kube-system" namespace has status "Ready":"True"
	I0116 03:44:19.445875  507339 pod_ready.go:81] duration metric: took 7.641191ms waiting for pod "kube-apiserver-no-preload-666547" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:19.445889  507339 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-666547" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:19.452468  507339 pod_ready.go:92] pod "kube-controller-manager-no-preload-666547" in "kube-system" namespace has status "Ready":"True"
	I0116 03:44:19.452491  507339 pod_ready.go:81] duration metric: took 6.593311ms waiting for pod "kube-controller-manager-no-preload-666547" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:19.452500  507339 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-dcmrn" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:19.459542  507339 pod_ready.go:92] pod "kube-proxy-dcmrn" in "kube-system" namespace has status "Ready":"True"
	I0116 03:44:19.459576  507339 pod_ready.go:81] duration metric: took 7.067817ms waiting for pod "kube-proxy-dcmrn" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:19.459591  507339 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-666547" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:19.966827  507339 pod_ready.go:92] pod "kube-scheduler-no-preload-666547" in "kube-system" namespace has status "Ready":"True"
	I0116 03:44:19.966867  507339 pod_ready.go:81] duration metric: took 507.26823ms waiting for pod "kube-scheduler-no-preload-666547" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:19.966878  507339 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace to be "Ready" ...
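
The pod_ready lines above poll each system-critical pod in kube-system until its Ready condition turns True, recording how long each wait took. A minimal client-go sketch of the same readiness check, with the kubeconfig path as a placeholder and the pod name taken from the log (this is an illustration, not minikube's pod_ready.go):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Poll every 2s for up to 6 minutes, mirroring the "waiting up to 6m0s" entries above.
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.Background(), "etcd-no-preload-666547", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to be Ready")
}
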
	I0116 03:44:19.946145  507510 api_server.go:279] https://192.168.61.167:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0116 03:44:19.946209  507510 api_server.go:103] status: https://192.168.61.167:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0116 03:44:19.946230  507510 api_server.go:253] Checking apiserver healthz at https://192.168.61.167:8443/healthz ...
	I0116 03:44:20.259035  507510 api_server.go:279] https://192.168.61.167:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0116 03:44:20.259091  507510 api_server.go:103] status: https://192.168.61.167:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0116 03:44:20.259142  507510 api_server.go:253] Checking apiserver healthz at https://192.168.61.167:8443/healthz ...
	I0116 03:44:20.330196  507510 api_server.go:279] https://192.168.61.167:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0116 03:44:20.330237  507510 api_server.go:103] status: https://192.168.61.167:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0116 03:44:20.613624  507510 api_server.go:253] Checking apiserver healthz at https://192.168.61.167:8443/healthz ...
	I0116 03:44:20.621956  507510 api_server.go:279] https://192.168.61.167:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0116 03:44:20.622008  507510 api_server.go:103] status: https://192.168.61.167:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0116 03:44:21.113536  507510 api_server.go:253] Checking apiserver healthz at https://192.168.61.167:8443/healthz ...
	I0116 03:44:21.125326  507510 api_server.go:279] https://192.168.61.167:8443/healthz returned 200:
	ok
	I0116 03:44:21.137555  507510 api_server.go:141] control plane version: v1.16.0
	I0116 03:44:21.137602  507510 api_server.go:131] duration metric: took 10.524468396s to wait for apiserver health ...
	I0116 03:44:21.137616  507510 cni.go:84] Creating CNI manager for ""
	I0116 03:44:21.137625  507510 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 03:44:21.139682  507510 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
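
The api_server lines above poll https://192.168.61.167:8443/healthz, tolerating 403 and 500 responses until the endpoint finally returns 200 ok. A simplified Go sketch of that control flow; unlike minikube it skips client certificates and disables TLS verification, so it only illustrates the retry loop:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	url := "https://192.168.61.167:8443/healthz" // endpoint from the log
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy:", string(body))
				return
			}
			// 403 (RBAC not bootstrapped yet) and 500 (post-start hooks pending) are retried.
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("gave up waiting for a healthy apiserver")
}
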
	I0116 03:44:16.445685  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:16.446216  507257 main.go:141] libmachine: (embed-certs-615980) DBG | unable to find current IP address of domain embed-certs-615980 in network mk-embed-certs-615980
	I0116 03:44:16.446246  507257 main.go:141] libmachine: (embed-certs-615980) DBG | I0116 03:44:16.446137  508731 retry.go:31] will retry after 1.400972s: waiting for machine to come up
	I0116 03:44:17.848447  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:17.848964  507257 main.go:141] libmachine: (embed-certs-615980) DBG | unable to find current IP address of domain embed-certs-615980 in network mk-embed-certs-615980
	I0116 03:44:17.848991  507257 main.go:141] libmachine: (embed-certs-615980) DBG | I0116 03:44:17.848916  508731 retry.go:31] will retry after 2.293115324s: waiting for machine to come up
	I0116 03:44:20.145242  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:20.145899  507257 main.go:141] libmachine: (embed-certs-615980) DBG | unable to find current IP address of domain embed-certs-615980 in network mk-embed-certs-615980
	I0116 03:44:20.145933  507257 main.go:141] libmachine: (embed-certs-615980) DBG | I0116 03:44:20.145842  508731 retry.go:31] will retry after 2.158183619s: waiting for machine to come up
	I0116 03:44:18.744370  507889 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.696918616s)
	I0116 03:44:18.744426  507889 crio.go:451] Took 3.697118 seconds to extract the tarball
	I0116 03:44:18.744440  507889 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0116 03:44:18.792685  507889 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 03:44:18.868262  507889 crio.go:496] all images are preloaded for cri-o runtime.
	I0116 03:44:18.868291  507889 cache_images.go:84] Images are preloaded, skipping loading
	I0116 03:44:18.868382  507889 ssh_runner.go:195] Run: crio config
	I0116 03:44:18.954026  507889 cni.go:84] Creating CNI manager for ""
	I0116 03:44:18.954060  507889 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 03:44:18.954085  507889 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0116 03:44:18.954138  507889 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.236 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-434445 NodeName:default-k8s-diff-port-434445 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.236"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.236 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0116 03:44:18.954362  507889 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.236
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-434445"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.236
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.236"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0116 03:44:18.954483  507889 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-434445 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.236
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-434445 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0116 03:44:18.954557  507889 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0116 03:44:18.966046  507889 binaries.go:44] Found k8s binaries, skipping transfer
	I0116 03:44:18.966143  507889 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0116 03:44:18.977441  507889 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (388 bytes)
	I0116 03:44:18.997304  507889 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0116 03:44:19.016597  507889 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2115 bytes)
	I0116 03:44:19.035635  507889 ssh_runner.go:195] Run: grep 192.168.50.236	control-plane.minikube.internal$ /etc/hosts
	I0116 03:44:19.039882  507889 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.236	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 03:44:19.053342  507889 certs.go:56] Setting up /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/default-k8s-diff-port-434445 for IP: 192.168.50.236
	I0116 03:44:19.053383  507889 certs.go:190] acquiring lock for shared ca certs: {Name:mkab5aeea3e0ed69f3592dc8f418e07c6075b130 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:44:19.053580  507889 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17965-468241/.minikube/ca.key
	I0116 03:44:19.053655  507889 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17965-468241/.minikube/proxy-client-ca.key
	I0116 03:44:19.053773  507889 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/default-k8s-diff-port-434445/client.key
	I0116 03:44:19.053920  507889 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/default-k8s-diff-port-434445/apiserver.key.4e4dee8d
	I0116 03:44:19.053994  507889 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/default-k8s-diff-port-434445/proxy-client.key
	I0116 03:44:19.054154  507889 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/475478.pem (1338 bytes)
	W0116 03:44:19.054198  507889 certs.go:433] ignoring /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/475478_empty.pem, impossibly tiny 0 bytes
	I0116 03:44:19.054215  507889 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca-key.pem (1679 bytes)
	I0116 03:44:19.054249  507889 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem (1078 bytes)
	I0116 03:44:19.054286  507889 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/cert.pem (1123 bytes)
	I0116 03:44:19.054318  507889 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/key.pem (1679 bytes)
	I0116 03:44:19.054373  507889 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17965-468241/.minikube/files/etc/ssl/certs/4754782.pem (1708 bytes)
	I0116 03:44:19.055259  507889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/default-k8s-diff-port-434445/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0116 03:44:19.086636  507889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/default-k8s-diff-port-434445/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0116 03:44:19.117759  507889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/default-k8s-diff-port-434445/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0116 03:44:19.144530  507889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/default-k8s-diff-port-434445/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0116 03:44:19.170423  507889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0116 03:44:19.198224  507889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0116 03:44:19.223514  507889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0116 03:44:19.250858  507889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0116 03:44:19.276922  507889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/files/etc/ssl/certs/4754782.pem --> /usr/share/ca-certificates/4754782.pem (1708 bytes)
	I0116 03:44:19.302621  507889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0116 03:44:19.330021  507889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/certs/475478.pem --> /usr/share/ca-certificates/475478.pem (1338 bytes)
	I0116 03:44:19.358108  507889 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0116 03:44:19.379126  507889 ssh_runner.go:195] Run: openssl version
	I0116 03:44:19.386675  507889 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0116 03:44:19.398759  507889 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0116 03:44:19.404201  507889 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 16 02:35 /usr/share/ca-certificates/minikubeCA.pem
	I0116 03:44:19.404283  507889 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0116 03:44:19.411067  507889 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0116 03:44:19.422608  507889 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/475478.pem && ln -fs /usr/share/ca-certificates/475478.pem /etc/ssl/certs/475478.pem"
	I0116 03:44:19.434422  507889 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/475478.pem
	I0116 03:44:19.440018  507889 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 16 02:44 /usr/share/ca-certificates/475478.pem
	I0116 03:44:19.440103  507889 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/475478.pem
	I0116 03:44:19.446469  507889 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/475478.pem /etc/ssl/certs/51391683.0"
	I0116 03:44:19.460130  507889 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4754782.pem && ln -fs /usr/share/ca-certificates/4754782.pem /etc/ssl/certs/4754782.pem"
	I0116 03:44:19.473886  507889 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4754782.pem
	I0116 03:44:19.478781  507889 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 16 02:44 /usr/share/ca-certificates/4754782.pem
	I0116 03:44:19.478858  507889 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4754782.pem
	I0116 03:44:19.484826  507889 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4754782.pem /etc/ssl/certs/3ec20f2e.0"
	I0116 03:44:19.495710  507889 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0116 03:44:19.500842  507889 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0116 03:44:19.507646  507889 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0116 03:44:19.515247  507889 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0116 03:44:19.523964  507889 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0116 03:44:19.532379  507889 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0116 03:44:19.540067  507889 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
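The `-checkend 86400` probes above ask openssl whether each control-plane certificate will still be valid 24 hours from now (exit status 0 means it will not expire within that window), and the preceding `openssl x509 -hash` / symlink steps wire the copied CAs into /etc/ssl/certs by subject hash. A minimal Go sketch of the same expiry probe, assuming openssl is on PATH and reusing the certificate paths from the log, could look like this (illustrative only, not minikube's implementation):

// Illustrative sketch, not minikube source: drive an
// "openssl x509 -checkend 86400" validity probe from Go.
package main

import (
	"fmt"
	"os/exec"
)

// certValidFor24h returns true when openssl reports the certificate
// will NOT expire within the next 86400 seconds (24 hours).
func certValidFor24h(path string) bool {
	cmd := exec.Command("openssl", "x509", "-noout", "-in", path, "-checkend", "86400")
	return cmd.Run() == nil
}

func main() {
	for _, p := range []string{
		"/var/lib/minikube/certs/apiserver-etcd-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
	} {
		fmt.Printf("%s valid for 24h: %v\n", p, certValidFor24h(p))
	}
}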
	I0116 03:44:19.548614  507889 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-434445 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-434445 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.50.236 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 03:44:19.548812  507889 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0116 03:44:19.548900  507889 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0116 03:44:19.595803  507889 cri.go:89] found id: ""
	I0116 03:44:19.595910  507889 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0116 03:44:19.610615  507889 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0116 03:44:19.610647  507889 kubeadm.go:636] restartCluster start
	I0116 03:44:19.610726  507889 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0116 03:44:19.624175  507889 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:19.625683  507889 kubeconfig.go:92] found "default-k8s-diff-port-434445" server: "https://192.168.50.236:8444"
	I0116 03:44:19.628685  507889 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0116 03:44:19.640309  507889 api_server.go:166] Checking apiserver status ...
	I0116 03:44:19.640390  507889 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:19.653938  507889 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:20.141193  507889 api_server.go:166] Checking apiserver status ...
	I0116 03:44:20.141285  507889 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:20.154331  507889 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:20.640562  507889 api_server.go:166] Checking apiserver status ...
	I0116 03:44:20.640691  507889 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:20.657774  507889 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:21.141268  507889 api_server.go:166] Checking apiserver status ...
	I0116 03:44:21.141371  507889 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:21.158792  507889 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
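The repeated "Checking apiserver status ..." entries above (and the ones that follow) poll roughly every 500ms for a kube-apiserver process via pgrep until a deadline expires, after which the restart path falls back to reconfiguring the cluster. A minimal sketch of that polling pattern in Go, assuming the same pgrep probe and an arbitrary 10s deadline (this is not minikube's retry code):

// Illustrative sketch: poll for the apiserver process until a deadline.
package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

func waitForAPIServer(ctx context.Context) error {
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	for {
		// Same probe as in the log: pgrep for the apiserver process.
		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			return nil
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("apiserver error: %w", ctx.Err())
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()
	fmt.Println(waitForAPIServer(ctx))
}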
	I0116 03:44:21.141315  507510 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0116 03:44:21.168450  507510 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0116 03:44:21.206907  507510 system_pods.go:43] waiting for kube-system pods to appear ...
	I0116 03:44:21.222998  507510 system_pods.go:59] 7 kube-system pods found
	I0116 03:44:21.223072  507510 system_pods.go:61] "coredns-5644d7b6d9-7q4wc" [003ba660-e3c5-4a98-be67-75e43dc32b37] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0116 03:44:21.223084  507510 system_pods.go:61] "etcd-old-k8s-version-696770" [b029f446-15b1-4720-af3a-b651b778fc0d] Running
	I0116 03:44:21.223094  507510 system_pods.go:61] "kube-apiserver-old-k8s-version-696770" [a9597e33-db8c-48e5-b119-d6d97d8d8e3f] Running
	I0116 03:44:21.223114  507510 system_pods.go:61] "kube-controller-manager-old-k8s-version-696770" [901fd518-04a1-4de0-baa2-08c7d57a587d] Running
	I0116 03:44:21.223123  507510 system_pods.go:61] "kube-proxy-9pfdj" [ac00ed93-abe8-4f53-8e63-fa63589fbf5c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0116 03:44:21.223134  507510 system_pods.go:61] "kube-scheduler-old-k8s-version-696770" [a8d74e76-6c22-4d82-b954-4025dff18279] Running
	I0116 03:44:21.223146  507510 system_pods.go:61] "storage-provisioner" [b04dacf9-8137-4f22-ae36-147d08fd9b60] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0116 03:44:21.223158  507510 system_pods.go:74] duration metric: took 16.220665ms to wait for pod list to return data ...
	I0116 03:44:21.223173  507510 node_conditions.go:102] verifying NodePressure condition ...
	I0116 03:44:21.228670  507510 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 03:44:21.228715  507510 node_conditions.go:123] node cpu capacity is 2
	I0116 03:44:21.228734  507510 node_conditions.go:105] duration metric: took 5.552228ms to run NodePressure ...
	I0116 03:44:21.228760  507510 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:44:21.576565  507510 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0116 03:44:21.581017  507510 retry.go:31] will retry after 323.975879ms: kubelet not initialised
	I0116 03:44:21.914790  507510 retry.go:31] will retry after 258.393503ms: kubelet not initialised
	I0116 03:44:22.180592  507510 retry.go:31] will retry after 582.791922ms: kubelet not initialised
	I0116 03:44:22.769880  507510 retry.go:31] will retry after 961.779974ms: kubelet not initialised
	I0116 03:44:23.739015  507510 retry.go:31] will retry after 686.353156ms: kubelet not initialised
	I0116 03:44:24.431951  507510 retry.go:31] will retry after 2.073440094s: kubelet not initialised
	I0116 03:44:21.976301  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:44:23.977710  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
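The pod_ready lines above keep reporting that the metrics-server pod's Ready condition is still "False". A minimal client-go version of that readiness check, assuming a kubeconfig path and the conventional k8s-app=metrics-server label (both assumptions; this is not the test harness code):

// Illustrative sketch: check the Ready condition of metrics-server pods.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig path for illustration only.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(),
		metav1.ListOptions{LabelSelector: "k8s-app=metrics-server"})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		ready := false
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				ready = true
			}
		}
		fmt.Printf("pod %q Ready=%v\n", p.Name, ready)
	}
}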
	I0116 03:44:22.305212  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:22.305701  507257 main.go:141] libmachine: (embed-certs-615980) DBG | unable to find current IP address of domain embed-certs-615980 in network mk-embed-certs-615980
	I0116 03:44:22.305732  507257 main.go:141] libmachine: (embed-certs-615980) DBG | I0116 03:44:22.305662  508731 retry.go:31] will retry after 3.080436267s: waiting for machine to come up
	I0116 03:44:25.389414  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:25.389850  507257 main.go:141] libmachine: (embed-certs-615980) DBG | unable to find current IP address of domain embed-certs-615980 in network mk-embed-certs-615980
	I0116 03:44:25.389875  507257 main.go:141] libmachine: (embed-certs-615980) DBG | I0116 03:44:25.389828  508731 retry.go:31] will retry after 2.730339967s: waiting for machine to come up
	I0116 03:44:21.640823  507889 api_server.go:166] Checking apiserver status ...
	I0116 03:44:21.641083  507889 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:21.656391  507889 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:22.141134  507889 api_server.go:166] Checking apiserver status ...
	I0116 03:44:22.141242  507889 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:22.157848  507889 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:22.641247  507889 api_server.go:166] Checking apiserver status ...
	I0116 03:44:22.641371  507889 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:22.654425  507889 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:23.140719  507889 api_server.go:166] Checking apiserver status ...
	I0116 03:44:23.140827  507889 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:23.153823  507889 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:23.641193  507889 api_server.go:166] Checking apiserver status ...
	I0116 03:44:23.641298  507889 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:23.654061  507889 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:24.141196  507889 api_server.go:166] Checking apiserver status ...
	I0116 03:44:24.141290  507889 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:24.161415  507889 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:24.640416  507889 api_server.go:166] Checking apiserver status ...
	I0116 03:44:24.640514  507889 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:24.670258  507889 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:25.140571  507889 api_server.go:166] Checking apiserver status ...
	I0116 03:44:25.140673  507889 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:25.157823  507889 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:25.641188  507889 api_server.go:166] Checking apiserver status ...
	I0116 03:44:25.641284  507889 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:25.655917  507889 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:26.141241  507889 api_server.go:166] Checking apiserver status ...
	I0116 03:44:26.141357  507889 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:26.157447  507889 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:26.511961  507510 retry.go:31] will retry after 4.006598367s: kubelet not initialised
	I0116 03:44:26.473653  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:44:28.474914  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:44:28.122340  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:28.122704  507257 main.go:141] libmachine: (embed-certs-615980) DBG | unable to find current IP address of domain embed-certs-615980 in network mk-embed-certs-615980
	I0116 03:44:28.122735  507257 main.go:141] libmachine: (embed-certs-615980) DBG | I0116 03:44:28.122676  508731 retry.go:31] will retry after 4.170800657s: waiting for machine to come up
	I0116 03:44:26.641408  507889 api_server.go:166] Checking apiserver status ...
	I0116 03:44:26.641510  507889 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:26.654505  507889 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:27.141033  507889 api_server.go:166] Checking apiserver status ...
	I0116 03:44:27.141129  507889 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:27.154208  507889 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:27.640701  507889 api_server.go:166] Checking apiserver status ...
	I0116 03:44:27.640785  507889 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:27.653964  507889 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:28.141330  507889 api_server.go:166] Checking apiserver status ...
	I0116 03:44:28.141406  507889 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:28.153419  507889 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:28.640986  507889 api_server.go:166] Checking apiserver status ...
	I0116 03:44:28.641076  507889 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:28.654357  507889 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:29.141250  507889 api_server.go:166] Checking apiserver status ...
	I0116 03:44:29.141335  507889 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:29.154899  507889 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:29.640619  507889 api_server.go:166] Checking apiserver status ...
	I0116 03:44:29.640717  507889 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:29.654653  507889 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:29.654692  507889 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0116 03:44:29.654701  507889 kubeadm.go:1135] stopping kube-system containers ...
	I0116 03:44:29.654713  507889 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0116 03:44:29.654769  507889 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0116 03:44:29.697617  507889 cri.go:89] found id: ""
	I0116 03:44:29.697719  507889 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0116 03:44:29.719069  507889 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0116 03:44:29.735791  507889 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0116 03:44:29.735872  507889 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0116 03:44:29.748788  507889 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0116 03:44:29.748823  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:44:29.874894  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:44:30.787232  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:44:31.009234  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:44:31.136220  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:44:31.215330  507889 api_server.go:52] waiting for apiserver process to appear ...
	I0116 03:44:31.215416  507889 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
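The restart path above re-runs individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the regenerated kubeadm.yaml rather than doing a full `kubeadm init`. A rough Go sketch of running that same phase sequence, using the binary and config paths shown in the log but omitting the sudo/env PATH handling (illustrative only, not minikube source):

// Illustrative sketch: replay the kubeadm init phases from the log in order.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, p := range phases {
		args := append(p, "--config", "/var/tmp/minikube/kubeadm.yaml")
		cmd := exec.Command("/var/lib/minikube/binaries/v1.28.4/kubeadm", args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Fprintf(os.Stderr, "phase %v failed: %v\n", p, err)
			os.Exit(1)
		}
	}
}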
	I0116 03:44:30.526372  507510 retry.go:31] will retry after 4.363756335s: kubelet not initialised
	I0116 03:44:32.295936  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:32.296442  507257 main.go:141] libmachine: (embed-certs-615980) Found IP for machine: 192.168.72.159
	I0116 03:44:32.296483  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has current primary IP address 192.168.72.159 and MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:32.296499  507257 main.go:141] libmachine: (embed-certs-615980) Reserving static IP address...
	I0116 03:44:32.297078  507257 main.go:141] libmachine: (embed-certs-615980) DBG | found host DHCP lease matching {name: "embed-certs-615980", mac: "52:54:00:d4:a6:40", ip: "192.168.72.159"} in network mk-embed-certs-615980: {Iface:virbr4 ExpiryTime:2024-01-16 04:44:24 +0000 UTC Type:0 Mac:52:54:00:d4:a6:40 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-615980 Clientid:01:52:54:00:d4:a6:40}
	I0116 03:44:32.297121  507257 main.go:141] libmachine: (embed-certs-615980) Reserved static IP address: 192.168.72.159
	I0116 03:44:32.297140  507257 main.go:141] libmachine: (embed-certs-615980) DBG | skip adding static IP to network mk-embed-certs-615980 - found existing host DHCP lease matching {name: "embed-certs-615980", mac: "52:54:00:d4:a6:40", ip: "192.168.72.159"}
	I0116 03:44:32.297160  507257 main.go:141] libmachine: (embed-certs-615980) DBG | Getting to WaitForSSH function...
	I0116 03:44:32.297179  507257 main.go:141] libmachine: (embed-certs-615980) Waiting for SSH to be available...
	I0116 03:44:32.299440  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:32.299839  507257 main.go:141] libmachine: (embed-certs-615980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:a6:40", ip: ""} in network mk-embed-certs-615980: {Iface:virbr4 ExpiryTime:2024-01-16 04:44:24 +0000 UTC Type:0 Mac:52:54:00:d4:a6:40 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-615980 Clientid:01:52:54:00:d4:a6:40}
	I0116 03:44:32.299870  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined IP address 192.168.72.159 and MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:32.300064  507257 main.go:141] libmachine: (embed-certs-615980) DBG | Using SSH client type: external
	I0116 03:44:32.300098  507257 main.go:141] libmachine: (embed-certs-615980) DBG | Using SSH private key: /home/jenkins/minikube-integration/17965-468241/.minikube/machines/embed-certs-615980/id_rsa (-rw-------)
	I0116 03:44:32.300133  507257 main.go:141] libmachine: (embed-certs-615980) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.159 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17965-468241/.minikube/machines/embed-certs-615980/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0116 03:44:32.300153  507257 main.go:141] libmachine: (embed-certs-615980) DBG | About to run SSH command:
	I0116 03:44:32.300172  507257 main.go:141] libmachine: (embed-certs-615980) DBG | exit 0
	I0116 03:44:32.396718  507257 main.go:141] libmachine: (embed-certs-615980) DBG | SSH cmd err, output: <nil>: 
	I0116 03:44:32.397111  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetConfigRaw
	I0116 03:44:32.397901  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetIP
	I0116 03:44:32.400997  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:32.401502  507257 main.go:141] libmachine: (embed-certs-615980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:a6:40", ip: ""} in network mk-embed-certs-615980: {Iface:virbr4 ExpiryTime:2024-01-16 04:44:24 +0000 UTC Type:0 Mac:52:54:00:d4:a6:40 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-615980 Clientid:01:52:54:00:d4:a6:40}
	I0116 03:44:32.401540  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined IP address 192.168.72.159 and MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:32.402036  507257 profile.go:148] Saving config to /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/embed-certs-615980/config.json ...
	I0116 03:44:32.402259  507257 machine.go:88] provisioning docker machine ...
	I0116 03:44:32.402281  507257 main.go:141] libmachine: (embed-certs-615980) Calling .DriverName
	I0116 03:44:32.402539  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetMachineName
	I0116 03:44:32.402759  507257 buildroot.go:166] provisioning hostname "embed-certs-615980"
	I0116 03:44:32.402786  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetMachineName
	I0116 03:44:32.402966  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHHostname
	I0116 03:44:32.405935  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:32.406344  507257 main.go:141] libmachine: (embed-certs-615980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:a6:40", ip: ""} in network mk-embed-certs-615980: {Iface:virbr4 ExpiryTime:2024-01-16 04:44:24 +0000 UTC Type:0 Mac:52:54:00:d4:a6:40 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-615980 Clientid:01:52:54:00:d4:a6:40}
	I0116 03:44:32.406384  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined IP address 192.168.72.159 and MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:32.406585  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHPort
	I0116 03:44:32.406821  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHKeyPath
	I0116 03:44:32.407054  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHKeyPath
	I0116 03:44:32.407219  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHUsername
	I0116 03:44:32.407399  507257 main.go:141] libmachine: Using SSH client type: native
	I0116 03:44:32.407754  507257 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.72.159 22 <nil> <nil>}
	I0116 03:44:32.407768  507257 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-615980 && echo "embed-certs-615980" | sudo tee /etc/hostname
	I0116 03:44:32.561584  507257 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-615980
	
	I0116 03:44:32.561618  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHHostname
	I0116 03:44:32.564566  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:32.565004  507257 main.go:141] libmachine: (embed-certs-615980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:a6:40", ip: ""} in network mk-embed-certs-615980: {Iface:virbr4 ExpiryTime:2024-01-16 04:44:24 +0000 UTC Type:0 Mac:52:54:00:d4:a6:40 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-615980 Clientid:01:52:54:00:d4:a6:40}
	I0116 03:44:32.565033  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined IP address 192.168.72.159 and MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:32.565232  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHPort
	I0116 03:44:32.565481  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHKeyPath
	I0116 03:44:32.565672  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHKeyPath
	I0116 03:44:32.565843  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHUsername
	I0116 03:44:32.566045  507257 main.go:141] libmachine: Using SSH client type: native
	I0116 03:44:32.566521  507257 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.72.159 22 <nil> <nil>}
	I0116 03:44:32.566549  507257 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-615980' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-615980/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-615980' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0116 03:44:32.718945  507257 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0116 03:44:32.719005  507257 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17965-468241/.minikube CaCertPath:/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17965-468241/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17965-468241/.minikube}
	I0116 03:44:32.719037  507257 buildroot.go:174] setting up certificates
	I0116 03:44:32.719051  507257 provision.go:83] configureAuth start
	I0116 03:44:32.719081  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetMachineName
	I0116 03:44:32.719397  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetIP
	I0116 03:44:32.722474  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:32.722938  507257 main.go:141] libmachine: (embed-certs-615980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:a6:40", ip: ""} in network mk-embed-certs-615980: {Iface:virbr4 ExpiryTime:2024-01-16 04:44:24 +0000 UTC Type:0 Mac:52:54:00:d4:a6:40 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-615980 Clientid:01:52:54:00:d4:a6:40}
	I0116 03:44:32.722972  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined IP address 192.168.72.159 and MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:32.723136  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHHostname
	I0116 03:44:32.725821  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:32.726246  507257 main.go:141] libmachine: (embed-certs-615980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:a6:40", ip: ""} in network mk-embed-certs-615980: {Iface:virbr4 ExpiryTime:2024-01-16 04:44:24 +0000 UTC Type:0 Mac:52:54:00:d4:a6:40 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-615980 Clientid:01:52:54:00:d4:a6:40}
	I0116 03:44:32.726277  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined IP address 192.168.72.159 and MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:32.726448  507257 provision.go:138] copyHostCerts
	I0116 03:44:32.726535  507257 exec_runner.go:144] found /home/jenkins/minikube-integration/17965-468241/.minikube/ca.pem, removing ...
	I0116 03:44:32.726622  507257 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17965-468241/.minikube/ca.pem
	I0116 03:44:32.726769  507257 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17965-468241/.minikube/ca.pem (1078 bytes)
	I0116 03:44:32.726971  507257 exec_runner.go:144] found /home/jenkins/minikube-integration/17965-468241/.minikube/cert.pem, removing ...
	I0116 03:44:32.726983  507257 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17965-468241/.minikube/cert.pem
	I0116 03:44:32.727015  507257 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17965-468241/.minikube/cert.pem (1123 bytes)
	I0116 03:44:32.727099  507257 exec_runner.go:144] found /home/jenkins/minikube-integration/17965-468241/.minikube/key.pem, removing ...
	I0116 03:44:32.727116  507257 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17965-468241/.minikube/key.pem
	I0116 03:44:32.727144  507257 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17965-468241/.minikube/key.pem (1679 bytes)
	I0116 03:44:32.727212  507257 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17965-468241/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca-key.pem org=jenkins.embed-certs-615980 san=[192.168.72.159 192.168.72.159 localhost 127.0.0.1 minikube embed-certs-615980]
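The server cert generated above carries the SANs listed in the log (the node IP, loopback, "localhost", "minikube" and the machine name), so the Docker/SSH endpoints can be reached under any of those identities. A minimal Go sketch showing where those SAN values go in crypto/x509; note it is self-signed for brevity, whereas the real cert is signed by the minikube CA (illustrative only, not minikube's implementation):

// Illustrative sketch: build a server certificate carrying the SANs from the log.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-615980"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs taken from the provision.go log line above.
		IPAddresses: []net.IP{net.ParseIP("192.168.72.159"), net.ParseIP("127.0.0.1")},
		DNSNames:    []string{"localhost", "minikube", "embed-certs-615980"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}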
	I0116 03:44:32.921694  507257 provision.go:172] copyRemoteCerts
	I0116 03:44:32.921764  507257 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0116 03:44:32.921798  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHHostname
	I0116 03:44:32.924951  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:32.925329  507257 main.go:141] libmachine: (embed-certs-615980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:a6:40", ip: ""} in network mk-embed-certs-615980: {Iface:virbr4 ExpiryTime:2024-01-16 04:44:24 +0000 UTC Type:0 Mac:52:54:00:d4:a6:40 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-615980 Clientid:01:52:54:00:d4:a6:40}
	I0116 03:44:32.925362  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined IP address 192.168.72.159 and MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:32.925534  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHPort
	I0116 03:44:32.925855  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHKeyPath
	I0116 03:44:32.926135  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHUsername
	I0116 03:44:32.926390  507257 sshutil.go:53] new ssh client: &{IP:192.168.72.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/embed-certs-615980/id_rsa Username:docker}
	I0116 03:44:33.025856  507257 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0116 03:44:33.055403  507257 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0116 03:44:33.087908  507257 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0116 03:44:33.116847  507257 provision.go:86] duration metric: configureAuth took 397.777297ms
	I0116 03:44:33.116886  507257 buildroot.go:189] setting minikube options for container-runtime
	I0116 03:44:33.117136  507257 config.go:182] Loaded profile config "embed-certs-615980": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 03:44:33.117267  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHHostname
	I0116 03:44:33.120452  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:33.120915  507257 main.go:141] libmachine: (embed-certs-615980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:a6:40", ip: ""} in network mk-embed-certs-615980: {Iface:virbr4 ExpiryTime:2024-01-16 04:44:24 +0000 UTC Type:0 Mac:52:54:00:d4:a6:40 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-615980 Clientid:01:52:54:00:d4:a6:40}
	I0116 03:44:33.120949  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined IP address 192.168.72.159 and MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:33.121189  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHPort
	I0116 03:44:33.121442  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHKeyPath
	I0116 03:44:33.121636  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHKeyPath
	I0116 03:44:33.121778  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHUsername
	I0116 03:44:33.121966  507257 main.go:141] libmachine: Using SSH client type: native
	I0116 03:44:33.122333  507257 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.72.159 22 <nil> <nil>}
	I0116 03:44:33.122359  507257 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0116 03:44:33.486009  507257 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0116 03:44:33.486147  507257 machine.go:91] provisioned docker machine in 1.083869863s
	I0116 03:44:33.486202  507257 start.go:300] post-start starting for "embed-certs-615980" (driver="kvm2")
	I0116 03:44:33.486239  507257 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0116 03:44:33.486282  507257 main.go:141] libmachine: (embed-certs-615980) Calling .DriverName
	I0116 03:44:33.486719  507257 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0116 03:44:33.486755  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHHostname
	I0116 03:44:33.490226  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:33.490676  507257 main.go:141] libmachine: (embed-certs-615980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:a6:40", ip: ""} in network mk-embed-certs-615980: {Iface:virbr4 ExpiryTime:2024-01-16 04:44:24 +0000 UTC Type:0 Mac:52:54:00:d4:a6:40 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-615980 Clientid:01:52:54:00:d4:a6:40}
	I0116 03:44:33.490743  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined IP address 192.168.72.159 and MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:33.490863  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHPort
	I0116 03:44:33.491117  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHKeyPath
	I0116 03:44:33.491299  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHUsername
	I0116 03:44:33.491478  507257 sshutil.go:53] new ssh client: &{IP:192.168.72.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/embed-certs-615980/id_rsa Username:docker}
	I0116 03:44:33.590039  507257 ssh_runner.go:195] Run: cat /etc/os-release
	I0116 03:44:33.596095  507257 info.go:137] Remote host: Buildroot 2021.02.12
	I0116 03:44:33.596124  507257 filesync.go:126] Scanning /home/jenkins/minikube-integration/17965-468241/.minikube/addons for local assets ...
	I0116 03:44:33.596206  507257 filesync.go:126] Scanning /home/jenkins/minikube-integration/17965-468241/.minikube/files for local assets ...
	I0116 03:44:33.596295  507257 filesync.go:149] local asset: /home/jenkins/minikube-integration/17965-468241/.minikube/files/etc/ssl/certs/4754782.pem -> 4754782.pem in /etc/ssl/certs
	I0116 03:44:33.596437  507257 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0116 03:44:33.609260  507257 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/files/etc/ssl/certs/4754782.pem --> /etc/ssl/certs/4754782.pem (1708 bytes)
	I0116 03:44:33.642578  507257 start.go:303] post-start completed in 156.336318ms
	I0116 03:44:33.642651  507257 fix.go:56] fixHost completed within 22.644969219s
	I0116 03:44:33.642703  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHHostname
	I0116 03:44:33.645616  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:33.645988  507257 main.go:141] libmachine: (embed-certs-615980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:a6:40", ip: ""} in network mk-embed-certs-615980: {Iface:virbr4 ExpiryTime:2024-01-16 04:44:24 +0000 UTC Type:0 Mac:52:54:00:d4:a6:40 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-615980 Clientid:01:52:54:00:d4:a6:40}
	I0116 03:44:33.646017  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined IP address 192.168.72.159 and MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:33.646277  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHPort
	I0116 03:44:33.646514  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHKeyPath
	I0116 03:44:33.646720  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHKeyPath
	I0116 03:44:33.646910  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHUsername
	I0116 03:44:33.647179  507257 main.go:141] libmachine: Using SSH client type: native
	I0116 03:44:33.647655  507257 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.72.159 22 <nil> <nil>}
	I0116 03:44:33.647682  507257 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0116 03:44:33.781805  507257 main.go:141] libmachine: SSH cmd err, output: <nil>: 1705376673.706960834
	
	I0116 03:44:33.781839  507257 fix.go:206] guest clock: 1705376673.706960834
	I0116 03:44:33.781850  507257 fix.go:219] Guest: 2024-01-16 03:44:33.706960834 +0000 UTC Remote: 2024-01-16 03:44:33.642657737 +0000 UTC m=+367.429386706 (delta=64.303097ms)
	I0116 03:44:33.781879  507257 fix.go:190] guest clock delta is within tolerance: 64.303097ms
	I0116 03:44:33.781890  507257 start.go:83] releasing machines lock for "embed-certs-615980", held for 22.784266536s
	I0116 03:44:33.781917  507257 main.go:141] libmachine: (embed-certs-615980) Calling .DriverName
	I0116 03:44:33.782225  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetIP
	I0116 03:44:33.785113  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:33.785495  507257 main.go:141] libmachine: (embed-certs-615980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:a6:40", ip: ""} in network mk-embed-certs-615980: {Iface:virbr4 ExpiryTime:2024-01-16 04:44:24 +0000 UTC Type:0 Mac:52:54:00:d4:a6:40 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-615980 Clientid:01:52:54:00:d4:a6:40}
	I0116 03:44:33.785530  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined IP address 192.168.72.159 and MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:33.785718  507257 main.go:141] libmachine: (embed-certs-615980) Calling .DriverName
	I0116 03:44:33.786427  507257 main.go:141] libmachine: (embed-certs-615980) Calling .DriverName
	I0116 03:44:33.786655  507257 main.go:141] libmachine: (embed-certs-615980) Calling .DriverName
	I0116 03:44:33.786751  507257 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0116 03:44:33.786799  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHHostname
	I0116 03:44:33.786938  507257 ssh_runner.go:195] Run: cat /version.json
	I0116 03:44:33.786967  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHHostname
	I0116 03:44:33.790084  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:33.790288  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:33.790454  507257 main.go:141] libmachine: (embed-certs-615980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:a6:40", ip: ""} in network mk-embed-certs-615980: {Iface:virbr4 ExpiryTime:2024-01-16 04:44:24 +0000 UTC Type:0 Mac:52:54:00:d4:a6:40 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-615980 Clientid:01:52:54:00:d4:a6:40}
	I0116 03:44:33.790485  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined IP address 192.168.72.159 and MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:33.790655  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHPort
	I0116 03:44:33.790787  507257 main.go:141] libmachine: (embed-certs-615980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:a6:40", ip: ""} in network mk-embed-certs-615980: {Iface:virbr4 ExpiryTime:2024-01-16 04:44:24 +0000 UTC Type:0 Mac:52:54:00:d4:a6:40 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-615980 Clientid:01:52:54:00:d4:a6:40}
	I0116 03:44:33.790831  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined IP address 192.168.72.159 and MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:33.790899  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHKeyPath
	I0116 03:44:33.791007  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHPort
	I0116 03:44:33.791091  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHUsername
	I0116 03:44:33.791193  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHKeyPath
	I0116 03:44:33.791269  507257 sshutil.go:53] new ssh client: &{IP:192.168.72.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/embed-certs-615980/id_rsa Username:docker}
	I0116 03:44:33.791363  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHUsername
	I0116 03:44:33.791515  507257 sshutil.go:53] new ssh client: &{IP:192.168.72.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/embed-certs-615980/id_rsa Username:docker}
	I0116 03:44:33.907036  507257 ssh_runner.go:195] Run: systemctl --version
	I0116 03:44:33.913776  507257 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0116 03:44:34.062888  507257 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0116 03:44:34.070435  507257 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0116 03:44:34.070539  507257 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0116 03:44:34.091957  507257 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0116 03:44:34.091993  507257 start.go:475] detecting cgroup driver to use...
	I0116 03:44:34.092099  507257 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0116 03:44:34.108007  507257 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0116 03:44:34.123223  507257 docker.go:217] disabling cri-docker service (if available) ...
	I0116 03:44:34.123314  507257 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0116 03:44:34.141242  507257 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0116 03:44:34.157053  507257 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0116 03:44:34.274186  507257 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0116 03:44:34.427694  507257 docker.go:233] disabling docker service ...
	I0116 03:44:34.427785  507257 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0116 03:44:34.442789  507257 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0116 03:44:34.459761  507257 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0116 03:44:34.592453  507257 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0116 03:44:34.715991  507257 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0116 03:44:34.732175  507257 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0116 03:44:34.751885  507257 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0116 03:44:34.751989  507257 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 03:44:34.763769  507257 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0116 03:44:34.763853  507257 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 03:44:34.774444  507257 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 03:44:34.784975  507257 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 03:44:34.797634  507257 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0116 03:44:34.810962  507257 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0116 03:44:34.822224  507257 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0116 03:44:34.822314  507257 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0116 03:44:34.840500  507257 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0116 03:44:34.852285  507257 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0116 03:44:34.970828  507257 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0116 03:44:35.163097  507257 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0116 03:44:35.163242  507257 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0116 03:44:35.169041  507257 start.go:543] Will wait 60s for crictl version
	I0116 03:44:35.169150  507257 ssh_runner.go:195] Run: which crictl
	I0116 03:44:35.173367  507257 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0116 03:44:35.224951  507257 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0116 03:44:35.225043  507257 ssh_runner.go:195] Run: crio --version
	I0116 03:44:35.275230  507257 ssh_runner.go:195] Run: crio --version
	I0116 03:44:35.329852  507257 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
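Note: the sed invocations earlier in this block rewrite the CRI-O drop-in /etc/crio/crio.conf.d/02-crio.conf so that pause_image is pinned to registry.k8s.io/pause:3.9, cgroup_manager is forced to cgroupfs, and a conmon_cgroup = "pod" line is re-added right after it, before crio is restarted. A minimal Go sketch of the same three edits, with the path and values taken from the log above (a local illustration run as root, not minikube's implementation):

    package main

    import (
        "os"
        "regexp"
    )

    func main() {
        const conf = "/etc/crio/crio.conf.d/02-crio.conf" // drop-in path from the log above

        data, err := os.ReadFile(conf)
        if err != nil {
            panic(err)
        }
        s := string(data)

        // pause_image = "registry.k8s.io/pause:3.9"
        s = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAllString(s, `pause_image = "registry.k8s.io/pause:3.9"`)
        // cgroup_manager = "cgroupfs", then drop any stale conmon_cgroup line
        s = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAllString(s, `cgroup_manager = "cgroupfs"`)
        s = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(s, "")
        // re-add conmon_cgroup = "pod" right after the cgroup_manager line
        s = regexp.MustCompile(`(?m)^cgroup_manager = .*$`).
            ReplaceAllString(s, "$0\nconmon_cgroup = \"pod\"")

        if err := os.WriteFile(conf, []byte(s), 0o644); err != nil {
            panic(err)
        }
    }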
	I0116 03:44:30.981714  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:44:33.476735  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:44:35.480715  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:44:35.331327  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetIP
	I0116 03:44:35.334148  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:35.334618  507257 main.go:141] libmachine: (embed-certs-615980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:a6:40", ip: ""} in network mk-embed-certs-615980: {Iface:virbr4 ExpiryTime:2024-01-16 04:44:24 +0000 UTC Type:0 Mac:52:54:00:d4:a6:40 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-615980 Clientid:01:52:54:00:d4:a6:40}
	I0116 03:44:35.334674  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined IP address 192.168.72.159 and MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:35.335166  507257 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0116 03:44:35.341389  507257 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
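Note: the bash one-liner above refreshes the host.minikube.internal record in /etc/hosts: it filters out any existing line ending in that name, appends the new mapping, writes the result to a temp file, and copies it back with sudo. A minimal Go sketch of the same record refresh; the IP and hostname are taken from the log above, and writing /etc/hosts requires root (illustration only):

    package main

    import (
        "os"
        "strings"
    )

    // refreshHostRecord rewrites an /etc/hosts-style file so that exactly one
    // line maps ip to name, mirroring the grep -v / echo / cp one-liner above.
    func refreshHostRecord(path, ip, name string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(string(data), "\n") {
            if strings.HasSuffix(line, "\t"+name) {
                continue // drop any stale record for this name
            }
            kept = append(kept, line)
        }
        kept = append(kept, ip+"\t"+name)
        return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
    }

    func main() {
        if err := refreshHostRecord("/etc/hosts", "192.168.72.1", "host.minikube.internal"); err != nil {
            panic(err)
        }
    }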
	I0116 03:44:35.358757  507257 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0116 03:44:35.358866  507257 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 03:44:35.407869  507257 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0116 03:44:35.407983  507257 ssh_runner.go:195] Run: which lz4
	I0116 03:44:35.412533  507257 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0116 03:44:35.417266  507257 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0116 03:44:35.417303  507257 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
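Note: the two lines above show the preload transfer pattern: stat the target path first, and only if it is missing stream the ~458 MB tarball across. A minimal Go sketch of the same check-then-copy pattern on a local filesystem (the real transfer happens over SSH/SCP; paths are taken from the log above, illustration only):

    package main

    import (
        "fmt"
        "io"
        "os"
    )

    // copyIfMissing copies src to dst only when dst does not already exist,
    // mirroring the "stat, then scp" sequence in the log above.
    func copyIfMissing(src, dst string) (int64, error) {
        if _, err := os.Stat(dst); err == nil {
            return 0, nil // already present, nothing to transfer
        }
        in, err := os.Open(src)
        if err != nil {
            return 0, err
        }
        defer in.Close()
        out, err := os.Create(dst)
        if err != nil {
            return 0, err
        }
        defer out.Close()
        return io.Copy(out, in)
    }

    func main() {
        n, err := copyIfMissing(
            "/home/jenkins/minikube-integration/17965-468241/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4",
            "/preloaded.tar.lz4",
        )
        if err != nil {
            panic(err)
        }
        fmt.Printf("copied %d bytes\n", n)
    }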
	I0116 03:44:31.715897  507889 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:44:32.215978  507889 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:44:32.716439  507889 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:44:33.215609  507889 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:44:33.715785  507889 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:44:33.738611  507889 api_server.go:72] duration metric: took 2.523281585s to wait for apiserver process to appear ...
	I0116 03:44:33.738642  507889 api_server.go:88] waiting for apiserver healthz status ...
	I0116 03:44:33.738663  507889 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8444/healthz ...
	I0116 03:44:37.601011  507889 api_server.go:279] https://192.168.50.236:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0116 03:44:37.601052  507889 api_server.go:103] status: https://192.168.50.236:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0116 03:44:37.601072  507889 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8444/healthz ...
	I0116 03:44:37.678390  507889 api_server.go:279] https://192.168.50.236:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0116 03:44:37.678428  507889 api_server.go:103] status: https://192.168.50.236:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0116 03:44:37.739725  507889 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8444/healthz ...
	I0116 03:44:37.767384  507889 api_server.go:279] https://192.168.50.236:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0116 03:44:37.767425  507889 api_server.go:103] status: https://192.168.50.236:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0116 03:44:38.238992  507889 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8444/healthz ...
	I0116 03:44:38.253946  507889 api_server.go:279] https://192.168.50.236:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0116 03:44:38.253991  507889 api_server.go:103] status: https://192.168.50.236:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0116 03:44:38.738786  507889 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8444/healthz ...
	I0116 03:44:38.749091  507889 api_server.go:279] https://192.168.50.236:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0116 03:44:38.749135  507889 api_server.go:103] status: https://192.168.50.236:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0116 03:44:39.239814  507889 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8444/healthz ...
	I0116 03:44:39.245859  507889 api_server.go:279] https://192.168.50.236:8444/healthz returned 200:
	ok
	I0116 03:44:39.259198  507889 api_server.go:141] control plane version: v1.28.4
	I0116 03:44:39.259250  507889 api_server.go:131] duration metric: took 5.520598732s to wait for apiserver health ...
	I0116 03:44:39.259265  507889 cni.go:84] Creating CNI manager for ""
	I0116 03:44:39.259277  507889 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 03:44:39.261389  507889 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
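Note: the healthz sequence above is the expected progression for a restarted apiserver: first 403 (the anonymous probe is rejected until RBAC bootstrap roles exist), then 500 while poststarthooks such as rbac/bootstrap-roles are still pending, and finally 200. A minimal Go sketch of such a poll loop against the endpoint from the log above; InsecureSkipVerify stands in for proper CA handling and the loop is an illustration, not minikube's code:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        // Endpoint taken from the log above (default-k8s-diff-port listens on 8444).
        const url = "https://192.168.50.236:8444/healthz"

        client := &http.Client{
            Timeout: 5 * time.Second,
            // The probe is anonymous; skip cert verification for this sketch only.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }

        deadline := time.Now().Add(60 * time.Second)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Printf("healthz ok: %s\n", body)
                    return
                }
                fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("apiserver did not become healthy in time")
    }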
	I0116 03:44:34.897727  507510 retry.go:31] will retry after 6.879493351s: kubelet not initialised
	I0116 03:44:37.975671  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:44:39.979781  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:44:37.524763  507257 crio.go:444] Took 2.112278 seconds to copy over tarball
	I0116 03:44:37.524843  507257 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0116 03:44:40.706515  507257 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.181629969s)
	I0116 03:44:40.706559  507257 crio.go:451] Took 3.181765 seconds to extract the tarball
	I0116 03:44:40.706574  507257 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0116 03:44:40.751207  507257 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 03:44:40.905548  507257 crio.go:496] all images are preloaded for cri-o runtime.
	I0116 03:44:40.905578  507257 cache_images.go:84] Images are preloaded, skipping loading
	I0116 03:44:40.905659  507257 ssh_runner.go:195] Run: crio config
	I0116 03:44:40.965159  507257 cni.go:84] Creating CNI manager for ""
	I0116 03:44:40.965194  507257 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 03:44:40.965220  507257 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0116 03:44:40.965263  507257 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.159 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-615980 NodeName:embed-certs-615980 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.159"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.159 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0116 03:44:40.965474  507257 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.159
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-615980"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.159
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.159"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0116 03:44:40.965578  507257 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-615980 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.159
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-615980 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0116 03:44:40.965634  507257 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0116 03:44:40.976015  507257 binaries.go:44] Found k8s binaries, skipping transfer
	I0116 03:44:40.976153  507257 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0116 03:44:40.986610  507257 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0116 03:44:41.005297  507257 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0116 03:44:41.026383  507257 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I0116 03:44:41.046554  507257 ssh_runner.go:195] Run: grep 192.168.72.159	control-plane.minikube.internal$ /etc/hosts
	I0116 03:44:41.050940  507257 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.159	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 03:44:41.064516  507257 certs.go:56] Setting up /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/embed-certs-615980 for IP: 192.168.72.159
	I0116 03:44:41.064568  507257 certs.go:190] acquiring lock for shared ca certs: {Name:mkab5aeea3e0ed69f3592dc8f418e07c6075b130 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:44:41.064749  507257 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17965-468241/.minikube/ca.key
	I0116 03:44:41.064813  507257 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17965-468241/.minikube/proxy-client-ca.key
	I0116 03:44:41.064917  507257 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/embed-certs-615980/client.key
	I0116 03:44:41.064989  507257 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/embed-certs-615980/apiserver.key.fc98a751
	I0116 03:44:41.065044  507257 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/embed-certs-615980/proxy-client.key
	I0116 03:44:41.065202  507257 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/475478.pem (1338 bytes)
	W0116 03:44:41.065241  507257 certs.go:433] ignoring /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/475478_empty.pem, impossibly tiny 0 bytes
	I0116 03:44:41.065257  507257 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca-key.pem (1679 bytes)
	I0116 03:44:41.065294  507257 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem (1078 bytes)
	I0116 03:44:41.065331  507257 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/cert.pem (1123 bytes)
	I0116 03:44:41.065374  507257 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/key.pem (1679 bytes)
	I0116 03:44:41.065432  507257 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17965-468241/.minikube/files/etc/ssl/certs/4754782.pem (1708 bytes)
	I0116 03:44:41.066147  507257 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/embed-certs-615980/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0116 03:44:41.092714  507257 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/embed-certs-615980/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0116 03:44:41.119109  507257 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/embed-certs-615980/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0116 03:44:41.147059  507257 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/embed-certs-615980/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0116 03:44:41.176357  507257 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0116 03:44:41.202082  507257 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0116 03:44:41.228263  507257 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0116 03:44:41.252892  507257 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0116 03:44:39.263119  507889 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0116 03:44:39.290175  507889 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0116 03:44:39.319009  507889 system_pods.go:43] waiting for kube-system pods to appear ...
	I0116 03:44:39.341195  507889 system_pods.go:59] 9 kube-system pods found
	I0116 03:44:39.341251  507889 system_pods.go:61] "coredns-5dd5756b68-f8shl" [18bddcd6-4305-4856-b590-e16c362768e3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0116 03:44:39.341264  507889 system_pods.go:61] "coredns-5dd5756b68-pmx8n" [9d3c9941-8938-4d4a-b6d8-9b542ca6f1ca] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0116 03:44:39.341280  507889 system_pods.go:61] "etcd-default-k8s-diff-port-434445" [a2327159-d034-4f62-a8f5-14062a41c75d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0116 03:44:39.341293  507889 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-434445" [75c5fc5e-86b1-42cf-bb5c-8a1f8b4e13db] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0116 03:44:39.341310  507889 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-434445" [2568dd4f-124d-4c4f-8651-3b89b2b54983] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0116 03:44:39.341323  507889 system_pods.go:61] "kube-proxy-dcbqg" [eba1f9bf-6aa7-40cd-b57c-745c2d0cc414] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0116 03:44:39.341335  507889 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-434445" [479952a1-8c3d-49b4-bd22-fef988e6b83d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0116 03:44:39.341353  507889 system_pods.go:61] "metrics-server-57f55c9bc5-894n2" [46e4892a-d026-4a9d-88bc-128e92848808] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:44:39.341369  507889 system_pods.go:61] "storage-provisioner" [16fd4585-3d75-40c3-a28d-4134375f4e3d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0116 03:44:39.341391  507889 system_pods.go:74] duration metric: took 22.354405ms to wait for pod list to return data ...
	I0116 03:44:39.341403  507889 node_conditions.go:102] verifying NodePressure condition ...
	I0116 03:44:39.349904  507889 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 03:44:39.349954  507889 node_conditions.go:123] node cpu capacity is 2
	I0116 03:44:39.349972  507889 node_conditions.go:105] duration metric: took 8.557095ms to run NodePressure ...
	I0116 03:44:39.350000  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:44:39.798882  507889 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0116 03:44:39.816480  507889 kubeadm.go:787] kubelet initialised
	I0116 03:44:39.816514  507889 kubeadm.go:788] duration metric: took 17.598017ms waiting for restarted kubelet to initialise ...
	I0116 03:44:39.816527  507889 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 03:44:39.834946  507889 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-f8shl" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:39.854785  507889 pod_ready.go:97] node "default-k8s-diff-port-434445" hosting pod "coredns-5dd5756b68-f8shl" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-434445" has status "Ready":"False"
	I0116 03:44:39.854832  507889 pod_ready.go:81] duration metric: took 19.846427ms waiting for pod "coredns-5dd5756b68-f8shl" in "kube-system" namespace to be "Ready" ...
	E0116 03:44:39.854846  507889 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-434445" hosting pod "coredns-5dd5756b68-f8shl" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-434445" has status "Ready":"False"
	I0116 03:44:39.854864  507889 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-pmx8n" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:39.888659  507889 pod_ready.go:97] node "default-k8s-diff-port-434445" hosting pod "coredns-5dd5756b68-pmx8n" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-434445" has status "Ready":"False"
	I0116 03:44:39.888703  507889 pod_ready.go:81] duration metric: took 33.827201ms waiting for pod "coredns-5dd5756b68-pmx8n" in "kube-system" namespace to be "Ready" ...
	E0116 03:44:39.888718  507889 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-434445" hosting pod "coredns-5dd5756b68-pmx8n" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-434445" has status "Ready":"False"
	I0116 03:44:39.888728  507889 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-434445" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:39.897638  507889 pod_ready.go:97] node "default-k8s-diff-port-434445" hosting pod "etcd-default-k8s-diff-port-434445" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-434445" has status "Ready":"False"
	I0116 03:44:39.897674  507889 pod_ready.go:81] duration metric: took 8.927103ms waiting for pod "etcd-default-k8s-diff-port-434445" in "kube-system" namespace to be "Ready" ...
	E0116 03:44:39.897693  507889 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-434445" hosting pod "etcd-default-k8s-diff-port-434445" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-434445" has status "Ready":"False"
	I0116 03:44:39.897701  507889 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-434445" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:39.919418  507889 pod_ready.go:97] node "default-k8s-diff-port-434445" hosting pod "kube-apiserver-default-k8s-diff-port-434445" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-434445" has status "Ready":"False"
	I0116 03:44:39.919465  507889 pod_ready.go:81] duration metric: took 21.753159ms waiting for pod "kube-apiserver-default-k8s-diff-port-434445" in "kube-system" namespace to be "Ready" ...
	E0116 03:44:39.919495  507889 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-434445" hosting pod "kube-apiserver-default-k8s-diff-port-434445" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-434445" has status "Ready":"False"
	I0116 03:44:39.919505  507889 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-434445" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:40.203370  507889 pod_ready.go:97] node "default-k8s-diff-port-434445" hosting pod "kube-controller-manager-default-k8s-diff-port-434445" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-434445" has status "Ready":"False"
	I0116 03:44:40.203411  507889 pod_ready.go:81] duration metric: took 283.893646ms waiting for pod "kube-controller-manager-default-k8s-diff-port-434445" in "kube-system" namespace to be "Ready" ...
	E0116 03:44:40.203428  507889 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-434445" hosting pod "kube-controller-manager-default-k8s-diff-port-434445" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-434445" has status "Ready":"False"
	I0116 03:44:40.203440  507889 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-dcbqg" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:41.417889  507889 pod_ready.go:97] node "default-k8s-diff-port-434445" hosting pod "kube-proxy-dcbqg" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-434445" has status "Ready":"False"
	I0116 03:44:41.418011  507889 pod_ready.go:81] duration metric: took 1.214559235s waiting for pod "kube-proxy-dcbqg" in "kube-system" namespace to be "Ready" ...
	E0116 03:44:41.418033  507889 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-434445" hosting pod "kube-proxy-dcbqg" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-434445" has status "Ready":"False"
	I0116 03:44:41.418043  507889 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-434445" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:41.425177  507889 pod_ready.go:97] node "default-k8s-diff-port-434445" hosting pod "kube-scheduler-default-k8s-diff-port-434445" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-434445" has status "Ready":"False"
	I0116 03:44:41.425208  507889 pod_ready.go:81] duration metric: took 7.15251ms waiting for pod "kube-scheduler-default-k8s-diff-port-434445" in "kube-system" namespace to be "Ready" ...
	E0116 03:44:41.425220  507889 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-434445" hosting pod "kube-scheduler-default-k8s-diff-port-434445" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-434445" has status "Ready":"False"
	I0116 03:44:41.425226  507889 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:41.431059  507889 pod_ready.go:97] node "default-k8s-diff-port-434445" hosting pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-434445" has status "Ready":"False"
	I0116 03:44:41.431103  507889 pod_ready.go:81] duration metric: took 5.869165ms waiting for pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace to be "Ready" ...
	E0116 03:44:41.431115  507889 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-434445" hosting pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-434445" has status "Ready":"False"
	I0116 03:44:41.431122  507889 pod_ready.go:38] duration metric: took 1.614582832s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
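Note: the pod_ready loop above waits up to 4m0s for each system-critical pod to report the Ready condition, and additionally skips pods whose hosting node is itself not Ready yet (the repeated "skipping!" messages). A minimal client-go sketch of the Ready-condition check; the kubeconfig path is a placeholder and the pod/namespace names are examples taken from the log above (a sketch, not minikube's implementation):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the pod's Ready condition is True.
    func podReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        deadline := time.Now().Add(4 * time.Minute) // the log waits "up to 4m0s"
        for time.Now().Before(deadline) {
            pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(),
                "coredns-5dd5756b68-f8shl", metav1.GetOptions{})
            if err == nil && podReady(pod) {
                fmt.Println("pod is Ready")
                return
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Println("timed out waiting for pod to be Ready")
    }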
	I0116 03:44:41.431139  507889 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0116 03:44:41.445099  507889 ops.go:34] apiserver oom_adj: -16
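Note: the oom_adj check above confirms the kube-apiserver process is shielded from the OOM killer (value -16). A tiny Go sketch of the same probe, assuming a single kube-apiserver process; the -n flag pins the newest match so the /proc path is unambiguous (illustration only):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        // Newest process named kube-apiserver, roughly what $(pgrep kube-apiserver) yields.
        out, err := exec.Command("pgrep", "-n", "kube-apiserver").Output()
        if err != nil {
            panic(err)
        }
        pid := strings.TrimSpace(string(out))

        adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
        if err != nil {
            panic(err)
        }
        fmt.Printf("apiserver oom_adj: %s", adj)
    }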
	I0116 03:44:41.445129  507889 kubeadm.go:640] restartCluster took 21.83447374s
	I0116 03:44:41.445141  507889 kubeadm.go:406] StartCluster complete in 21.896543184s
	I0116 03:44:41.445168  507889 settings.go:142] acquiring lock: {Name:mk3ded99c06d285b174e76622bb0741c8dd2d8fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:44:41.445265  507889 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17965-468241/kubeconfig
	I0116 03:44:41.447590  507889 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17965-468241/kubeconfig: {Name:mkac3c63bbcba9b4fa1bea3480797757d703fb9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:44:41.544520  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0116 03:44:41.544743  507889 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0116 03:44:41.544842  507889 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-434445"
	I0116 03:44:41.544858  507889 config.go:182] Loaded profile config "default-k8s-diff-port-434445": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 03:44:41.544875  507889 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-434445"
	I0116 03:44:41.544891  507889 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-434445"
	W0116 03:44:41.544899  507889 addons.go:243] addon metrics-server should already be in state true
	I0116 03:44:41.544865  507889 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-434445"
	W0116 03:44:41.544915  507889 addons.go:243] addon storage-provisioner should already be in state true
	I0116 03:44:41.544971  507889 host.go:66] Checking if "default-k8s-diff-port-434445" exists ...
	I0116 03:44:41.544973  507889 host.go:66] Checking if "default-k8s-diff-port-434445" exists ...
	I0116 03:44:41.544862  507889 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-434445"
	I0116 03:44:41.545107  507889 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-434445"
	I0116 03:44:41.545473  507889 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:44:41.545479  507889 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:44:41.545505  507889 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:44:41.545673  507889 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:44:41.562983  507889 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44387
	I0116 03:44:41.562984  507889 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35715
	I0116 03:44:41.563677  507889 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:44:41.563684  507889 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:44:41.564352  507889 main.go:141] libmachine: Using API Version  1
	I0116 03:44:41.564382  507889 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:44:41.564540  507889 main.go:141] libmachine: Using API Version  1
	I0116 03:44:41.564569  507889 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:44:41.564753  507889 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:44:41.564937  507889 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:44:41.565113  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetState
	I0116 03:44:41.565350  507889 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:44:41.565418  507889 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:44:41.569050  507889 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-434445"
	W0116 03:44:41.569091  507889 addons.go:243] addon default-storageclass should already be in state true
	I0116 03:44:41.569125  507889 host.go:66] Checking if "default-k8s-diff-port-434445" exists ...
	I0116 03:44:41.569554  507889 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:44:41.569613  507889 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:44:41.584107  507889 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33349
	I0116 03:44:41.584756  507889 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:44:41.585422  507889 main.go:141] libmachine: Using API Version  1
	I0116 03:44:41.585457  507889 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:44:41.585634  507889 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42645
	I0116 03:44:41.585856  507889 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:44:41.586123  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetState
	I0116 03:44:41.586162  507889 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:44:41.586636  507889 main.go:141] libmachine: Using API Version  1
	I0116 03:44:41.586663  507889 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:44:41.587080  507889 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:44:41.587688  507889 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:44:41.587743  507889 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:44:41.588214  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .DriverName
	I0116 03:44:41.606456  507889 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40289
	I0116 03:44:41.644090  507889 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:44:41.819945  507889 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 03:44:41.929214  507889 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:44:41.929680  507889 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:44:42.246642  507889 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0116 03:44:42.246665  507889 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0116 03:44:42.246696  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHHostname
	I0116 03:44:42.247294  507889 main.go:141] libmachine: Using API Version  1
	I0116 03:44:42.247332  507889 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:44:42.247740  507889 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:44:42.247987  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetState
	I0116 03:44:42.250254  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .DriverName
	I0116 03:44:42.250570  507889 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0116 03:44:42.250588  507889 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0116 03:44:42.250609  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHHostname
	I0116 03:44:42.251130  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:42.251863  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ea:d5", ip: ""} in network mk-default-k8s-diff-port-434445: {Iface:virbr1 ExpiryTime:2024-01-16 04:44:01 +0000 UTC Type:0 Mac:52:54:00:78:ea:d5 Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:default-k8s-diff-port-434445 Clientid:01:52:54:00:78:ea:d5}
	I0116 03:44:42.251896  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined IP address 192.168.50.236 and MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:42.252245  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHPort
	I0116 03:44:42.252473  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHKeyPath
	I0116 03:44:42.252680  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHUsername
	I0116 03:44:42.252842  507889 sshutil.go:53] new ssh client: &{IP:192.168.50.236 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/default-k8s-diff-port-434445/id_rsa Username:docker}
	I0116 03:44:42.254224  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:42.254837  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ea:d5", ip: ""} in network mk-default-k8s-diff-port-434445: {Iface:virbr1 ExpiryTime:2024-01-16 04:44:01 +0000 UTC Type:0 Mac:52:54:00:78:ea:d5 Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:default-k8s-diff-port-434445 Clientid:01:52:54:00:78:ea:d5}
	I0116 03:44:42.254872  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined IP address 192.168.50.236 and MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:42.255050  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHPort
	I0116 03:44:42.255240  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHKeyPath
	I0116 03:44:42.255422  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHUsername
	I0116 03:44:42.255585  507889 sshutil.go:53] new ssh client: &{IP:192.168.50.236 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/default-k8s-diff-port-434445/id_rsa Username:docker}
	I0116 03:44:42.264367  507889 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36555
	I0116 03:44:42.264832  507889 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:44:42.265322  507889 main.go:141] libmachine: Using API Version  1
	I0116 03:44:42.265352  507889 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:44:42.265700  507889 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:44:42.266266  507889 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:44:42.266306  507889 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:44:42.281852  507889 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32929
	I0116 03:44:42.282351  507889 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:44:42.282914  507889 main.go:141] libmachine: Using API Version  1
	I0116 03:44:42.282944  507889 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:44:42.283363  507889 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:44:42.283599  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetState
	I0116 03:44:42.285584  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .DriverName
	I0116 03:44:42.395709  507889 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0116 03:44:42.398672  507889 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0116 03:44:42.493544  507889 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0116 03:44:42.531626  507889 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0116 03:44:42.531683  507889 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0116 03:44:42.531717  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHHostname
	I0116 03:44:42.535980  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:42.536575  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ea:d5", ip: ""} in network mk-default-k8s-diff-port-434445: {Iface:virbr1 ExpiryTime:2024-01-16 04:44:01 +0000 UTC Type:0 Mac:52:54:00:78:ea:d5 Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:default-k8s-diff-port-434445 Clientid:01:52:54:00:78:ea:d5}
	I0116 03:44:42.536604  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined IP address 192.168.50.236 and MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:42.537018  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHPort
	I0116 03:44:42.537286  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHKeyPath
	I0116 03:44:42.537510  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHUsername
	I0116 03:44:42.537850  507889 sshutil.go:53] new ssh client: &{IP:192.168.50.236 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/default-k8s-diff-port-434445/id_rsa Username:docker}
	I0116 03:44:42.545910  507889 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.001352094s)
	I0116 03:44:42.545983  507889 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0116 03:44:42.713693  507889 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0116 03:44:42.713718  507889 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0116 03:44:42.752674  507889 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0116 03:44:42.752717  507889 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0116 03:44:42.790178  507889 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0116 03:44:42.790214  507889 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0116 03:44:42.825256  507889 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0116 03:44:43.010741  507889 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-434445" context rescaled to 1 replicas
	I0116 03:44:43.010801  507889 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.236 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0116 03:44:43.014031  507889 out.go:177] * Verifying Kubernetes components...
	I0116 03:44:43.016143  507889 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 03:44:44.415462  507889 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.921726194s)
	I0116 03:44:44.415532  507889 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.921908068s)
	I0116 03:44:44.415547  507889 main.go:141] libmachine: Making call to close driver server
	I0116 03:44:44.415631  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .Close
	I0116 03:44:44.415579  507889 main.go:141] libmachine: Making call to close driver server
	I0116 03:44:44.415854  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .Close
	I0116 03:44:44.416266  507889 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:44:44.416376  507889 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:44:44.416393  507889 main.go:141] libmachine: Making call to close driver server
	I0116 03:44:44.416424  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .Close
	I0116 03:44:44.416310  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | Closing plugin on server side
	I0116 03:44:44.416310  507889 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:44:44.416595  507889 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:44:44.416658  507889 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:44:44.416671  507889 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:44:44.416977  507889 main.go:141] libmachine: Making call to close driver server
	I0116 03:44:44.417014  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .Close
	I0116 03:44:44.416332  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | Closing plugin on server side
	I0116 03:44:44.417305  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | Closing plugin on server side
	I0116 03:44:44.417358  507889 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:44:44.417375  507889 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:44:44.450870  507889 main.go:141] libmachine: Making call to close driver server
	I0116 03:44:44.450908  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .Close
	I0116 03:44:44.451327  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | Closing plugin on server side
	I0116 03:44:44.451367  507889 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:44:44.451378  507889 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:44:44.496654  507889 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.671338305s)
	I0116 03:44:44.496732  507889 main.go:141] libmachine: Making call to close driver server
	I0116 03:44:44.496744  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .Close
	I0116 03:44:44.496678  507889 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.480503621s)
	I0116 03:44:44.496845  507889 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-434445" to be "Ready" ...
	I0116 03:44:44.497092  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | Closing plugin on server side
	I0116 03:44:44.497088  507889 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:44:44.497166  507889 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:44:44.497188  507889 main.go:141] libmachine: Making call to close driver server
	I0116 03:44:44.497198  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .Close
	I0116 03:44:44.497445  507889 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:44:44.497489  507889 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:44:44.497499  507889 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-434445"
	I0116 03:44:44.497502  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | Closing plugin on server side
	I0116 03:44:44.500234  507889 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0116 03:44:42.355473  507510 retry.go:31] will retry after 6.423018357s: kubelet not initialised
	I0116 03:44:42.543045  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:44:44.974520  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:44:41.280410  507257 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/files/etc/ssl/certs/4754782.pem --> /usr/share/ca-certificates/4754782.pem (1708 bytes)
	I0116 03:44:41.488388  507257 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0116 03:44:41.515741  507257 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/certs/475478.pem --> /usr/share/ca-certificates/475478.pem (1338 bytes)
	I0116 03:44:41.541744  507257 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0116 03:44:41.564056  507257 ssh_runner.go:195] Run: openssl version
	I0116 03:44:41.571197  507257 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0116 03:44:41.586430  507257 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0116 03:44:41.592334  507257 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 16 02:35 /usr/share/ca-certificates/minikubeCA.pem
	I0116 03:44:41.592405  507257 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0116 03:44:41.599013  507257 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0116 03:44:41.612793  507257 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/475478.pem && ln -fs /usr/share/ca-certificates/475478.pem /etc/ssl/certs/475478.pem"
	I0116 03:44:41.624554  507257 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/475478.pem
	I0116 03:44:41.629558  507257 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 16 02:44 /usr/share/ca-certificates/475478.pem
	I0116 03:44:41.629643  507257 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/475478.pem
	I0116 03:44:41.635518  507257 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/475478.pem /etc/ssl/certs/51391683.0"
	I0116 03:44:41.649567  507257 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4754782.pem && ln -fs /usr/share/ca-certificates/4754782.pem /etc/ssl/certs/4754782.pem"
	I0116 03:44:41.662276  507257 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4754782.pem
	I0116 03:44:41.667618  507257 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 16 02:44 /usr/share/ca-certificates/4754782.pem
	I0116 03:44:41.667699  507257 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4754782.pem
	I0116 03:44:41.678158  507257 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4754782.pem /etc/ssl/certs/3ec20f2e.0"
	I0116 03:44:41.692147  507257 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0116 03:44:41.698226  507257 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0116 03:44:41.706563  507257 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0116 03:44:41.713387  507257 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0116 03:44:41.721243  507257 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0116 03:44:41.728346  507257 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0116 03:44:41.735446  507257 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0116 03:44:41.743670  507257 kubeadm.go:404] StartCluster: {Name:embed-certs-615980 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-615980 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.159 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 03:44:41.743786  507257 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0116 03:44:41.743860  507257 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0116 03:44:41.799605  507257 cri.go:89] found id: ""
	I0116 03:44:41.799700  507257 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0116 03:44:41.812356  507257 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0116 03:44:41.812388  507257 kubeadm.go:636] restartCluster start
	I0116 03:44:41.812457  507257 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0116 03:44:41.823906  507257 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:41.825131  507257 kubeconfig.go:92] found "embed-certs-615980" server: "https://192.168.72.159:8443"
	I0116 03:44:41.827484  507257 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0116 03:44:41.838289  507257 api_server.go:166] Checking apiserver status ...
	I0116 03:44:41.838386  507257 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:41.852927  507257 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:42.338430  507257 api_server.go:166] Checking apiserver status ...
	I0116 03:44:42.338548  507257 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:42.353029  507257 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:42.838419  507257 api_server.go:166] Checking apiserver status ...
	I0116 03:44:42.838526  507257 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:42.854254  507257 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:43.338802  507257 api_server.go:166] Checking apiserver status ...
	I0116 03:44:43.338934  507257 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:43.356427  507257 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:43.839009  507257 api_server.go:166] Checking apiserver status ...
	I0116 03:44:43.839103  507257 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:43.853265  507257 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:44.338711  507257 api_server.go:166] Checking apiserver status ...
	I0116 03:44:44.338803  507257 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:44.353364  507257 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:44.838956  507257 api_server.go:166] Checking apiserver status ...
	I0116 03:44:44.839070  507257 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:44.851711  507257 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:45.339282  507257 api_server.go:166] Checking apiserver status ...
	I0116 03:44:45.339397  507257 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:45.354275  507257 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:45.838803  507257 api_server.go:166] Checking apiserver status ...
	I0116 03:44:45.838899  507257 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:45.853557  507257 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:44.501958  507889 addons.go:505] enable addons completed in 2.957229306s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0116 03:44:46.502807  507889 node_ready.go:58] node "default-k8s-diff-port-434445" has status "Ready":"False"
	I0116 03:44:48.786485  507510 retry.go:31] will retry after 18.441149821s: kubelet not initialised
	I0116 03:44:46.975660  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:44:48.981964  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:44:46.339198  507257 api_server.go:166] Checking apiserver status ...
	I0116 03:44:46.339328  507257 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:46.356092  507257 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:46.839356  507257 api_server.go:166] Checking apiserver status ...
	I0116 03:44:46.839461  507257 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:46.857070  507257 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:47.338405  507257 api_server.go:166] Checking apiserver status ...
	I0116 03:44:47.338546  507257 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:47.354976  507257 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:47.839369  507257 api_server.go:166] Checking apiserver status ...
	I0116 03:44:47.839468  507257 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:47.854465  507257 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:48.339102  507257 api_server.go:166] Checking apiserver status ...
	I0116 03:44:48.339217  507257 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:48.352361  507257 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:48.838853  507257 api_server.go:166] Checking apiserver status ...
	I0116 03:44:48.838968  507257 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:48.853271  507257 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:49.338643  507257 api_server.go:166] Checking apiserver status ...
	I0116 03:44:49.338751  507257 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:49.353674  507257 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:49.839214  507257 api_server.go:166] Checking apiserver status ...
	I0116 03:44:49.839309  507257 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:49.852699  507257 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:50.339060  507257 api_server.go:166] Checking apiserver status ...
	I0116 03:44:50.339186  507257 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:50.353143  507257 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:50.838646  507257 api_server.go:166] Checking apiserver status ...
	I0116 03:44:50.838782  507257 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:50.852767  507257 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:48.005726  507889 node_ready.go:49] node "default-k8s-diff-port-434445" has status "Ready":"True"
	I0116 03:44:48.005760  507889 node_ready.go:38] duration metric: took 3.508890685s waiting for node "default-k8s-diff-port-434445" to be "Ready" ...
	I0116 03:44:48.005775  507889 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 03:44:48.015385  507889 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-f8shl" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:48.027358  507889 pod_ready.go:92] pod "coredns-5dd5756b68-f8shl" in "kube-system" namespace has status "Ready":"True"
	I0116 03:44:48.027383  507889 pod_ready.go:81] duration metric: took 11.966322ms waiting for pod "coredns-5dd5756b68-f8shl" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:48.027397  507889 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-pmx8n" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:48.034156  507889 pod_ready.go:92] pod "coredns-5dd5756b68-pmx8n" in "kube-system" namespace has status "Ready":"True"
	I0116 03:44:48.034179  507889 pod_ready.go:81] duration metric: took 6.775784ms waiting for pod "coredns-5dd5756b68-pmx8n" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:48.034188  507889 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-434445" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:48.039933  507889 pod_ready.go:92] pod "etcd-default-k8s-diff-port-434445" in "kube-system" namespace has status "Ready":"True"
	I0116 03:44:48.039954  507889 pod_ready.go:81] duration metric: took 5.758946ms waiting for pod "etcd-default-k8s-diff-port-434445" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:48.039964  507889 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-434445" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:48.045351  507889 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-434445" in "kube-system" namespace has status "Ready":"True"
	I0116 03:44:48.045376  507889 pod_ready.go:81] duration metric: took 5.405684ms waiting for pod "kube-apiserver-default-k8s-diff-port-434445" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:48.045386  507889 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-434445" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:48.413479  507889 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-434445" in "kube-system" namespace has status "Ready":"True"
	I0116 03:44:48.413508  507889 pod_ready.go:81] duration metric: took 368.114361ms waiting for pod "kube-controller-manager-default-k8s-diff-port-434445" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:48.413522  507889 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-dcbqg" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:48.808095  507889 pod_ready.go:92] pod "kube-proxy-dcbqg" in "kube-system" namespace has status "Ready":"True"
	I0116 03:44:48.808132  507889 pod_ready.go:81] duration metric: took 394.600854ms waiting for pod "kube-proxy-dcbqg" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:48.808147  507889 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-434445" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:50.817248  507889 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-434445" in "kube-system" namespace has status "Ready":"False"
	I0116 03:44:51.474904  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:44:53.475529  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:44:55.475807  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:44:51.339105  507257 api_server.go:166] Checking apiserver status ...
	I0116 03:44:51.339225  507257 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:51.352821  507257 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:51.838856  507257 api_server.go:166] Checking apiserver status ...
	I0116 03:44:51.838985  507257 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:51.852211  507257 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:51.852258  507257 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0116 03:44:51.852271  507257 kubeadm.go:1135] stopping kube-system containers ...
	I0116 03:44:51.852289  507257 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0116 03:44:51.852360  507257 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0116 03:44:51.897049  507257 cri.go:89] found id: ""
	I0116 03:44:51.897139  507257 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0116 03:44:51.915124  507257 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0116 03:44:51.926221  507257 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0116 03:44:51.926311  507257 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0116 03:44:51.938314  507257 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0116 03:44:51.938358  507257 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:44:52.077173  507257 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:44:52.733999  507257 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:44:52.971172  507257 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:44:53.063705  507257 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:44:53.200256  507257 api_server.go:52] waiting for apiserver process to appear ...
	I0116 03:44:53.200364  507257 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:44:53.701337  507257 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:44:54.201266  507257 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:44:54.700485  507257 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:44:55.200720  507257 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:44:55.701348  507257 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:44:55.725792  507257 api_server.go:72] duration metric: took 2.52553608s to wait for apiserver process to appear ...
	I0116 03:44:55.725826  507257 api_server.go:88] waiting for apiserver healthz status ...
	I0116 03:44:55.725851  507257 api_server.go:253] Checking apiserver healthz at https://192.168.72.159:8443/healthz ...
	I0116 03:44:52.317689  507889 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-434445" in "kube-system" namespace has status "Ready":"True"
	I0116 03:44:52.317718  507889 pod_ready.go:81] duration metric: took 3.509561404s waiting for pod "kube-scheduler-default-k8s-diff-port-434445" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:52.317731  507889 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:54.326412  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:44:56.327634  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:44:57.974017  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:44:59.977499  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:44:59.850423  507257 api_server.go:279] https://192.168.72.159:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0116 03:44:59.850456  507257 api_server.go:103] status: https://192.168.72.159:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0116 03:44:59.850471  507257 api_server.go:253] Checking apiserver healthz at https://192.168.72.159:8443/healthz ...
	I0116 03:44:59.998251  507257 api_server.go:279] https://192.168.72.159:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0116 03:44:59.998310  507257 api_server.go:103] status: https://192.168.72.159:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0116 03:45:00.226594  507257 api_server.go:253] Checking apiserver healthz at https://192.168.72.159:8443/healthz ...
	I0116 03:45:00.233826  507257 api_server.go:279] https://192.168.72.159:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0116 03:45:00.233876  507257 api_server.go:103] status: https://192.168.72.159:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0116 03:45:00.726919  507257 api_server.go:253] Checking apiserver healthz at https://192.168.72.159:8443/healthz ...
	I0116 03:45:00.732711  507257 api_server.go:279] https://192.168.72.159:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0116 03:45:00.732748  507257 api_server.go:103] status: https://192.168.72.159:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0116 03:45:01.226693  507257 api_server.go:253] Checking apiserver healthz at https://192.168.72.159:8443/healthz ...
	I0116 03:45:01.232420  507257 api_server.go:279] https://192.168.72.159:8443/healthz returned 200:
	ok
	I0116 03:45:01.242029  507257 api_server.go:141] control plane version: v1.28.4
	I0116 03:45:01.242078  507257 api_server.go:131] duration metric: took 5.516243097s to wait for apiserver health ...
	I0116 03:45:01.242092  507257 cni.go:84] Creating CNI manager for ""
	I0116 03:45:01.242101  507257 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 03:45:01.244395  507257 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0116 03:45:01.246155  507257 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0116 03:44:58.827760  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:01.327190  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:02.475858  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:04.974991  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:01.270205  507257 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0116 03:45:01.350402  507257 system_pods.go:43] waiting for kube-system pods to appear ...
	I0116 03:45:01.384475  507257 system_pods.go:59] 8 kube-system pods found
	I0116 03:45:01.384536  507257 system_pods.go:61] "coredns-5dd5756b68-ddjkl" [fe342d2a-7d12-4b37-be29-c0d77b920964] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0116 03:45:01.384549  507257 system_pods.go:61] "etcd-embed-certs-615980" [7b7af2e1-b3bb-4c47-862b-838167453939] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0116 03:45:01.384562  507257 system_pods.go:61] "kube-apiserver-embed-certs-615980" [bb883c31-8391-467f-9b4a-affb05a56d49] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0116 03:45:01.384571  507257 system_pods.go:61] "kube-controller-manager-embed-certs-615980" [74f7c5e3-818c-4e15-b693-d4f81308bf9d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0116 03:45:01.384584  507257 system_pods.go:61] "kube-proxy-6jpr7" [e62c9202-8b4f-4fe7-8aa4-b931afd4b028] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0116 03:45:01.384602  507257 system_pods.go:61] "kube-scheduler-embed-certs-615980" [f03d5c9c-af6a-437b-92bb-7c5a46259bbd] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0116 03:45:01.384618  507257 system_pods.go:61] "metrics-server-57f55c9bc5-48gnw" [1fcb32b6-f985-428d-8f02-1198d704d8c7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:45:01.384632  507257 system_pods.go:61] "storage-provisioner" [6264adaa-89e8-4f0d-9394-d7325338a2f5] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0116 03:45:01.384642  507257 system_pods.go:74] duration metric: took 34.114711ms to wait for pod list to return data ...
	I0116 03:45:01.384656  507257 node_conditions.go:102] verifying NodePressure condition ...
	I0116 03:45:01.392555  507257 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 03:45:01.392597  507257 node_conditions.go:123] node cpu capacity is 2
	I0116 03:45:01.392614  507257 node_conditions.go:105] duration metric: took 7.946538ms to run NodePressure ...
	I0116 03:45:01.392644  507257 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:45:01.788178  507257 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0116 03:45:01.795913  507257 kubeadm.go:787] kubelet initialised
	I0116 03:45:01.795945  507257 kubeadm.go:788] duration metric: took 7.737644ms waiting for restarted kubelet to initialise ...
	I0116 03:45:01.795955  507257 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 03:45:01.806433  507257 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-ddjkl" in "kube-system" namespace to be "Ready" ...
	I0116 03:45:03.815645  507257 pod_ready.go:102] pod "coredns-5dd5756b68-ddjkl" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:05.821193  507257 pod_ready.go:92] pod "coredns-5dd5756b68-ddjkl" in "kube-system" namespace has status "Ready":"True"
	I0116 03:45:05.821231  507257 pod_ready.go:81] duration metric: took 4.014760393s waiting for pod "coredns-5dd5756b68-ddjkl" in "kube-system" namespace to be "Ready" ...
	I0116 03:45:05.821245  507257 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-615980" in "kube-system" namespace to be "Ready" ...
	I0116 03:45:03.825695  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:05.826742  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:07.234109  507510 kubeadm.go:787] kubelet initialised
	I0116 03:45:07.234137  507510 kubeadm.go:788] duration metric: took 45.657540747s waiting for restarted kubelet to initialise ...
	I0116 03:45:07.234145  507510 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 03:45:07.239858  507510 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-7q4wc" in "kube-system" namespace to be "Ready" ...
	I0116 03:45:07.247210  507510 pod_ready.go:92] pod "coredns-5644d7b6d9-7q4wc" in "kube-system" namespace has status "Ready":"True"
	I0116 03:45:07.247237  507510 pod_ready.go:81] duration metric: took 7.336988ms waiting for pod "coredns-5644d7b6d9-7q4wc" in "kube-system" namespace to be "Ready" ...
	I0116 03:45:07.247249  507510 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-q2gxq" in "kube-system" namespace to be "Ready" ...
	I0116 03:45:07.252865  507510 pod_ready.go:92] pod "coredns-5644d7b6d9-q2gxq" in "kube-system" namespace has status "Ready":"True"
	I0116 03:45:07.252900  507510 pod_ready.go:81] duration metric: took 5.642204ms waiting for pod "coredns-5644d7b6d9-q2gxq" in "kube-system" namespace to be "Ready" ...
	I0116 03:45:07.252925  507510 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-696770" in "kube-system" namespace to be "Ready" ...
	I0116 03:45:07.259169  507510 pod_ready.go:92] pod "etcd-old-k8s-version-696770" in "kube-system" namespace has status "Ready":"True"
	I0116 03:45:07.259193  507510 pod_ready.go:81] duration metric: took 6.260142ms waiting for pod "etcd-old-k8s-version-696770" in "kube-system" namespace to be "Ready" ...
	I0116 03:45:07.259202  507510 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-696770" in "kube-system" namespace to be "Ready" ...
	I0116 03:45:07.264591  507510 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-696770" in "kube-system" namespace has status "Ready":"True"
	I0116 03:45:07.264622  507510 pod_ready.go:81] duration metric: took 5.411866ms waiting for pod "kube-apiserver-old-k8s-version-696770" in "kube-system" namespace to be "Ready" ...
	I0116 03:45:07.264635  507510 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-696770" in "kube-system" namespace to be "Ready" ...
	I0116 03:45:07.632057  507510 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-696770" in "kube-system" namespace has status "Ready":"True"
	I0116 03:45:07.632093  507510 pod_ready.go:81] duration metric: took 367.447202ms waiting for pod "kube-controller-manager-old-k8s-version-696770" in "kube-system" namespace to be "Ready" ...
	I0116 03:45:07.632110  507510 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-9pfdj" in "kube-system" namespace to be "Ready" ...
	I0116 03:45:08.033002  507510 pod_ready.go:92] pod "kube-proxy-9pfdj" in "kube-system" namespace has status "Ready":"True"
	I0116 03:45:08.033028  507510 pod_ready.go:81] duration metric: took 400.910907ms waiting for pod "kube-proxy-9pfdj" in "kube-system" namespace to be "Ready" ...
	I0116 03:45:08.033038  507510 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-696770" in "kube-system" namespace to be "Ready" ...
	I0116 03:45:08.433134  507510 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-696770" in "kube-system" namespace has status "Ready":"True"
	I0116 03:45:08.433165  507510 pod_ready.go:81] duration metric: took 400.1203ms waiting for pod "kube-scheduler-old-k8s-version-696770" in "kube-system" namespace to be "Ready" ...
	I0116 03:45:08.433180  507510 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace to be "Ready" ...
	I0116 03:45:07.485372  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:09.979593  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:07.830703  507257 pod_ready.go:102] pod "etcd-embed-certs-615980" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:10.328466  507257 pod_ready.go:102] pod "etcd-embed-certs-615980" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:08.325925  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:10.825155  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:10.442598  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:12.941713  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:12.478975  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:14.480154  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:12.329199  507257 pod_ready.go:102] pod "etcd-embed-certs-615980" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:13.830177  507257 pod_ready.go:92] pod "etcd-embed-certs-615980" in "kube-system" namespace has status "Ready":"True"
	I0116 03:45:13.830207  507257 pod_ready.go:81] duration metric: took 8.008954008s waiting for pod "etcd-embed-certs-615980" in "kube-system" namespace to be "Ready" ...
	I0116 03:45:13.830217  507257 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-615980" in "kube-system" namespace to be "Ready" ...
	I0116 03:45:13.837420  507257 pod_ready.go:92] pod "kube-apiserver-embed-certs-615980" in "kube-system" namespace has status "Ready":"True"
	I0116 03:45:13.837448  507257 pod_ready.go:81] duration metric: took 7.22323ms waiting for pod "kube-apiserver-embed-certs-615980" in "kube-system" namespace to be "Ready" ...
	I0116 03:45:13.837461  507257 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-615980" in "kube-system" namespace to be "Ready" ...
	I0116 03:45:13.845996  507257 pod_ready.go:92] pod "kube-controller-manager-embed-certs-615980" in "kube-system" namespace has status "Ready":"True"
	I0116 03:45:13.846029  507257 pod_ready.go:81] duration metric: took 8.558317ms waiting for pod "kube-controller-manager-embed-certs-615980" in "kube-system" namespace to be "Ready" ...
	I0116 03:45:13.846040  507257 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-6jpr7" in "kube-system" namespace to be "Ready" ...
	I0116 03:45:13.852645  507257 pod_ready.go:92] pod "kube-proxy-6jpr7" in "kube-system" namespace has status "Ready":"True"
	I0116 03:45:13.852674  507257 pod_ready.go:81] duration metric: took 6.627181ms waiting for pod "kube-proxy-6jpr7" in "kube-system" namespace to be "Ready" ...
	I0116 03:45:13.852683  507257 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-615980" in "kube-system" namespace to be "Ready" ...
	I0116 03:45:13.858818  507257 pod_ready.go:92] pod "kube-scheduler-embed-certs-615980" in "kube-system" namespace has status "Ready":"True"
	I0116 03:45:13.858844  507257 pod_ready.go:81] duration metric: took 6.154319ms waiting for pod "kube-scheduler-embed-certs-615980" in "kube-system" namespace to be "Ready" ...
	I0116 03:45:13.858853  507257 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace to be "Ready" ...
	I0116 03:45:15.867133  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:12.826463  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:14.826507  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:14.942079  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:17.442566  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:16.976095  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:19.477899  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:17.868381  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:20.367064  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:17.326184  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:19.328194  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:19.942113  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:21.942853  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:24.441140  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:21.975337  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:24.474400  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:22.368008  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:24.866716  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:21.825428  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:23.825828  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:25.829356  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:26.441756  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:28.443869  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:26.475939  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:28.476308  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:26.866760  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:29.367575  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:28.326756  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:30.825813  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:30.942631  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:33.440480  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:30.975870  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:33.475828  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:31.866401  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:33.867719  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:33.325388  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:35.325485  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:35.939804  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:37.940883  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:35.974504  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:37.975857  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:39.977413  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:36.367513  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:38.865702  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:40.866834  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:37.325804  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:39.326635  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:40.440287  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:42.440838  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:44.441037  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:42.475940  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:44.981122  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:42.867673  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:45.368285  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:41.825982  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:43.826700  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:45.828002  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:46.443286  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:48.941625  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:47.474621  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:49.475149  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:47.867135  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:49.867865  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:48.326035  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:50.327538  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:50.943718  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:53.443986  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:51.977212  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:54.477161  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:52.368444  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:54.375089  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:52.826163  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:55.327160  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:55.940561  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:57.942988  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:56.975470  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:58.975829  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:56.867648  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:59.367479  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:57.826140  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:59.826286  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:00.440963  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:02.941202  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:00.979308  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:03.474099  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:05.478535  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:01.868806  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:04.368227  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:01.826702  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:04.325060  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:06.326882  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:05.441837  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:07.444944  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:07.975344  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:09.975486  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:06.868137  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:09.367752  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:08.329967  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:10.826182  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:09.940745  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:11.942989  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:14.441331  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:11.977171  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:13.977835  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:11.866817  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:13.867951  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:13.327232  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:15.826862  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:16.442525  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:18.442754  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:16.475367  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:18.475903  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:16.367830  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:18.368100  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:20.866302  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:18.326376  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:20.827236  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:20.940998  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:22.941332  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:20.980371  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:23.476451  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:22.868945  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:25.366857  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:23.326576  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:25.826000  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:25.442029  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:27.941061  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:25.974860  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:27.975178  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:29.978092  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:27.370097  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:29.869827  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:28.326735  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:30.826672  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:30.442579  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:32.941784  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:32.475984  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:34.973934  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:31.870772  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:34.367380  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:32.827910  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:34.828185  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:35.440418  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:37.441206  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:39.441254  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:36.974076  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:38.975169  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:36.867231  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:39.366005  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:37.327553  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:39.826218  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:41.941046  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:43.941530  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:40.976023  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:43.478194  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:41.367293  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:43.867097  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:45.867843  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:42.325426  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:44.325723  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:46.326155  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:46.441175  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:48.940677  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:45.974937  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:47.975141  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:50.474687  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:47.868006  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:49.868890  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:48.326634  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:50.326914  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:50.941220  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:53.440868  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:52.475138  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:54.475546  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:52.365917  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:54.366514  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:52.826279  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:55.324177  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:55.441130  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:57.943093  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:56.976380  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:59.478090  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:56.368894  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:58.868051  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:57.326296  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:59.326416  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:01.327894  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:00.440504  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:02.441176  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:04.442171  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:01.975498  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:03.978490  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:01.369736  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:03.871663  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:03.825943  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:05.828215  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:06.443721  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:08.940212  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:06.475354  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:08.975707  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:06.366468  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:08.366998  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:10.368019  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:08.326243  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:10.824873  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:10.942042  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:13.440495  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:11.475551  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:13.475904  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:12.867030  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:14.872409  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:12.826040  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:15.325658  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:15.941844  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:18.440574  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:15.975125  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:17.977326  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:20.474897  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:17.367390  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:19.369090  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:17.325860  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:19.829310  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:20.940407  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:22.941824  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:22.475218  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:24.477773  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:21.866953  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:23.867055  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:22.326660  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:24.327689  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:25.441214  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:27.442253  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:26.975120  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:29.477805  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:26.367295  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:28.867376  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:26.826666  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:29.327606  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:29.940650  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:31.941021  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:34.443144  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:31.978544  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:34.475301  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:31.367770  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:33.867084  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:35.870968  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:31.826565  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:34.326677  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:36.941363  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:38.942121  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:36.974797  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:38.975027  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:38.368025  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:40.866714  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:36.828347  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:39.327130  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:41.441555  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:43.442806  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:40.977172  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:43.476163  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:43.367966  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:45.867460  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:41.826087  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:43.826389  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:46.326497  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:45.941267  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:48.443875  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:45.974452  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:47.977610  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:50.475536  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:48.367053  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:50.368023  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:48.824924  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:50.825835  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:50.941125  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:52.941644  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:52.975726  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:55.476453  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:52.866871  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:55.367951  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:52.826166  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:54.826434  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:55.442084  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:57.442829  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:57.974382  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:59.974448  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:57.867742  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:00.366490  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:57.325608  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:59.825525  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:59.939515  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:01.941648  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:03.942290  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:01.975159  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:03.977002  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:02.366764  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:04.366818  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:01.831740  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:04.326341  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:06.440494  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:08.940336  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:06.475364  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:08.482783  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:06.367160  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:08.867294  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:06.825331  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:08.826594  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:11.324828  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:10.942696  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:13.441805  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:10.974798  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:12.975009  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:14.976154  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:11.366189  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:13.369852  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:15.867536  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:13.327353  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:15.825738  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:15.941304  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:17.942206  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:17.474204  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:19.475630  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:19.974269  507339 pod_ready.go:81] duration metric: took 4m0.007375913s waiting for pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace to be "Ready" ...
	E0116 03:48:19.974299  507339 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0116 03:48:19.974310  507339 pod_ready.go:38] duration metric: took 4m6.26032663s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
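The long runs of pod_ready.go:102 lines above are minikube's readiness poll: it re-fetches the metrics-server pod every couple of seconds and checks its Ready condition until the condition turns True or the 4m0s wait expires, which is the `context deadline exceeded` recorded at 03:48:19. As a rough illustration only — this is not minikube's actual pod_ready.go, and waitForPodReady is a hypothetical name — the same pattern written against client-go looks roughly like this:

// podready_sketch.go — illustrative sketch of the polling pattern seen in the
// pod_ready.go:102 lines above; not minikube's implementation.
package podready

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// waitForPodReady (hypothetical helper) polls the pod until its Ready
// condition is True or the timeout elapses.
func waitForPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil // pod reported Ready:"True"
				}
			}
			// mirrors the repeated `has status "Ready":"False"` log lines
			fmt.Printf("pod %q in %q namespace has status Ready:False\n", name, ns)
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("waitPodCondition: context deadline exceeded")
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(2 * time.Second):
		}
	}
}

In the runs above the condition never becomes True inside the 4m0s window, so the poll ends with the WaitExtra deadline error and the test process moves on to the apiserver checks below.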
	I0116 03:48:19.974365  507339 api_server.go:52] waiting for apiserver process to appear ...
	I0116 03:48:19.974431  507339 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0116 03:48:19.974529  507339 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0116 03:48:20.042853  507339 cri.go:89] found id: "de79f87bc28449077936386f603cc5df933f63455bc96cf6ff7e7c6e4bd32bc4"
	I0116 03:48:20.042886  507339 cri.go:89] found id: ""
	I0116 03:48:20.042896  507339 logs.go:284] 1 containers: [de79f87bc28449077936386f603cc5df933f63455bc96cf6ff7e7c6e4bd32bc4]
	I0116 03:48:20.042961  507339 ssh_runner.go:195] Run: which crictl
	I0116 03:48:20.049795  507339 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0116 03:48:20.049884  507339 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0116 03:48:20.092507  507339 cri.go:89] found id: "01aaf51cd40b9137f2395404bb15b7372927f81dc8a4e884600dc8cba9bbeb8e"
	I0116 03:48:20.092541  507339 cri.go:89] found id: ""
	I0116 03:48:20.092551  507339 logs.go:284] 1 containers: [01aaf51cd40b9137f2395404bb15b7372927f81dc8a4e884600dc8cba9bbeb8e]
	I0116 03:48:20.092619  507339 ssh_runner.go:195] Run: which crictl
	I0116 03:48:20.097091  507339 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0116 03:48:20.097176  507339 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0116 03:48:20.139182  507339 cri.go:89] found id: "c13ef036a1014a6bbc5c179a0889fb6ff615988713589da58240b7c637adf687"
	I0116 03:48:20.139218  507339 cri.go:89] found id: ""
	I0116 03:48:20.139229  507339 logs.go:284] 1 containers: [c13ef036a1014a6bbc5c179a0889fb6ff615988713589da58240b7c637adf687]
	I0116 03:48:20.139297  507339 ssh_runner.go:195] Run: which crictl
	I0116 03:48:20.145129  507339 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0116 03:48:20.145210  507339 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0116 03:48:20.191055  507339 cri.go:89] found id: "33381edd7dded1b5ed158738429b4a7f1f03a6171f2af0832bb5a9864c950725"
	I0116 03:48:20.191090  507339 cri.go:89] found id: ""
	I0116 03:48:20.191098  507339 logs.go:284] 1 containers: [33381edd7dded1b5ed158738429b4a7f1f03a6171f2af0832bb5a9864c950725]
	I0116 03:48:20.191161  507339 ssh_runner.go:195] Run: which crictl
	I0116 03:48:20.195688  507339 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0116 03:48:20.195765  507339 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0116 03:48:20.242718  507339 cri.go:89] found id: "eba2964f029ac61d9cfb59dc548ae05c832f9b23713f51924291fbfc5de985ed"
	I0116 03:48:20.242746  507339 cri.go:89] found id: ""
	I0116 03:48:20.242754  507339 logs.go:284] 1 containers: [eba2964f029ac61d9cfb59dc548ae05c832f9b23713f51924291fbfc5de985ed]
	I0116 03:48:20.242819  507339 ssh_runner.go:195] Run: which crictl
	I0116 03:48:20.247312  507339 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0116 03:48:20.247399  507339 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0116 03:48:20.287981  507339 cri.go:89] found id: "802d4c55aa04370ff079f09dfe4670f5dac1142ca6db880afa17be75f1d20a76"
	I0116 03:48:20.288009  507339 cri.go:89] found id: ""
	I0116 03:48:20.288018  507339 logs.go:284] 1 containers: [802d4c55aa04370ff079f09dfe4670f5dac1142ca6db880afa17be75f1d20a76]
	I0116 03:48:20.288097  507339 ssh_runner.go:195] Run: which crictl
	I0116 03:48:20.292370  507339 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0116 03:48:20.292449  507339 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0116 03:48:20.335778  507339 cri.go:89] found id: ""
	I0116 03:48:20.335816  507339 logs.go:284] 0 containers: []
	W0116 03:48:20.335828  507339 logs.go:286] No container was found matching "kindnet"
	I0116 03:48:20.335838  507339 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0116 03:48:20.335906  507339 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0116 03:48:20.381698  507339 cri.go:89] found id: "b7164c1b7732cf6d54642cb9ffc8d9f16696adc0e428c3fa16cf914420d73e1d"
	I0116 03:48:20.381722  507339 cri.go:89] found id: "59754e94eb3cf3fecfedb9077fbad2ecc2618c351fa9bfa3faed0d780fdd9ecb"
	I0116 03:48:20.381727  507339 cri.go:89] found id: ""
	I0116 03:48:20.381734  507339 logs.go:284] 2 containers: [b7164c1b7732cf6d54642cb9ffc8d9f16696adc0e428c3fa16cf914420d73e1d 59754e94eb3cf3fecfedb9077fbad2ecc2618c351fa9bfa3faed0d780fdd9ecb]
	I0116 03:48:20.381790  507339 ssh_runner.go:195] Run: which crictl
	I0116 03:48:20.386880  507339 ssh_runner.go:195] Run: which crictl
	I0116 03:48:20.391292  507339 logs.go:123] Gathering logs for describe nodes ...
	I0116 03:48:20.391324  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0116 03:48:20.528154  507339 logs.go:123] Gathering logs for storage-provisioner [b7164c1b7732cf6d54642cb9ffc8d9f16696adc0e428c3fa16cf914420d73e1d] ...
	I0116 03:48:20.528197  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b7164c1b7732cf6d54642cb9ffc8d9f16696adc0e428c3fa16cf914420d73e1d"
	I0116 03:48:20.586645  507339 logs.go:123] Gathering logs for CRI-O ...
	I0116 03:48:20.586680  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0116 03:48:18.367415  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:20.867678  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:18.325849  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:20.326141  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:20.442138  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:22.442180  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:21.096109  507339 logs.go:123] Gathering logs for kubelet ...
	I0116 03:48:21.096155  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0116 03:48:21.154531  507339 logs.go:123] Gathering logs for coredns [c13ef036a1014a6bbc5c179a0889fb6ff615988713589da58240b7c637adf687] ...
	I0116 03:48:21.154577  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c13ef036a1014a6bbc5c179a0889fb6ff615988713589da58240b7c637adf687"
	I0116 03:48:21.203708  507339 logs.go:123] Gathering logs for dmesg ...
	I0116 03:48:21.203760  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0116 03:48:21.219320  507339 logs.go:123] Gathering logs for etcd [01aaf51cd40b9137f2395404bb15b7372927f81dc8a4e884600dc8cba9bbeb8e] ...
	I0116 03:48:21.219362  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 01aaf51cd40b9137f2395404bb15b7372927f81dc8a4e884600dc8cba9bbeb8e"
	I0116 03:48:21.271759  507339 logs.go:123] Gathering logs for kube-proxy [eba2964f029ac61d9cfb59dc548ae05c832f9b23713f51924291fbfc5de985ed] ...
	I0116 03:48:21.271812  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eba2964f029ac61d9cfb59dc548ae05c832f9b23713f51924291fbfc5de985ed"
	I0116 03:48:21.316786  507339 logs.go:123] Gathering logs for kube-controller-manager [802d4c55aa04370ff079f09dfe4670f5dac1142ca6db880afa17be75f1d20a76] ...
	I0116 03:48:21.316825  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 802d4c55aa04370ff079f09dfe4670f5dac1142ca6db880afa17be75f1d20a76"
	I0116 03:48:21.383743  507339 logs.go:123] Gathering logs for storage-provisioner [59754e94eb3cf3fecfedb9077fbad2ecc2618c351fa9bfa3faed0d780fdd9ecb] ...
	I0116 03:48:21.383783  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 59754e94eb3cf3fecfedb9077fbad2ecc2618c351fa9bfa3faed0d780fdd9ecb"
	I0116 03:48:21.422893  507339 logs.go:123] Gathering logs for kube-apiserver [de79f87bc28449077936386f603cc5df933f63455bc96cf6ff7e7c6e4bd32bc4] ...
	I0116 03:48:21.422926  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de79f87bc28449077936386f603cc5df933f63455bc96cf6ff7e7c6e4bd32bc4"
	I0116 03:48:21.473295  507339 logs.go:123] Gathering logs for kube-scheduler [33381edd7dded1b5ed158738429b4a7f1f03a6171f2af0832bb5a9864c950725] ...
	I0116 03:48:21.473332  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33381edd7dded1b5ed158738429b4a7f1f03a6171f2af0832bb5a9864c950725"
	I0116 03:48:21.527066  507339 logs.go:123] Gathering logs for container status ...
	I0116 03:48:21.527110  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0116 03:48:24.085743  507339 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:48:24.105359  507339 api_server.go:72] duration metric: took 4m17.107229414s to wait for apiserver process to appear ...
	I0116 03:48:24.105395  507339 api_server.go:88] waiting for apiserver healthz status ...
	I0116 03:48:24.105450  507339 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0116 03:48:24.105567  507339 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0116 03:48:24.154626  507339 cri.go:89] found id: "de79f87bc28449077936386f603cc5df933f63455bc96cf6ff7e7c6e4bd32bc4"
	I0116 03:48:24.154659  507339 cri.go:89] found id: ""
	I0116 03:48:24.154668  507339 logs.go:284] 1 containers: [de79f87bc28449077936386f603cc5df933f63455bc96cf6ff7e7c6e4bd32bc4]
	I0116 03:48:24.154720  507339 ssh_runner.go:195] Run: which crictl
	I0116 03:48:24.159657  507339 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0116 03:48:24.159735  507339 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0116 03:48:24.202635  507339 cri.go:89] found id: "01aaf51cd40b9137f2395404bb15b7372927f81dc8a4e884600dc8cba9bbeb8e"
	I0116 03:48:24.202663  507339 cri.go:89] found id: ""
	I0116 03:48:24.202671  507339 logs.go:284] 1 containers: [01aaf51cd40b9137f2395404bb15b7372927f81dc8a4e884600dc8cba9bbeb8e]
	I0116 03:48:24.202725  507339 ssh_runner.go:195] Run: which crictl
	I0116 03:48:24.207503  507339 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0116 03:48:24.207578  507339 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0116 03:48:24.253893  507339 cri.go:89] found id: "c13ef036a1014a6bbc5c179a0889fb6ff615988713589da58240b7c637adf687"
	I0116 03:48:24.253934  507339 cri.go:89] found id: ""
	I0116 03:48:24.253945  507339 logs.go:284] 1 containers: [c13ef036a1014a6bbc5c179a0889fb6ff615988713589da58240b7c637adf687]
	I0116 03:48:24.254016  507339 ssh_runner.go:195] Run: which crictl
	I0116 03:48:24.258649  507339 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0116 03:48:24.258733  507339 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0116 03:48:24.306636  507339 cri.go:89] found id: "33381edd7dded1b5ed158738429b4a7f1f03a6171f2af0832bb5a9864c950725"
	I0116 03:48:24.306662  507339 cri.go:89] found id: ""
	I0116 03:48:24.306670  507339 logs.go:284] 1 containers: [33381edd7dded1b5ed158738429b4a7f1f03a6171f2af0832bb5a9864c950725]
	I0116 03:48:24.306721  507339 ssh_runner.go:195] Run: which crictl
	I0116 03:48:24.311270  507339 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0116 03:48:24.311357  507339 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0116 03:48:24.354635  507339 cri.go:89] found id: "eba2964f029ac61d9cfb59dc548ae05c832f9b23713f51924291fbfc5de985ed"
	I0116 03:48:24.354671  507339 cri.go:89] found id: ""
	I0116 03:48:24.354683  507339 logs.go:284] 1 containers: [eba2964f029ac61d9cfb59dc548ae05c832f9b23713f51924291fbfc5de985ed]
	I0116 03:48:24.354756  507339 ssh_runner.go:195] Run: which crictl
	I0116 03:48:24.359806  507339 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0116 03:48:24.359889  507339 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0116 03:48:24.418188  507339 cri.go:89] found id: "802d4c55aa04370ff079f09dfe4670f5dac1142ca6db880afa17be75f1d20a76"
	I0116 03:48:24.418239  507339 cri.go:89] found id: ""
	I0116 03:48:24.418251  507339 logs.go:284] 1 containers: [802d4c55aa04370ff079f09dfe4670f5dac1142ca6db880afa17be75f1d20a76]
	I0116 03:48:24.418330  507339 ssh_runner.go:195] Run: which crictl
	I0116 03:48:24.422943  507339 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0116 03:48:24.423030  507339 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0116 03:48:24.467349  507339 cri.go:89] found id: ""
	I0116 03:48:24.467383  507339 logs.go:284] 0 containers: []
	W0116 03:48:24.467394  507339 logs.go:286] No container was found matching "kindnet"
	I0116 03:48:24.467403  507339 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0116 03:48:24.467466  507339 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0116 03:48:24.517490  507339 cri.go:89] found id: "b7164c1b7732cf6d54642cb9ffc8d9f16696adc0e428c3fa16cf914420d73e1d"
	I0116 03:48:24.517525  507339 cri.go:89] found id: "59754e94eb3cf3fecfedb9077fbad2ecc2618c351fa9bfa3faed0d780fdd9ecb"
	I0116 03:48:24.517539  507339 cri.go:89] found id: ""
	I0116 03:48:24.517548  507339 logs.go:284] 2 containers: [b7164c1b7732cf6d54642cb9ffc8d9f16696adc0e428c3fa16cf914420d73e1d 59754e94eb3cf3fecfedb9077fbad2ecc2618c351fa9bfa3faed0d780fdd9ecb]
	I0116 03:48:24.517619  507339 ssh_runner.go:195] Run: which crictl
	I0116 03:48:24.521952  507339 ssh_runner.go:195] Run: which crictl
	I0116 03:48:24.526246  507339 logs.go:123] Gathering logs for kube-apiserver [de79f87bc28449077936386f603cc5df933f63455bc96cf6ff7e7c6e4bd32bc4] ...
	I0116 03:48:24.526277  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de79f87bc28449077936386f603cc5df933f63455bc96cf6ff7e7c6e4bd32bc4"
	I0116 03:48:24.583067  507339 logs.go:123] Gathering logs for kube-proxy [eba2964f029ac61d9cfb59dc548ae05c832f9b23713f51924291fbfc5de985ed] ...
	I0116 03:48:24.583108  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eba2964f029ac61d9cfb59dc548ae05c832f9b23713f51924291fbfc5de985ed"
	I0116 03:48:24.631278  507339 logs.go:123] Gathering logs for CRI-O ...
	I0116 03:48:24.631312  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0116 03:48:25.099279  507339 logs.go:123] Gathering logs for describe nodes ...
	I0116 03:48:25.099330  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0116 03:48:25.241388  507339 logs.go:123] Gathering logs for etcd [01aaf51cd40b9137f2395404bb15b7372927f81dc8a4e884600dc8cba9bbeb8e] ...
	I0116 03:48:25.241433  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 01aaf51cd40b9137f2395404bb15b7372927f81dc8a4e884600dc8cba9bbeb8e"
	I0116 03:48:25.298748  507339 logs.go:123] Gathering logs for storage-provisioner [59754e94eb3cf3fecfedb9077fbad2ecc2618c351fa9bfa3faed0d780fdd9ecb] ...
	I0116 03:48:25.298787  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 59754e94eb3cf3fecfedb9077fbad2ecc2618c351fa9bfa3faed0d780fdd9ecb"
	I0116 03:48:25.338169  507339 logs.go:123] Gathering logs for kubelet ...
	I0116 03:48:25.338204  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0116 03:48:25.396275  507339 logs.go:123] Gathering logs for coredns [c13ef036a1014a6bbc5c179a0889fb6ff615988713589da58240b7c637adf687] ...
	I0116 03:48:25.396320  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c13ef036a1014a6bbc5c179a0889fb6ff615988713589da58240b7c637adf687"
	I0116 03:48:25.448028  507339 logs.go:123] Gathering logs for storage-provisioner [b7164c1b7732cf6d54642cb9ffc8d9f16696adc0e428c3fa16cf914420d73e1d] ...
	I0116 03:48:25.448087  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b7164c1b7732cf6d54642cb9ffc8d9f16696adc0e428c3fa16cf914420d73e1d"
	I0116 03:48:25.492640  507339 logs.go:123] Gathering logs for container status ...
	I0116 03:48:25.492673  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0116 03:48:25.541478  507339 logs.go:123] Gathering logs for dmesg ...
	I0116 03:48:25.541572  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0116 03:48:25.557537  507339 logs.go:123] Gathering logs for kube-scheduler [33381edd7dded1b5ed158738429b4a7f1f03a6171f2af0832bb5a9864c950725] ...
	I0116 03:48:25.557569  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33381edd7dded1b5ed158738429b4a7f1f03a6171f2af0832bb5a9864c950725"
	I0116 03:48:25.599921  507339 logs.go:123] Gathering logs for kube-controller-manager [802d4c55aa04370ff079f09dfe4670f5dac1142ca6db880afa17be75f1d20a76] ...
	I0116 03:48:25.599956  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 802d4c55aa04370ff079f09dfe4670f5dac1142ca6db880afa17be75f1d20a76"
	I0116 03:48:23.368308  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:25.368495  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:22.825098  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:24.827094  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:24.942708  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:27.441008  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:29.452010  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:28.158281  507339 api_server.go:253] Checking apiserver healthz at https://192.168.39.103:8443/healthz ...
	I0116 03:48:28.165500  507339 api_server.go:279] https://192.168.39.103:8443/healthz returned 200:
	ok
	I0116 03:48:28.166907  507339 api_server.go:141] control plane version: v1.29.0-rc.2
	I0116 03:48:28.166933  507339 api_server.go:131] duration metric: took 4.061531357s to wait for apiserver health ...
	I0116 03:48:28.166943  507339 system_pods.go:43] waiting for kube-system pods to appear ...
	I0116 03:48:28.166996  507339 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0116 03:48:28.167056  507339 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0116 03:48:28.209247  507339 cri.go:89] found id: "de79f87bc28449077936386f603cc5df933f63455bc96cf6ff7e7c6e4bd32bc4"
	I0116 03:48:28.209282  507339 cri.go:89] found id: ""
	I0116 03:48:28.209295  507339 logs.go:284] 1 containers: [de79f87bc28449077936386f603cc5df933f63455bc96cf6ff7e7c6e4bd32bc4]
	I0116 03:48:28.209361  507339 ssh_runner.go:195] Run: which crictl
	I0116 03:48:28.214044  507339 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0116 03:48:28.214126  507339 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0116 03:48:28.263791  507339 cri.go:89] found id: "01aaf51cd40b9137f2395404bb15b7372927f81dc8a4e884600dc8cba9bbeb8e"
	I0116 03:48:28.263817  507339 cri.go:89] found id: ""
	I0116 03:48:28.263825  507339 logs.go:284] 1 containers: [01aaf51cd40b9137f2395404bb15b7372927f81dc8a4e884600dc8cba9bbeb8e]
	I0116 03:48:28.263889  507339 ssh_runner.go:195] Run: which crictl
	I0116 03:48:28.268803  507339 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0116 03:48:28.268893  507339 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0116 03:48:28.311035  507339 cri.go:89] found id: "c13ef036a1014a6bbc5c179a0889fb6ff615988713589da58240b7c637adf687"
	I0116 03:48:28.311062  507339 cri.go:89] found id: ""
	I0116 03:48:28.311070  507339 logs.go:284] 1 containers: [c13ef036a1014a6bbc5c179a0889fb6ff615988713589da58240b7c637adf687]
	I0116 03:48:28.311132  507339 ssh_runner.go:195] Run: which crictl
	I0116 03:48:28.315791  507339 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0116 03:48:28.315871  507339 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0116 03:48:28.366917  507339 cri.go:89] found id: "33381edd7dded1b5ed158738429b4a7f1f03a6171f2af0832bb5a9864c950725"
	I0116 03:48:28.366947  507339 cri.go:89] found id: ""
	I0116 03:48:28.366957  507339 logs.go:284] 1 containers: [33381edd7dded1b5ed158738429b4a7f1f03a6171f2af0832bb5a9864c950725]
	I0116 03:48:28.367028  507339 ssh_runner.go:195] Run: which crictl
	I0116 03:48:28.372648  507339 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0116 03:48:28.372723  507339 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0116 03:48:28.415530  507339 cri.go:89] found id: "eba2964f029ac61d9cfb59dc548ae05c832f9b23713f51924291fbfc5de985ed"
	I0116 03:48:28.415566  507339 cri.go:89] found id: ""
	I0116 03:48:28.415577  507339 logs.go:284] 1 containers: [eba2964f029ac61d9cfb59dc548ae05c832f9b23713f51924291fbfc5de985ed]
	I0116 03:48:28.415669  507339 ssh_runner.go:195] Run: which crictl
	I0116 03:48:28.420784  507339 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0116 03:48:28.420865  507339 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0116 03:48:28.474238  507339 cri.go:89] found id: "802d4c55aa04370ff079f09dfe4670f5dac1142ca6db880afa17be75f1d20a76"
	I0116 03:48:28.474262  507339 cri.go:89] found id: ""
	I0116 03:48:28.474270  507339 logs.go:284] 1 containers: [802d4c55aa04370ff079f09dfe4670f5dac1142ca6db880afa17be75f1d20a76]
	I0116 03:48:28.474335  507339 ssh_runner.go:195] Run: which crictl
	I0116 03:48:28.479547  507339 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0116 03:48:28.479637  507339 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0116 03:48:28.526403  507339 cri.go:89] found id: ""
	I0116 03:48:28.526436  507339 logs.go:284] 0 containers: []
	W0116 03:48:28.526455  507339 logs.go:286] No container was found matching "kindnet"
	I0116 03:48:28.526466  507339 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0116 03:48:28.526535  507339 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0116 03:48:28.572958  507339 cri.go:89] found id: "b7164c1b7732cf6d54642cb9ffc8d9f16696adc0e428c3fa16cf914420d73e1d"
	I0116 03:48:28.572988  507339 cri.go:89] found id: "59754e94eb3cf3fecfedb9077fbad2ecc2618c351fa9bfa3faed0d780fdd9ecb"
	I0116 03:48:28.572994  507339 cri.go:89] found id: ""
	I0116 03:48:28.573002  507339 logs.go:284] 2 containers: [b7164c1b7732cf6d54642cb9ffc8d9f16696adc0e428c3fa16cf914420d73e1d 59754e94eb3cf3fecfedb9077fbad2ecc2618c351fa9bfa3faed0d780fdd9ecb]
	I0116 03:48:28.573064  507339 ssh_runner.go:195] Run: which crictl
	I0116 03:48:28.579388  507339 ssh_runner.go:195] Run: which crictl
	I0116 03:48:28.585318  507339 logs.go:123] Gathering logs for kube-apiserver [de79f87bc28449077936386f603cc5df933f63455bc96cf6ff7e7c6e4bd32bc4] ...
	I0116 03:48:28.585355  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de79f87bc28449077936386f603cc5df933f63455bc96cf6ff7e7c6e4bd32bc4"
	I0116 03:48:28.640376  507339 logs.go:123] Gathering logs for kube-controller-manager [802d4c55aa04370ff079f09dfe4670f5dac1142ca6db880afa17be75f1d20a76] ...
	I0116 03:48:28.640419  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 802d4c55aa04370ff079f09dfe4670f5dac1142ca6db880afa17be75f1d20a76"
	I0116 03:48:28.701292  507339 logs.go:123] Gathering logs for storage-provisioner [59754e94eb3cf3fecfedb9077fbad2ecc2618c351fa9bfa3faed0d780fdd9ecb] ...
	I0116 03:48:28.701332  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 59754e94eb3cf3fecfedb9077fbad2ecc2618c351fa9bfa3faed0d780fdd9ecb"
	I0116 03:48:28.744571  507339 logs.go:123] Gathering logs for container status ...
	I0116 03:48:28.744605  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0116 03:48:28.794905  507339 logs.go:123] Gathering logs for kubelet ...
	I0116 03:48:28.794942  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0116 03:48:28.847687  507339 logs.go:123] Gathering logs for dmesg ...
	I0116 03:48:28.847736  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0116 03:48:28.861641  507339 logs.go:123] Gathering logs for describe nodes ...
	I0116 03:48:28.861690  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0116 03:48:29.036673  507339 logs.go:123] Gathering logs for kube-proxy [eba2964f029ac61d9cfb59dc548ae05c832f9b23713f51924291fbfc5de985ed] ...
	I0116 03:48:29.036709  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eba2964f029ac61d9cfb59dc548ae05c832f9b23713f51924291fbfc5de985ed"
	I0116 03:48:29.084792  507339 logs.go:123] Gathering logs for CRI-O ...
	I0116 03:48:29.084823  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0116 03:48:29.449656  507339 logs.go:123] Gathering logs for etcd [01aaf51cd40b9137f2395404bb15b7372927f81dc8a4e884600dc8cba9bbeb8e] ...
	I0116 03:48:29.449707  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 01aaf51cd40b9137f2395404bb15b7372927f81dc8a4e884600dc8cba9bbeb8e"
	I0116 03:48:29.502412  507339 logs.go:123] Gathering logs for storage-provisioner [b7164c1b7732cf6d54642cb9ffc8d9f16696adc0e428c3fa16cf914420d73e1d] ...
	I0116 03:48:29.502460  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b7164c1b7732cf6d54642cb9ffc8d9f16696adc0e428c3fa16cf914420d73e1d"
	I0116 03:48:29.546471  507339 logs.go:123] Gathering logs for coredns [c13ef036a1014a6bbc5c179a0889fb6ff615988713589da58240b7c637adf687] ...
	I0116 03:48:29.546520  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c13ef036a1014a6bbc5c179a0889fb6ff615988713589da58240b7c637adf687"
	I0116 03:48:29.594282  507339 logs.go:123] Gathering logs for kube-scheduler [33381edd7dded1b5ed158738429b4a7f1f03a6171f2af0832bb5a9864c950725] ...
	I0116 03:48:29.594329  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33381edd7dded1b5ed158738429b4a7f1f03a6171f2af0832bb5a9864c950725"
	I0116 03:48:27.867485  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:29.868504  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:27.324987  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:29.325330  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:31.329373  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:32.146165  507339 system_pods.go:59] 8 kube-system pods found
	I0116 03:48:32.146209  507339 system_pods.go:61] "coredns-76f75df574-lr95b" [15dc0b11-f7ec-4729-bbfa-79b9649fbad6] Running
	I0116 03:48:32.146218  507339 system_pods.go:61] "etcd-no-preload-666547" [f98dcc57-5869-4a35-b443-2e6c9beddd88] Running
	I0116 03:48:32.146225  507339 system_pods.go:61] "kube-apiserver-no-preload-666547" [3aceae9d-224a-4e8c-a02d-ad4199d8d558] Running
	I0116 03:48:32.146232  507339 system_pods.go:61] "kube-controller-manager-no-preload-666547" [af39c89c-3b41-4d6f-b075-16de04a3ecc0] Running
	I0116 03:48:32.146238  507339 system_pods.go:61] "kube-proxy-dcmrn" [1e91c96f-cbc5-424d-a09e-06e34bf7a2e2] Running
	I0116 03:48:32.146244  507339 system_pods.go:61] "kube-scheduler-no-preload-666547" [0f2aa713-aebc-4858-a348-32169021235e] Running
	I0116 03:48:32.146253  507339 system_pods.go:61] "metrics-server-57f55c9bc5-78vfj" [dbd2d3b2-ec0f-4253-8549-7c4299522c37] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:48:32.146261  507339 system_pods.go:61] "storage-provisioner" [f4e1ba45-217d-41d5-b583-2f60044879bc] Running
	I0116 03:48:32.146272  507339 system_pods.go:74] duration metric: took 3.979321091s to wait for pod list to return data ...
	I0116 03:48:32.146286  507339 default_sa.go:34] waiting for default service account to be created ...
	I0116 03:48:32.149674  507339 default_sa.go:45] found service account: "default"
	I0116 03:48:32.149702  507339 default_sa.go:55] duration metric: took 3.408362ms for default service account to be created ...
	I0116 03:48:32.149710  507339 system_pods.go:116] waiting for k8s-apps to be running ...
	I0116 03:48:32.160459  507339 system_pods.go:86] 8 kube-system pods found
	I0116 03:48:32.160495  507339 system_pods.go:89] "coredns-76f75df574-lr95b" [15dc0b11-f7ec-4729-bbfa-79b9649fbad6] Running
	I0116 03:48:32.160503  507339 system_pods.go:89] "etcd-no-preload-666547" [f98dcc57-5869-4a35-b443-2e6c9beddd88] Running
	I0116 03:48:32.160510  507339 system_pods.go:89] "kube-apiserver-no-preload-666547" [3aceae9d-224a-4e8c-a02d-ad4199d8d558] Running
	I0116 03:48:32.160518  507339 system_pods.go:89] "kube-controller-manager-no-preload-666547" [af39c89c-3b41-4d6f-b075-16de04a3ecc0] Running
	I0116 03:48:32.160524  507339 system_pods.go:89] "kube-proxy-dcmrn" [1e91c96f-cbc5-424d-a09e-06e34bf7a2e2] Running
	I0116 03:48:32.160529  507339 system_pods.go:89] "kube-scheduler-no-preload-666547" [0f2aa713-aebc-4858-a348-32169021235e] Running
	I0116 03:48:32.160540  507339 system_pods.go:89] "metrics-server-57f55c9bc5-78vfj" [dbd2d3b2-ec0f-4253-8549-7c4299522c37] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:48:32.160548  507339 system_pods.go:89] "storage-provisioner" [f4e1ba45-217d-41d5-b583-2f60044879bc] Running
	I0116 03:48:32.160560  507339 system_pods.go:126] duration metric: took 10.843124ms to wait for k8s-apps to be running ...
	I0116 03:48:32.160569  507339 system_svc.go:44] waiting for kubelet service to be running ....
	I0116 03:48:32.160629  507339 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 03:48:32.179349  507339 system_svc.go:56] duration metric: took 18.767357ms WaitForService to wait for kubelet.
	I0116 03:48:32.179391  507339 kubeadm.go:581] duration metric: took 4m25.181271548s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0116 03:48:32.179426  507339 node_conditions.go:102] verifying NodePressure condition ...
	I0116 03:48:32.185135  507339 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 03:48:32.185165  507339 node_conditions.go:123] node cpu capacity is 2
	I0116 03:48:32.185198  507339 node_conditions.go:105] duration metric: took 5.766084ms to run NodePressure ...
	I0116 03:48:32.185219  507339 start.go:228] waiting for startup goroutines ...
	I0116 03:48:32.185228  507339 start.go:233] waiting for cluster config update ...
	I0116 03:48:32.185269  507339 start.go:242] writing updated cluster config ...
	I0116 03:48:32.185860  507339 ssh_runner.go:195] Run: rm -f paused
	I0116 03:48:32.243812  507339 start.go:600] kubectl: 1.29.0, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0116 03:48:32.246056  507339 out.go:177] * Done! kubectl is now configured to use "no-preload-666547" cluster and "default" namespace by default
	I0116 03:48:31.940664  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:33.941163  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:31.868778  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:34.367292  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:33.825761  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:35.829060  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:36.440459  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:38.440778  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:36.367672  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:38.867024  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:40.867193  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:38.325077  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:40.326947  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:40.440990  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:42.942197  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:43.365931  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:45.367057  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:42.826200  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:44.827292  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:45.441601  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:47.443035  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:47.367959  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:49.867083  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:47.326224  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:49.326339  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:49.940592  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:51.942424  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:54.440478  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:51.868254  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:54.368867  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:51.825317  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:52.325756  507889 pod_ready.go:81] duration metric: took 4m0.008011182s waiting for pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace to be "Ready" ...
	E0116 03:48:52.325782  507889 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0116 03:48:52.325790  507889 pod_ready.go:38] duration metric: took 4m4.320002841s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 03:48:52.325804  507889 api_server.go:52] waiting for apiserver process to appear ...
	I0116 03:48:52.325855  507889 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0116 03:48:52.325905  507889 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0116 03:48:52.394600  507889 cri.go:89] found id: "f9861ff0fbab72660648bfcb49b82810bada99f73a55cb0aa359ba114825b8f3"
	I0116 03:48:52.394624  507889 cri.go:89] found id: ""
	I0116 03:48:52.394632  507889 logs.go:284] 1 containers: [f9861ff0fbab72660648bfcb49b82810bada99f73a55cb0aa359ba114825b8f3]
	I0116 03:48:52.394716  507889 ssh_runner.go:195] Run: which crictl
	I0116 03:48:52.400137  507889 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0116 03:48:52.400232  507889 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0116 03:48:52.444453  507889 cri.go:89] found id: "e2758ac4468b10765b35629ffc54228655bc18f79a654cc03e91929ebdc3cf90"
	I0116 03:48:52.444485  507889 cri.go:89] found id: ""
	I0116 03:48:52.444495  507889 logs.go:284] 1 containers: [e2758ac4468b10765b35629ffc54228655bc18f79a654cc03e91929ebdc3cf90]
	I0116 03:48:52.444557  507889 ssh_runner.go:195] Run: which crictl
	I0116 03:48:52.449850  507889 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0116 03:48:52.450002  507889 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0116 03:48:52.499160  507889 cri.go:89] found id: "a07ae23e6e9e3c589264d0be7095896549a4b71fa2d0b06805a3b1b3bea16f6a"
	I0116 03:48:52.499204  507889 cri.go:89] found id: ""
	I0116 03:48:52.499216  507889 logs.go:284] 1 containers: [a07ae23e6e9e3c589264d0be7095896549a4b71fa2d0b06805a3b1b3bea16f6a]
	I0116 03:48:52.499286  507889 ssh_runner.go:195] Run: which crictl
	I0116 03:48:52.504257  507889 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0116 03:48:52.504357  507889 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0116 03:48:52.563747  507889 cri.go:89] found id: "e60387e0e2800d67c99eb3566f31ad675c0db6c22379fc28ee9a447af5f5023c"
	I0116 03:48:52.563782  507889 cri.go:89] found id: ""
	I0116 03:48:52.563790  507889 logs.go:284] 1 containers: [e60387e0e2800d67c99eb3566f31ad675c0db6c22379fc28ee9a447af5f5023c]
	I0116 03:48:52.563860  507889 ssh_runner.go:195] Run: which crictl
	I0116 03:48:52.568676  507889 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0116 03:48:52.568771  507889 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0116 03:48:52.617090  507889 cri.go:89] found id: "44f71a7069827d97c7326cbd36638ec36ff39cbbe0a1e4e61e7c010a38d2e123"
	I0116 03:48:52.617136  507889 cri.go:89] found id: ""
	I0116 03:48:52.617149  507889 logs.go:284] 1 containers: [44f71a7069827d97c7326cbd36638ec36ff39cbbe0a1e4e61e7c010a38d2e123]
	I0116 03:48:52.617222  507889 ssh_runner.go:195] Run: which crictl
	I0116 03:48:52.622121  507889 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0116 03:48:52.622224  507889 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0116 03:48:52.685004  507889 cri.go:89] found id: "1438a3832328a03b94c3d408a2e332c0fb0adab915a9d4f26994e84c0509fac9"
	I0116 03:48:52.685033  507889 cri.go:89] found id: ""
	I0116 03:48:52.685043  507889 logs.go:284] 1 containers: [1438a3832328a03b94c3d408a2e332c0fb0adab915a9d4f26994e84c0509fac9]
	I0116 03:48:52.685113  507889 ssh_runner.go:195] Run: which crictl
	I0116 03:48:52.689837  507889 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0116 03:48:52.689913  507889 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0116 03:48:52.730008  507889 cri.go:89] found id: ""
	I0116 03:48:52.730034  507889 logs.go:284] 0 containers: []
	W0116 03:48:52.730044  507889 logs.go:286] No container was found matching "kindnet"
	I0116 03:48:52.730051  507889 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0116 03:48:52.730120  507889 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0116 03:48:52.780523  507889 cri.go:89] found id: "33ba3a03d878abc86a194defd715bb61a0066e49063e3823002bd3ec421da138"
	I0116 03:48:52.780554  507889 cri.go:89] found id: "a4b27881ef90cea325e8f306fac88a76052eec15a693c291288664b8f4ebcc2a"
	I0116 03:48:52.780562  507889 cri.go:89] found id: ""
	I0116 03:48:52.780571  507889 logs.go:284] 2 containers: [33ba3a03d878abc86a194defd715bb61a0066e49063e3823002bd3ec421da138 a4b27881ef90cea325e8f306fac88a76052eec15a693c291288664b8f4ebcc2a]
	I0116 03:48:52.780641  507889 ssh_runner.go:195] Run: which crictl
	I0116 03:48:52.787305  507889 ssh_runner.go:195] Run: which crictl
	I0116 03:48:52.791352  507889 logs.go:123] Gathering logs for kubelet ...
	I0116 03:48:52.791383  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0116 03:48:52.859099  507889 logs.go:123] Gathering logs for etcd [e2758ac4468b10765b35629ffc54228655bc18f79a654cc03e91929ebdc3cf90] ...
	I0116 03:48:52.859152  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e2758ac4468b10765b35629ffc54228655bc18f79a654cc03e91929ebdc3cf90"
	I0116 03:48:52.912806  507889 logs.go:123] Gathering logs for coredns [a07ae23e6e9e3c589264d0be7095896549a4b71fa2d0b06805a3b1b3bea16f6a] ...
	I0116 03:48:52.912852  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a07ae23e6e9e3c589264d0be7095896549a4b71fa2d0b06805a3b1b3bea16f6a"
	I0116 03:48:52.960880  507889 logs.go:123] Gathering logs for kube-controller-manager [1438a3832328a03b94c3d408a2e332c0fb0adab915a9d4f26994e84c0509fac9] ...
	I0116 03:48:52.960919  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1438a3832328a03b94c3d408a2e332c0fb0adab915a9d4f26994e84c0509fac9"
	I0116 03:48:53.023064  507889 logs.go:123] Gathering logs for CRI-O ...
	I0116 03:48:53.023110  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0116 03:48:53.524890  507889 logs.go:123] Gathering logs for kube-apiserver [f9861ff0fbab72660648bfcb49b82810bada99f73a55cb0aa359ba114825b8f3] ...
	I0116 03:48:53.524934  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f9861ff0fbab72660648bfcb49b82810bada99f73a55cb0aa359ba114825b8f3"
	I0116 03:48:53.587550  507889 logs.go:123] Gathering logs for kube-scheduler [e60387e0e2800d67c99eb3566f31ad675c0db6c22379fc28ee9a447af5f5023c] ...
	I0116 03:48:53.587594  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e60387e0e2800d67c99eb3566f31ad675c0db6c22379fc28ee9a447af5f5023c"
	I0116 03:48:53.627986  507889 logs.go:123] Gathering logs for kube-proxy [44f71a7069827d97c7326cbd36638ec36ff39cbbe0a1e4e61e7c010a38d2e123] ...
	I0116 03:48:53.628029  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44f71a7069827d97c7326cbd36638ec36ff39cbbe0a1e4e61e7c010a38d2e123"
	I0116 03:48:53.671704  507889 logs.go:123] Gathering logs for dmesg ...
	I0116 03:48:53.671739  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0116 03:48:53.686333  507889 logs.go:123] Gathering logs for describe nodes ...
	I0116 03:48:53.686370  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0116 03:48:53.855391  507889 logs.go:123] Gathering logs for storage-provisioner [33ba3a03d878abc86a194defd715bb61a0066e49063e3823002bd3ec421da138] ...
	I0116 03:48:53.855435  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33ba3a03d878abc86a194defd715bb61a0066e49063e3823002bd3ec421da138"
	I0116 03:48:53.906028  507889 logs.go:123] Gathering logs for storage-provisioner [a4b27881ef90cea325e8f306fac88a76052eec15a693c291288664b8f4ebcc2a] ...
	I0116 03:48:53.906064  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a4b27881ef90cea325e8f306fac88a76052eec15a693c291288664b8f4ebcc2a"
	I0116 03:48:53.945386  507889 logs.go:123] Gathering logs for container status ...
	I0116 03:48:53.945419  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0116 03:48:56.498685  507889 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:48:56.516768  507889 api_server.go:72] duration metric: took 4m13.505914609s to wait for apiserver process to appear ...
	I0116 03:48:56.516797  507889 api_server.go:88] waiting for apiserver healthz status ...
	I0116 03:48:56.516836  507889 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0116 03:48:56.516907  507889 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0116 03:48:56.563236  507889 cri.go:89] found id: "f9861ff0fbab72660648bfcb49b82810bada99f73a55cb0aa359ba114825b8f3"
	I0116 03:48:56.563272  507889 cri.go:89] found id: ""
	I0116 03:48:56.563283  507889 logs.go:284] 1 containers: [f9861ff0fbab72660648bfcb49b82810bada99f73a55cb0aa359ba114825b8f3]
	I0116 03:48:56.563356  507889 ssh_runner.go:195] Run: which crictl
	I0116 03:48:56.568012  507889 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0116 03:48:56.568188  507889 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0116 03:48:56.443226  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:58.940353  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:56.868597  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:59.366906  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:56.613095  507889 cri.go:89] found id: "e2758ac4468b10765b35629ffc54228655bc18f79a654cc03e91929ebdc3cf90"
	I0116 03:48:56.613120  507889 cri.go:89] found id: ""
	I0116 03:48:56.613129  507889 logs.go:284] 1 containers: [e2758ac4468b10765b35629ffc54228655bc18f79a654cc03e91929ebdc3cf90]
	I0116 03:48:56.613190  507889 ssh_runner.go:195] Run: which crictl
	I0116 03:48:56.618736  507889 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0116 03:48:56.618827  507889 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0116 03:48:56.672773  507889 cri.go:89] found id: "a07ae23e6e9e3c589264d0be7095896549a4b71fa2d0b06805a3b1b3bea16f6a"
	I0116 03:48:56.672796  507889 cri.go:89] found id: ""
	I0116 03:48:56.672805  507889 logs.go:284] 1 containers: [a07ae23e6e9e3c589264d0be7095896549a4b71fa2d0b06805a3b1b3bea16f6a]
	I0116 03:48:56.672855  507889 ssh_runner.go:195] Run: which crictl
	I0116 03:48:56.679218  507889 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0116 03:48:56.679293  507889 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0116 03:48:56.724517  507889 cri.go:89] found id: "e60387e0e2800d67c99eb3566f31ad675c0db6c22379fc28ee9a447af5f5023c"
	I0116 03:48:56.724547  507889 cri.go:89] found id: ""
	I0116 03:48:56.724555  507889 logs.go:284] 1 containers: [e60387e0e2800d67c99eb3566f31ad675c0db6c22379fc28ee9a447af5f5023c]
	I0116 03:48:56.724622  507889 ssh_runner.go:195] Run: which crictl
	I0116 03:48:56.730061  507889 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0116 03:48:56.730146  507889 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0116 03:48:56.775380  507889 cri.go:89] found id: "44f71a7069827d97c7326cbd36638ec36ff39cbbe0a1e4e61e7c010a38d2e123"
	I0116 03:48:56.775413  507889 cri.go:89] found id: ""
	I0116 03:48:56.775423  507889 logs.go:284] 1 containers: [44f71a7069827d97c7326cbd36638ec36ff39cbbe0a1e4e61e7c010a38d2e123]
	I0116 03:48:56.775494  507889 ssh_runner.go:195] Run: which crictl
	I0116 03:48:56.781085  507889 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0116 03:48:56.781183  507889 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0116 03:48:56.830030  507889 cri.go:89] found id: "1438a3832328a03b94c3d408a2e332c0fb0adab915a9d4f26994e84c0509fac9"
	I0116 03:48:56.830067  507889 cri.go:89] found id: ""
	I0116 03:48:56.830076  507889 logs.go:284] 1 containers: [1438a3832328a03b94c3d408a2e332c0fb0adab915a9d4f26994e84c0509fac9]
	I0116 03:48:56.830163  507889 ssh_runner.go:195] Run: which crictl
	I0116 03:48:56.834956  507889 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0116 03:48:56.835035  507889 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0116 03:48:56.882972  507889 cri.go:89] found id: ""
	I0116 03:48:56.883001  507889 logs.go:284] 0 containers: []
	W0116 03:48:56.883013  507889 logs.go:286] No container was found matching "kindnet"
	I0116 03:48:56.883022  507889 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0116 03:48:56.883095  507889 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0116 03:48:56.925520  507889 cri.go:89] found id: "33ba3a03d878abc86a194defd715bb61a0066e49063e3823002bd3ec421da138"
	I0116 03:48:56.925553  507889 cri.go:89] found id: "a4b27881ef90cea325e8f306fac88a76052eec15a693c291288664b8f4ebcc2a"
	I0116 03:48:56.925560  507889 cri.go:89] found id: ""
	I0116 03:48:56.925574  507889 logs.go:284] 2 containers: [33ba3a03d878abc86a194defd715bb61a0066e49063e3823002bd3ec421da138 a4b27881ef90cea325e8f306fac88a76052eec15a693c291288664b8f4ebcc2a]
	I0116 03:48:56.925656  507889 ssh_runner.go:195] Run: which crictl
	I0116 03:48:56.931331  507889 ssh_runner.go:195] Run: which crictl
	I0116 03:48:56.936492  507889 logs.go:123] Gathering logs for storage-provisioner [a4b27881ef90cea325e8f306fac88a76052eec15a693c291288664b8f4ebcc2a] ...
	I0116 03:48:56.936527  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a4b27881ef90cea325e8f306fac88a76052eec15a693c291288664b8f4ebcc2a"
	I0116 03:48:56.981819  507889 logs.go:123] Gathering logs for kubelet ...
	I0116 03:48:56.981851  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0116 03:48:57.045678  507889 logs.go:123] Gathering logs for dmesg ...
	I0116 03:48:57.045723  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0116 03:48:57.060832  507889 logs.go:123] Gathering logs for etcd [e2758ac4468b10765b35629ffc54228655bc18f79a654cc03e91929ebdc3cf90] ...
	I0116 03:48:57.060872  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e2758ac4468b10765b35629ffc54228655bc18f79a654cc03e91929ebdc3cf90"
	I0116 03:48:57.123644  507889 logs.go:123] Gathering logs for kube-scheduler [e60387e0e2800d67c99eb3566f31ad675c0db6c22379fc28ee9a447af5f5023c] ...
	I0116 03:48:57.123695  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e60387e0e2800d67c99eb3566f31ad675c0db6c22379fc28ee9a447af5f5023c"
	I0116 03:48:57.170173  507889 logs.go:123] Gathering logs for kube-proxy [44f71a7069827d97c7326cbd36638ec36ff39cbbe0a1e4e61e7c010a38d2e123] ...
	I0116 03:48:57.170216  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44f71a7069827d97c7326cbd36638ec36ff39cbbe0a1e4e61e7c010a38d2e123"
	I0116 03:48:57.215434  507889 logs.go:123] Gathering logs for describe nodes ...
	I0116 03:48:57.215470  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0116 03:48:57.370036  507889 logs.go:123] Gathering logs for kube-controller-manager [1438a3832328a03b94c3d408a2e332c0fb0adab915a9d4f26994e84c0509fac9] ...
	I0116 03:48:57.370081  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1438a3832328a03b94c3d408a2e332c0fb0adab915a9d4f26994e84c0509fac9"
	I0116 03:48:57.432988  507889 logs.go:123] Gathering logs for container status ...
	I0116 03:48:57.433048  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0116 03:48:57.485239  507889 logs.go:123] Gathering logs for kube-apiserver [f9861ff0fbab72660648bfcb49b82810bada99f73a55cb0aa359ba114825b8f3] ...
	I0116 03:48:57.485284  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f9861ff0fbab72660648bfcb49b82810bada99f73a55cb0aa359ba114825b8f3"
	I0116 03:48:57.547192  507889 logs.go:123] Gathering logs for coredns [a07ae23e6e9e3c589264d0be7095896549a4b71fa2d0b06805a3b1b3bea16f6a] ...
	I0116 03:48:57.547237  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a07ae23e6e9e3c589264d0be7095896549a4b71fa2d0b06805a3b1b3bea16f6a"
	I0116 03:48:57.598025  507889 logs.go:123] Gathering logs for storage-provisioner [33ba3a03d878abc86a194defd715bb61a0066e49063e3823002bd3ec421da138] ...
	I0116 03:48:57.598085  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33ba3a03d878abc86a194defd715bb61a0066e49063e3823002bd3ec421da138"
	I0116 03:48:57.644234  507889 logs.go:123] Gathering logs for CRI-O ...
	I0116 03:48:57.644271  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0116 03:49:00.562219  507889 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8444/healthz ...
	I0116 03:49:00.568196  507889 api_server.go:279] https://192.168.50.236:8444/healthz returned 200:
	ok
	I0116 03:49:00.571612  507889 api_server.go:141] control plane version: v1.28.4
	I0116 03:49:00.571655  507889 api_server.go:131] duration metric: took 4.0548511s to wait for apiserver health ...
	I0116 03:49:00.571668  507889 system_pods.go:43] waiting for kube-system pods to appear ...
	I0116 03:49:00.571701  507889 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0116 03:49:00.571774  507889 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0116 03:49:00.623308  507889 cri.go:89] found id: "f9861ff0fbab72660648bfcb49b82810bada99f73a55cb0aa359ba114825b8f3"
	I0116 03:49:00.623344  507889 cri.go:89] found id: ""
	I0116 03:49:00.623355  507889 logs.go:284] 1 containers: [f9861ff0fbab72660648bfcb49b82810bada99f73a55cb0aa359ba114825b8f3]
	I0116 03:49:00.623418  507889 ssh_runner.go:195] Run: which crictl
	I0116 03:49:00.630287  507889 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0116 03:49:00.630381  507889 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0116 03:49:00.673225  507889 cri.go:89] found id: "e2758ac4468b10765b35629ffc54228655bc18f79a654cc03e91929ebdc3cf90"
	I0116 03:49:00.673265  507889 cri.go:89] found id: ""
	I0116 03:49:00.673276  507889 logs.go:284] 1 containers: [e2758ac4468b10765b35629ffc54228655bc18f79a654cc03e91929ebdc3cf90]
	I0116 03:49:00.673334  507889 ssh_runner.go:195] Run: which crictl
	I0116 03:49:00.678677  507889 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0116 03:49:00.678768  507889 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0116 03:49:00.723055  507889 cri.go:89] found id: "a07ae23e6e9e3c589264d0be7095896549a4b71fa2d0b06805a3b1b3bea16f6a"
	I0116 03:49:00.723081  507889 cri.go:89] found id: ""
	I0116 03:49:00.723089  507889 logs.go:284] 1 containers: [a07ae23e6e9e3c589264d0be7095896549a4b71fa2d0b06805a3b1b3bea16f6a]
	I0116 03:49:00.723148  507889 ssh_runner.go:195] Run: which crictl
	I0116 03:49:00.727931  507889 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0116 03:49:00.728053  507889 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0116 03:49:00.777602  507889 cri.go:89] found id: "e60387e0e2800d67c99eb3566f31ad675c0db6c22379fc28ee9a447af5f5023c"
	I0116 03:49:00.777639  507889 cri.go:89] found id: ""
	I0116 03:49:00.777651  507889 logs.go:284] 1 containers: [e60387e0e2800d67c99eb3566f31ad675c0db6c22379fc28ee9a447af5f5023c]
	I0116 03:49:00.777723  507889 ssh_runner.go:195] Run: which crictl
	I0116 03:49:00.787121  507889 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0116 03:49:00.787206  507889 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0116 03:49:00.835268  507889 cri.go:89] found id: "44f71a7069827d97c7326cbd36638ec36ff39cbbe0a1e4e61e7c010a38d2e123"
	I0116 03:49:00.835300  507889 cri.go:89] found id: ""
	I0116 03:49:00.835310  507889 logs.go:284] 1 containers: [44f71a7069827d97c7326cbd36638ec36ff39cbbe0a1e4e61e7c010a38d2e123]
	I0116 03:49:00.835378  507889 ssh_runner.go:195] Run: which crictl
	I0116 03:49:00.842204  507889 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0116 03:49:00.842299  507889 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0116 03:49:00.889511  507889 cri.go:89] found id: "1438a3832328a03b94c3d408a2e332c0fb0adab915a9d4f26994e84c0509fac9"
	I0116 03:49:00.889541  507889 cri.go:89] found id: ""
	I0116 03:49:00.889551  507889 logs.go:284] 1 containers: [1438a3832328a03b94c3d408a2e332c0fb0adab915a9d4f26994e84c0509fac9]
	I0116 03:49:00.889620  507889 ssh_runner.go:195] Run: which crictl
	I0116 03:49:00.894964  507889 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0116 03:49:00.895059  507889 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0116 03:49:00.937187  507889 cri.go:89] found id: ""
	I0116 03:49:00.937221  507889 logs.go:284] 0 containers: []
	W0116 03:49:00.937237  507889 logs.go:286] No container was found matching "kindnet"
	I0116 03:49:00.937246  507889 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0116 03:49:00.937313  507889 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0116 03:49:00.977711  507889 cri.go:89] found id: "33ba3a03d878abc86a194defd715bb61a0066e49063e3823002bd3ec421da138"
	I0116 03:49:00.977740  507889 cri.go:89] found id: "a4b27881ef90cea325e8f306fac88a76052eec15a693c291288664b8f4ebcc2a"
	I0116 03:49:00.977748  507889 cri.go:89] found id: ""
	I0116 03:49:00.977756  507889 logs.go:284] 2 containers: [33ba3a03d878abc86a194defd715bb61a0066e49063e3823002bd3ec421da138 a4b27881ef90cea325e8f306fac88a76052eec15a693c291288664b8f4ebcc2a]
	I0116 03:49:00.977834  507889 ssh_runner.go:195] Run: which crictl
	I0116 03:49:00.982886  507889 ssh_runner.go:195] Run: which crictl
	I0116 03:49:00.988008  507889 logs.go:123] Gathering logs for describe nodes ...
	I0116 03:49:00.988061  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0116 03:49:01.115755  507889 logs.go:123] Gathering logs for dmesg ...
	I0116 03:49:01.115791  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0116 03:49:01.131706  507889 logs.go:123] Gathering logs for etcd [e2758ac4468b10765b35629ffc54228655bc18f79a654cc03e91929ebdc3cf90] ...
	I0116 03:49:01.131748  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e2758ac4468b10765b35629ffc54228655bc18f79a654cc03e91929ebdc3cf90"
	I0116 03:49:01.186279  507889 logs.go:123] Gathering logs for kube-scheduler [e60387e0e2800d67c99eb3566f31ad675c0db6c22379fc28ee9a447af5f5023c] ...
	I0116 03:49:01.186324  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e60387e0e2800d67c99eb3566f31ad675c0db6c22379fc28ee9a447af5f5023c"
	I0116 03:49:01.231057  507889 logs.go:123] Gathering logs for kube-controller-manager [1438a3832328a03b94c3d408a2e332c0fb0adab915a9d4f26994e84c0509fac9] ...
	I0116 03:49:01.231100  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1438a3832328a03b94c3d408a2e332c0fb0adab915a9d4f26994e84c0509fac9"
	I0116 03:49:01.307541  507889 logs.go:123] Gathering logs for storage-provisioner [33ba3a03d878abc86a194defd715bb61a0066e49063e3823002bd3ec421da138] ...
	I0116 03:49:01.307586  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33ba3a03d878abc86a194defd715bb61a0066e49063e3823002bd3ec421da138"
	I0116 03:49:01.356517  507889 logs.go:123] Gathering logs for storage-provisioner [a4b27881ef90cea325e8f306fac88a76052eec15a693c291288664b8f4ebcc2a] ...
	I0116 03:49:01.356563  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a4b27881ef90cea325e8f306fac88a76052eec15a693c291288664b8f4ebcc2a"
	I0116 03:49:01.409790  507889 logs.go:123] Gathering logs for kube-apiserver [f9861ff0fbab72660648bfcb49b82810bada99f73a55cb0aa359ba114825b8f3] ...
	I0116 03:49:01.409846  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f9861ff0fbab72660648bfcb49b82810bada99f73a55cb0aa359ba114825b8f3"
	I0116 03:49:01.462029  507889 logs.go:123] Gathering logs for CRI-O ...
	I0116 03:49:01.462077  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0116 03:49:00.942100  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:49:02.942316  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:49:01.838933  507889 logs.go:123] Gathering logs for coredns [a07ae23e6e9e3c589264d0be7095896549a4b71fa2d0b06805a3b1b3bea16f6a] ...
	I0116 03:49:01.838999  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a07ae23e6e9e3c589264d0be7095896549a4b71fa2d0b06805a3b1b3bea16f6a"
	I0116 03:49:01.884022  507889 logs.go:123] Gathering logs for kube-proxy [44f71a7069827d97c7326cbd36638ec36ff39cbbe0a1e4e61e7c010a38d2e123] ...
	I0116 03:49:01.884075  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44f71a7069827d97c7326cbd36638ec36ff39cbbe0a1e4e61e7c010a38d2e123"
	I0116 03:49:01.930032  507889 logs.go:123] Gathering logs for container status ...
	I0116 03:49:01.930090  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0116 03:49:01.998827  507889 logs.go:123] Gathering logs for kubelet ...
	I0116 03:49:01.998863  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0116 03:49:04.573529  507889 system_pods.go:59] 8 kube-system pods found
	I0116 03:49:04.573571  507889 system_pods.go:61] "coredns-5dd5756b68-pmx8n" [9d3c9941-8938-4d4a-b6d8-9b542ca6f1ca] Running
	I0116 03:49:04.573579  507889 system_pods.go:61] "etcd-default-k8s-diff-port-434445" [a2327159-d034-4f62-a8f5-14062a41c75d] Running
	I0116 03:49:04.573587  507889 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-434445" [75c5fc5e-86b1-42cf-bb5c-8a1f8b4e13db] Running
	I0116 03:49:04.573594  507889 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-434445" [2568dd4f-124d-4c4f-8651-3b89b2b54983] Running
	I0116 03:49:04.573600  507889 system_pods.go:61] "kube-proxy-dcbqg" [eba1f9bf-6aa7-40cd-b57c-745c2d0cc414] Running
	I0116 03:49:04.573607  507889 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-434445" [479952a1-8c3d-49b4-bd22-fef988e6b83d] Running
	I0116 03:49:04.573617  507889 system_pods.go:61] "metrics-server-57f55c9bc5-894n2" [46e4892a-d026-4a9d-88bc-128e92848808] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:49:04.573626  507889 system_pods.go:61] "storage-provisioner" [16fd4585-3d75-40c3-a28d-4134375f4e3d] Running
	I0116 03:49:04.573638  507889 system_pods.go:74] duration metric: took 4.001961367s to wait for pod list to return data ...
	I0116 03:49:04.573657  507889 default_sa.go:34] waiting for default service account to be created ...
	I0116 03:49:04.577012  507889 default_sa.go:45] found service account: "default"
	I0116 03:49:04.577041  507889 default_sa.go:55] duration metric: took 3.376395ms for default service account to be created ...
	I0116 03:49:04.577051  507889 system_pods.go:116] waiting for k8s-apps to be running ...
	I0116 03:49:04.583833  507889 system_pods.go:86] 8 kube-system pods found
	I0116 03:49:04.583880  507889 system_pods.go:89] "coredns-5dd5756b68-pmx8n" [9d3c9941-8938-4d4a-b6d8-9b542ca6f1ca] Running
	I0116 03:49:04.583890  507889 system_pods.go:89] "etcd-default-k8s-diff-port-434445" [a2327159-d034-4f62-a8f5-14062a41c75d] Running
	I0116 03:49:04.583898  507889 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-434445" [75c5fc5e-86b1-42cf-bb5c-8a1f8b4e13db] Running
	I0116 03:49:04.583905  507889 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-434445" [2568dd4f-124d-4c4f-8651-3b89b2b54983] Running
	I0116 03:49:04.583911  507889 system_pods.go:89] "kube-proxy-dcbqg" [eba1f9bf-6aa7-40cd-b57c-745c2d0cc414] Running
	I0116 03:49:04.583918  507889 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-434445" [479952a1-8c3d-49b4-bd22-fef988e6b83d] Running
	I0116 03:49:04.583928  507889 system_pods.go:89] "metrics-server-57f55c9bc5-894n2" [46e4892a-d026-4a9d-88bc-128e92848808] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:49:04.583936  507889 system_pods.go:89] "storage-provisioner" [16fd4585-3d75-40c3-a28d-4134375f4e3d] Running
	I0116 03:49:04.583950  507889 system_pods.go:126] duration metric: took 6.89136ms to wait for k8s-apps to be running ...
	I0116 03:49:04.583964  507889 system_svc.go:44] waiting for kubelet service to be running ....
	I0116 03:49:04.584016  507889 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 03:49:04.600209  507889 system_svc.go:56] duration metric: took 16.229333ms WaitForService to wait for kubelet.
	I0116 03:49:04.600252  507889 kubeadm.go:581] duration metric: took 4m21.589410808s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0116 03:49:04.600285  507889 node_conditions.go:102] verifying NodePressure condition ...
	I0116 03:49:04.603774  507889 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 03:49:04.603803  507889 node_conditions.go:123] node cpu capacity is 2
	I0116 03:49:04.603815  507889 node_conditions.go:105] duration metric: took 3.52526ms to run NodePressure ...
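The NodePressure check above simply reads back the capacity the node reports (2 CPUs and 17784752Ki of ephemeral storage here). A quick manual equivalent, assuming the kubectl context is named after the profile as minikube normally configures it, is:

    kubectl --context default-k8s-diff-port-434445 get nodes \
      -o jsonpath='{range .items[*]}{.metadata.name}{": "}{.status.capacity}{"\n"}{end}'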
	I0116 03:49:04.603829  507889 start.go:228] waiting for startup goroutines ...
	I0116 03:49:04.603836  507889 start.go:233] waiting for cluster config update ...
	I0116 03:49:04.603849  507889 start.go:242] writing updated cluster config ...
	I0116 03:49:04.604185  507889 ssh_runner.go:195] Run: rm -f paused
	I0116 03:49:04.658922  507889 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I0116 03:49:04.661265  507889 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-434445" cluster and "default" namespace by default
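The skew note two lines up is informational: the host kubectl (1.29.0) is one minor version ahead of the cluster (1.28.4), which is within kubectl's supported +/-1 skew. To confirm the pairing for this profile one could run (standard kubectl/minikube commands, not taken from the log):

    kubectl version --context default-k8s-diff-port-434445
    # or let minikube supply a client that matches the cluster:
    minikube kubectl -p default-k8s-diff-port-434445 -- version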
	I0116 03:49:01.367935  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:49:03.867391  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:49:05.867519  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:49:05.440602  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:49:07.441041  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:49:08.434235  507510 pod_ready.go:81] duration metric: took 4m0.001038173s waiting for pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace to be "Ready" ...
	E0116 03:49:08.434278  507510 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace to be "Ready" (will not retry!)
	I0116 03:49:08.434304  507510 pod_ready.go:38] duration metric: took 4m1.20014772s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 03:49:08.434338  507510 kubeadm.go:640] restartCluster took 5m11.767236835s
	W0116 03:49:08.434423  507510 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0116 03:49:08.434463  507510 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0116 03:49:07.868307  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:49:10.367347  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:49:15.339252  507510 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (6.904753674s)
	I0116 03:49:15.339341  507510 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 03:49:15.355684  507510 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0116 03:49:15.371377  507510 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0116 03:49:15.393609  507510 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
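The failed ls above is expected: the preceding 'kubeadm reset' removed the kubeconfigs, so there is no stale config to clean up and the code falls through to a fresh 'kubeadm init'. After that init completes, the kubeconfigs and static-pod manifests should be back in place; a hypothetical spot-check on the node for this profile (old-k8s-version-696770) would be:

    minikube -p old-k8s-version-696770 ssh "sudo ls /etc/kubernetes"            # admin.conf, kubelet.conf, controller-manager.conf, scheduler.conf
    minikube -p old-k8s-version-696770 ssh "sudo ls /etc/kubernetes/manifests"  # etcd.yaml, kube-apiserver.yaml, kube-controller-manager.yaml, kube-scheduler.yaml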
	I0116 03:49:15.393674  507510 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0116 03:49:15.478382  507510 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0116 03:49:15.478464  507510 kubeadm.go:322] [preflight] Running pre-flight checks
	I0116 03:49:15.663487  507510 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0116 03:49:15.663663  507510 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0116 03:49:15.663803  507510 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0116 03:49:15.940677  507510 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0116 03:49:15.940857  507510 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0116 03:49:15.949553  507510 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0116 03:49:16.075111  507510 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0116 03:49:12.867512  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:49:13.859320  507257 pod_ready.go:81] duration metric: took 4m0.000451049s waiting for pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace to be "Ready" ...
	E0116 03:49:13.859353  507257 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace to be "Ready" (will not retry!)
	I0116 03:49:13.859375  507257 pod_ready.go:38] duration metric: took 4m12.063407854s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 03:49:13.859418  507257 kubeadm.go:640] restartCluster took 4m32.047022773s
	W0116 03:49:13.859484  507257 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0116 03:49:13.859513  507257 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0116 03:49:16.077099  507510 out.go:204]   - Generating certificates and keys ...
	I0116 03:49:16.077224  507510 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0116 03:49:16.077305  507510 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0116 03:49:16.077410  507510 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0116 03:49:16.077504  507510 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0116 03:49:16.077617  507510 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0116 03:49:16.077745  507510 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0116 03:49:16.078085  507510 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0116 03:49:16.078639  507510 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0116 03:49:16.079112  507510 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0116 03:49:16.079719  507510 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0116 03:49:16.079935  507510 kubeadm.go:322] [certs] Using the existing "sa" key
	I0116 03:49:16.080015  507510 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0116 03:49:16.246902  507510 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0116 03:49:16.332722  507510 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0116 03:49:16.534277  507510 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0116 03:49:16.908642  507510 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0116 03:49:16.909711  507510 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0116 03:49:16.911960  507510 out.go:204]   - Booting up control plane ...
	I0116 03:49:16.912103  507510 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0116 03:49:16.923200  507510 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0116 03:49:16.924797  507510 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0116 03:49:16.926738  507510 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0116 03:49:16.937544  507510 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0116 03:49:27.943253  507510 kubeadm.go:322] [apiclient] All control plane components are healthy after 11.005405 seconds
	I0116 03:49:27.943474  507510 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0116 03:49:27.970644  507510 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I0116 03:49:28.500660  507510 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0116 03:49:28.500847  507510 kubeadm.go:322] [mark-control-plane] Marking the node old-k8s-version-696770 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0116 03:49:29.015036  507510 kubeadm.go:322] [bootstrap-token] Using token: nr2yh0.22ni19zxk2s7hw9l
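At this point kubeadm has reported all control-plane components healthy (after 11.005405 seconds, a few lines up) and is moving on to bootstrap tokens. A direct way to double-check from the node, reusing the kubectl binary location the log already shows and the admin kubeconfig kubeadm just wrote, would be roughly:

    minikube -p old-k8s-version-696770 ssh \
      "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/etc/kubernetes/admin.conf get pods -n kube-system"
    minikube -p old-k8s-version-696770 ssh \
      "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/etc/kubernetes/admin.conf get --raw=/healthz"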
	I0116 03:49:28.504409  507257 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (14.644866985s)
	I0116 03:49:28.504498  507257 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 03:49:28.519788  507257 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0116 03:49:28.531667  507257 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0116 03:49:28.543058  507257 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0116 03:49:28.543113  507257 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0116 03:49:28.603369  507257 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0116 03:49:28.603521  507257 kubeadm.go:322] [preflight] Running pre-flight checks
	I0116 03:49:28.784258  507257 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0116 03:49:28.784384  507257 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0116 03:49:28.784491  507257 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0116 03:49:29.068390  507257 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0116 03:49:29.017077  507510 out.go:204]   - Configuring RBAC rules ...
	I0116 03:49:29.017276  507510 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0116 03:49:29.044200  507510 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0116 03:49:29.049807  507510 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0116 03:49:29.054441  507510 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0116 03:49:29.057939  507510 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0116 03:49:29.142810  507510 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0116 03:49:29.439580  507510 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0116 03:49:29.441665  507510 kubeadm.go:322] 
	I0116 03:49:29.441736  507510 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0116 03:49:29.441741  507510 kubeadm.go:322] 
	I0116 03:49:29.441863  507510 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0116 03:49:29.441898  507510 kubeadm.go:322] 
	I0116 03:49:29.441932  507510 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0116 03:49:29.441999  507510 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0116 03:49:29.442057  507510 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0116 03:49:29.442099  507510 kubeadm.go:322] 
	I0116 03:49:29.442200  507510 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0116 03:49:29.442306  507510 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0116 03:49:29.442414  507510 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0116 03:49:29.442429  507510 kubeadm.go:322] 
	I0116 03:49:29.442566  507510 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities 
	I0116 03:49:29.442689  507510 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0116 03:49:29.442701  507510 kubeadm.go:322] 
	I0116 03:49:29.442813  507510 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token nr2yh0.22ni19zxk2s7hw9l \
	I0116 03:49:29.442967  507510 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:ab75761e1119a1709a216b682cd7b1dcce0641115df5f236b54b16e4f66aa044 \
	I0116 03:49:29.443008  507510 kubeadm.go:322]     --control-plane 	  
	I0116 03:49:29.443024  507510 kubeadm.go:322] 
	I0116 03:49:29.443147  507510 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0116 03:49:29.443159  507510 kubeadm.go:322] 
	I0116 03:49:29.443285  507510 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token nr2yh0.22ni19zxk2s7hw9l \
	I0116 03:49:29.443414  507510 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:ab75761e1119a1709a216b682cd7b1dcce0641115df5f236b54b16e4f66aa044 
	I0116 03:49:29.444142  507510 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
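The join command printed above embeds a bootstrap token and the sha256 of the cluster CA public key; notably the same --discovery-token-ca-cert-hash appears for the embed-certs cluster later in this log, consistent with minikube reusing one CA across profiles in the same MINIKUBE_HOME. Both values can be reproduced later if needed; a sketch using the paths this log already shows (kubeadm under /var/lib/minikube/binaries, certs under /var/lib/minikube/certs) and assuming an RSA CA and openssl on the node:

    minikube -p old-k8s-version-696770 ssh \
      "sudo /var/lib/minikube/binaries/v1.16.0/kubeadm token create --print-join-command"
    # discovery hash = sha256 over the DER-encoded CA public key:
    minikube -p old-k8s-version-696770 ssh \
      "openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex"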
	I0116 03:49:29.444278  507510 cni.go:84] Creating CNI manager for ""
	I0116 03:49:29.444302  507510 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 03:49:29.446569  507510 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0116 03:49:29.447957  507510 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0116 03:49:29.457418  507510 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
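The 457-byte file pushed above is the bridge CNI configuration minikube generates when the kvm2 driver is paired with the crio runtime (the "recommending bridge" line above). Such a conflist typically carries a bridge plugin with host-local IPAM plus portmap, but the exact contents here are whatever minikube wrote; they can be read back from the node directly:

    minikube -p old-k8s-version-696770 ssh "sudo cat /etc/cni/net.d/1-k8s.conflist"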
	I0116 03:49:29.478015  507510 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0116 03:49:29.478130  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:29.478135  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=6e8fa5f64d0e7272be43ff25ed3826261f0a2578 minikube.k8s.io/name=old-k8s-version-696770 minikube.k8s.io/updated_at=2024_01_16T03_49_29_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:29.070681  507257 out.go:204]   - Generating certificates and keys ...
	I0116 03:49:29.070805  507257 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0116 03:49:29.070882  507257 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0116 03:49:29.071007  507257 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0116 03:49:29.071108  507257 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0116 03:49:29.071243  507257 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0116 03:49:29.071320  507257 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0116 03:49:29.071422  507257 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0116 03:49:29.071497  507257 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0116 03:49:29.071928  507257 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0116 03:49:29.074454  507257 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0116 03:49:29.076202  507257 kubeadm.go:322] [certs] Using the existing "sa" key
	I0116 03:49:29.076435  507257 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0116 03:49:29.360527  507257 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0116 03:49:29.779361  507257 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0116 03:49:29.976749  507257 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0116 03:49:30.075605  507257 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0116 03:49:30.076375  507257 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0116 03:49:30.079235  507257 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0116 03:49:30.081497  507257 out.go:204]   - Booting up control plane ...
	I0116 03:49:30.081645  507257 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0116 03:49:30.082340  507257 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0116 03:49:30.083349  507257 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0116 03:49:30.103660  507257 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0116 03:49:30.104863  507257 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0116 03:49:30.104924  507257 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0116 03:49:30.229980  507257 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0116 03:49:29.724417  507510 ops.go:34] apiserver oom_adj: -16
	I0116 03:49:29.724549  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:30.224988  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:30.725451  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:31.225287  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:31.724689  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:32.224984  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:32.724769  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:33.225547  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:33.724874  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:34.225301  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:34.725134  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:35.224977  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:35.724998  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:36.225495  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:36.725043  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:37.224700  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:37.725397  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:38.225311  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:38.725308  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:39.224885  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:38.732431  507257 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.502537 seconds
	I0116 03:49:38.732591  507257 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0116 03:49:38.766319  507257 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0116 03:49:39.312926  507257 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0116 03:49:39.313225  507257 kubeadm.go:322] [mark-control-plane] Marking the node embed-certs-615980 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0116 03:49:39.836927  507257 kubeadm.go:322] [bootstrap-token] Using token: 8bzdm1.4lwyoxck7xjn6vqr
	I0116 03:49:39.838931  507257 out.go:204]   - Configuring RBAC rules ...
	I0116 03:49:39.839093  507257 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0116 03:49:39.850909  507257 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0116 03:49:39.873417  507257 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0116 03:49:39.879093  507257 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0116 03:49:39.883914  507257 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0116 03:49:39.889130  507257 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0116 03:49:39.910444  507257 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0116 03:49:40.235572  507257 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0116 03:49:40.334951  507257 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0116 03:49:40.335000  507257 kubeadm.go:322] 
	I0116 03:49:40.335092  507257 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0116 03:49:40.335103  507257 kubeadm.go:322] 
	I0116 03:49:40.335212  507257 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0116 03:49:40.335222  507257 kubeadm.go:322] 
	I0116 03:49:40.335266  507257 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0116 03:49:40.335353  507257 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0116 03:49:40.335421  507257 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0116 03:49:40.335430  507257 kubeadm.go:322] 
	I0116 03:49:40.335504  507257 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0116 03:49:40.335513  507257 kubeadm.go:322] 
	I0116 03:49:40.335598  507257 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0116 03:49:40.335618  507257 kubeadm.go:322] 
	I0116 03:49:40.335690  507257 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0116 03:49:40.335793  507257 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0116 03:49:40.335891  507257 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0116 03:49:40.335904  507257 kubeadm.go:322] 
	I0116 03:49:40.336008  507257 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0116 03:49:40.336128  507257 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0116 03:49:40.336143  507257 kubeadm.go:322] 
	I0116 03:49:40.336262  507257 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 8bzdm1.4lwyoxck7xjn6vqr \
	I0116 03:49:40.336427  507257 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:ab75761e1119a1709a216b682cd7b1dcce0641115df5f236b54b16e4f66aa044 \
	I0116 03:49:40.336456  507257 kubeadm.go:322] 	--control-plane 
	I0116 03:49:40.336463  507257 kubeadm.go:322] 
	I0116 03:49:40.336594  507257 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0116 03:49:40.336611  507257 kubeadm.go:322] 
	I0116 03:49:40.336744  507257 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 8bzdm1.4lwyoxck7xjn6vqr \
	I0116 03:49:40.336876  507257 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:ab75761e1119a1709a216b682cd7b1dcce0641115df5f236b54b16e4f66aa044 
	I0116 03:49:40.337377  507257 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0116 03:49:40.337421  507257 cni.go:84] Creating CNI manager for ""
	I0116 03:49:40.337432  507257 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 03:49:40.340415  507257 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0116 03:49:40.341952  507257 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0116 03:49:40.376620  507257 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0116 03:49:40.459091  507257 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0116 03:49:40.459177  507257 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:40.459233  507257 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=6e8fa5f64d0e7272be43ff25ed3826261f0a2578 minikube.k8s.io/name=embed-certs-615980 minikube.k8s.io/updated_at=2024_01_16T03_49_40_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:40.524693  507257 ops.go:34] apiserver oom_adj: -16
	I0116 03:49:40.917890  507257 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:39.725272  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:40.225380  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:40.725272  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:41.225258  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:41.725525  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:42.225270  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:42.725463  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:43.224674  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:43.724904  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:44.224946  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:44.725197  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:44.843354  507510 kubeadm.go:1088] duration metric: took 15.365308355s to wait for elevateKubeSystemPrivileges.
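The half-second polling above ("kubectl get sa default" roughly every 500ms) appears to be minikube waiting for the default ServiceAccount to exist after granting cluster-admin to kube-system:default through the minikube-rbac ClusterRoleBinding created a few lines earlier. The same checks can be run by hand with the binary and kubeconfig paths the log already uses:

    minikube -p old-k8s-version-696770 ssh \
      "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get sa default"
    minikube -p old-k8s-version-696770 ssh \
      "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get clusterrolebinding minikube-rbac"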
	I0116 03:49:44.843465  507510 kubeadm.go:406] StartCluster complete in 5m48.250275121s
	I0116 03:49:44.843545  507510 settings.go:142] acquiring lock: {Name:mk3ded99c06d285b174e76622bb0741c8dd2d8fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:49:44.843708  507510 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17965-468241/kubeconfig
	I0116 03:49:44.846444  507510 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17965-468241/kubeconfig: {Name:mkac3c63bbcba9b4fa1bea3480797757d703fb9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:49:44.846814  507510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0116 03:49:44.846959  507510 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0116 03:49:44.847043  507510 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-696770"
	I0116 03:49:44.847067  507510 addons.go:234] Setting addon storage-provisioner=true in "old-k8s-version-696770"
	I0116 03:49:44.847065  507510 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-696770"
	W0116 03:49:44.847076  507510 addons.go:243] addon storage-provisioner should already be in state true
	I0116 03:49:44.847079  507510 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-696770"
	I0116 03:49:44.847099  507510 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-696770"
	I0116 03:49:44.847108  507510 addons.go:234] Setting addon metrics-server=true in "old-k8s-version-696770"
	W0116 03:49:44.847130  507510 addons.go:243] addon metrics-server should already be in state true
	I0116 03:49:44.847152  507510 host.go:66] Checking if "old-k8s-version-696770" exists ...
	I0116 03:49:44.847087  507510 config.go:182] Loaded profile config "old-k8s-version-696770": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0116 03:49:44.847178  507510 host.go:66] Checking if "old-k8s-version-696770" exists ...
	I0116 03:49:44.847548  507510 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:49:44.847568  507510 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:49:44.847579  507510 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:49:44.847594  507510 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:49:44.847605  507510 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:49:44.847632  507510 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:49:44.865585  507510 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36045
	I0116 03:49:44.865597  507510 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45289
	I0116 03:49:44.865592  507510 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39579
	I0116 03:49:44.866119  507510 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:49:44.866200  507510 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:49:44.866352  507510 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:49:44.867018  507510 main.go:141] libmachine: Using API Version  1
	I0116 03:49:44.867040  507510 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:49:44.867043  507510 main.go:141] libmachine: Using API Version  1
	I0116 03:49:44.867051  507510 main.go:141] libmachine: Using API Version  1
	I0116 03:49:44.867071  507510 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:49:44.867091  507510 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:49:44.867481  507510 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:49:44.867557  507510 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:49:44.867711  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetState
	I0116 03:49:44.867929  507510 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:49:44.868184  507510 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:49:44.868215  507510 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:49:44.868486  507510 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:49:44.868519  507510 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:49:44.872747  507510 addons.go:234] Setting addon default-storageclass=true in "old-k8s-version-696770"
	W0116 03:49:44.872781  507510 addons.go:243] addon default-storageclass should already be in state true
	I0116 03:49:44.872816  507510 host.go:66] Checking if "old-k8s-version-696770" exists ...
	I0116 03:49:44.873264  507510 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:49:44.873308  507510 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:49:44.888049  507510 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45943
	I0116 03:49:44.890481  507510 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37259
	I0116 03:49:44.890990  507510 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:49:44.891285  507510 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:49:44.891567  507510 main.go:141] libmachine: Using API Version  1
	I0116 03:49:44.891582  507510 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:49:44.891846  507510 main.go:141] libmachine: Using API Version  1
	I0116 03:49:44.891865  507510 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:49:44.892307  507510 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:49:44.892510  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetState
	I0116 03:49:44.892575  507510 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:49:44.892760  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetState
	I0116 03:49:44.894812  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .DriverName
	I0116 03:49:44.895060  507510 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37701
	I0116 03:49:44.896571  507510 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0116 03:49:44.895272  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .DriverName
	I0116 03:49:44.895678  507510 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:49:44.898051  507510 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0116 03:49:44.898074  507510 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0116 03:49:44.899552  507510 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 03:49:44.897299  507510 main.go:141] libmachine: Using API Version  1
	I0116 03:49:44.898096  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHHostname
	I0116 03:49:44.901091  507510 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:49:44.901216  507510 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0116 03:49:44.901234  507510 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0116 03:49:44.901256  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHHostname
	I0116 03:49:44.902226  507510 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:49:44.902866  507510 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:49:44.902908  507510 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:49:44.905915  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:49:44.906022  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:49:44.906456  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:20:1a", ip: ""} in network mk-old-k8s-version-696770: {Iface:virbr3 ExpiryTime:2024-01-16 04:43:38 +0000 UTC Type:0 Mac:52:54:00:37:20:1a Iaid: IPaddr:192.168.61.167 Prefix:24 Hostname:old-k8s-version-696770 Clientid:01:52:54:00:37:20:1a}
	I0116 03:49:44.906482  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined IP address 192.168.61.167 and MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:49:44.906775  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHPort
	I0116 03:49:44.906851  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:20:1a", ip: ""} in network mk-old-k8s-version-696770: {Iface:virbr3 ExpiryTime:2024-01-16 04:43:38 +0000 UTC Type:0 Mac:52:54:00:37:20:1a Iaid: IPaddr:192.168.61.167 Prefix:24 Hostname:old-k8s-version-696770 Clientid:01:52:54:00:37:20:1a}
	I0116 03:49:44.906892  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHPort
	I0116 03:49:44.906941  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined IP address 192.168.61.167 and MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:49:44.907116  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHKeyPath
	I0116 03:49:44.907254  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHKeyPath
	I0116 03:49:44.907324  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHUsername
	I0116 03:49:44.907416  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHUsername
	I0116 03:49:44.907471  507510 sshutil.go:53] new ssh client: &{IP:192.168.61.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/old-k8s-version-696770/id_rsa Username:docker}
	I0116 03:49:44.908078  507510 sshutil.go:53] new ssh client: &{IP:192.168.61.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/old-k8s-version-696770/id_rsa Username:docker}
	I0116 03:49:44.925689  507510 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38987
	I0116 03:49:44.926190  507510 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:49:44.926847  507510 main.go:141] libmachine: Using API Version  1
	I0116 03:49:44.926870  507510 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:49:44.927322  507510 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:49:44.927545  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetState
	I0116 03:49:44.929553  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .DriverName
	I0116 03:49:44.930008  507510 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0116 03:49:44.930027  507510 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0116 03:49:44.930049  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHHostname
	I0116 03:49:44.933353  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:49:44.933768  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:20:1a", ip: ""} in network mk-old-k8s-version-696770: {Iface:virbr3 ExpiryTime:2024-01-16 04:43:38 +0000 UTC Type:0 Mac:52:54:00:37:20:1a Iaid: IPaddr:192.168.61.167 Prefix:24 Hostname:old-k8s-version-696770 Clientid:01:52:54:00:37:20:1a}
	I0116 03:49:44.933799  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined IP address 192.168.61.167 and MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:49:44.933975  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHPort
	I0116 03:49:44.934184  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHKeyPath
	I0116 03:49:44.934277  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHUsername
	I0116 03:49:44.934374  507510 sshutil.go:53] new ssh client: &{IP:192.168.61.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/old-k8s-version-696770/id_rsa Username:docker}
	I0116 03:49:45.044743  507510 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0116 03:49:45.073179  507510 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0116 03:49:45.073426  507510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
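The pipeline above rewrites the CoreDNS Corefile in place: it inserts a hosts block mapping host.minikube.internal to the host-side gateway of this network (192.168.61.1) ahead of the forward plugin, and enables the log plugin. The result can be inspected afterwards; the hosts stanza should look roughly like the comment below (kubectl context name assumed to match the profile):

    kubectl --context old-k8s-version-696770 -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
    #    hosts {
    #       192.168.61.1 host.minikube.internal
    #       fallthrough
    #    }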
	I0116 03:49:45.095360  507510 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0116 03:49:45.095383  507510 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0116 03:49:45.162632  507510 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0116 03:49:45.162661  507510 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0116 03:49:45.252628  507510 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0116 03:49:45.252665  507510 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0116 03:49:45.325535  507510 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
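The metrics-server addon is applied above as four manifests (APIService, Deployment, RBAC, Service). Earlier in this run the addon reported "Using image fake.domain/registry.k8s.io/echoserver:1.4"; that registry is not resolvable, which would explain why every metrics-server pod in this report stays ContainersNotReady. To confirm what was deployed and which image it is trying to pull (standard kubectl; context name assumed to match the profile):

    kubectl --context old-k8s-version-696770 -n kube-system get deploy metrics-server \
      -o jsonpath='{.spec.template.spec.containers[0].image}'
    kubectl --context old-k8s-version-696770 get apiservice v1beta1.metrics.k8s.io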
	I0116 03:49:45.533499  507510 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-696770" context rescaled to 1 replicas
	I0116 03:49:45.533553  507510 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.167 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0116 03:49:45.536655  507510 out.go:177] * Verifying Kubernetes components...
	I0116 03:49:41.418664  507257 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:41.918459  507257 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:42.418296  507257 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:42.918119  507257 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:43.418565  507257 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:43.918746  507257 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:44.418812  507257 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:44.918603  507257 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:45.418865  507257 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:45.918104  507257 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:45.538565  507510 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 03:49:46.390448  507510 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.3456663s)
	I0116 03:49:46.390513  507510 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.31729292s)
	I0116 03:49:46.390536  507510 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.317072847s)
	I0116 03:49:46.390556  507510 main.go:141] libmachine: Making call to close driver server
	I0116 03:49:46.390520  507510 main.go:141] libmachine: Making call to close driver server
	I0116 03:49:46.390573  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .Close
	I0116 03:49:46.390595  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .Close
	I0116 03:49:46.390559  507510 start.go:929] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I0116 03:49:46.391000  507510 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:49:46.391023  507510 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:49:46.391035  507510 main.go:141] libmachine: Making call to close driver server
	I0116 03:49:46.391040  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | Closing plugin on server side
	I0116 03:49:46.391006  507510 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:49:46.391059  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | Closing plugin on server side
	I0116 03:49:46.391062  507510 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:49:46.391044  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .Close
	I0116 03:49:46.391075  507510 main.go:141] libmachine: Making call to close driver server
	I0116 03:49:46.391083  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .Close
	I0116 03:49:46.391314  507510 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:49:46.391332  507510 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:49:46.391594  507510 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:49:46.391625  507510 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:49:46.465666  507510 main.go:141] libmachine: Making call to close driver server
	I0116 03:49:46.465688  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .Close
	I0116 03:49:46.466107  507510 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:49:46.466127  507510 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:49:46.597926  507510 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.05930194s)
	I0116 03:49:46.597988  507510 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-696770" to be "Ready" ...
	I0116 03:49:46.597925  507510 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.272324444s)
	I0116 03:49:46.598099  507510 main.go:141] libmachine: Making call to close driver server
	I0116 03:49:46.598123  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .Close
	I0116 03:49:46.598503  507510 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:49:46.598527  507510 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:49:46.598531  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | Closing plugin on server side
	I0116 03:49:46.598539  507510 main.go:141] libmachine: Making call to close driver server
	I0116 03:49:46.598549  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .Close
	I0116 03:49:46.598884  507510 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:49:46.598903  507510 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:49:46.598917  507510 addons.go:470] Verifying addon metrics-server=true in "old-k8s-version-696770"
	I0116 03:49:46.600845  507510 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0116 03:49:46.602484  507510 addons.go:505] enable addons completed in 1.755527621s: enabled=[storage-provisioner default-storageclass metrics-server]
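With storage-provisioner, default-storageclass and metrics-server reported as enabled, the profile-level summary can be read back with a standard minikube command:

    minikube addons list -p old-k8s-version-696770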
	I0116 03:49:46.612929  507510 node_ready.go:49] node "old-k8s-version-696770" has status "Ready":"True"
	I0116 03:49:46.612962  507510 node_ready.go:38] duration metric: took 14.959317ms waiting for node "old-k8s-version-696770" to be "Ready" ...
	I0116 03:49:46.612975  507510 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 03:49:46.616466  507510 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-h85tj" in "kube-system" namespace to be "Ready" ...
	I0116 03:49:48.628130  507510 pod_ready.go:102] pod "coredns-5644d7b6d9-h85tj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:49:46.418268  507257 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:46.917976  507257 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:47.418645  507257 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:47.917927  507257 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:48.417920  507257 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:48.917939  507257 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:49.418387  507257 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:49.918203  507257 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:50.417930  507257 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:50.918518  507257 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:51.418036  507257 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:51.917981  507257 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:52.418293  507257 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:52.635961  507257 kubeadm.go:1088] duration metric: took 12.176857981s to wait for elevateKubeSystemPrivileges.
	I0116 03:49:52.636014  507257 kubeadm.go:406] StartCluster complete in 5m10.892359223s
	I0116 03:49:52.636054  507257 settings.go:142] acquiring lock: {Name:mk3ded99c06d285b174e76622bb0741c8dd2d8fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:49:52.636186  507257 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17965-468241/kubeconfig
	I0116 03:49:52.638885  507257 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17965-468241/kubeconfig: {Name:mkac3c63bbcba9b4fa1bea3480797757d703fb9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:49:52.639229  507257 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0116 03:49:52.639345  507257 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0116 03:49:52.639439  507257 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-615980"
	I0116 03:49:52.639461  507257 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-615980"
	I0116 03:49:52.639458  507257 addons.go:69] Setting default-storageclass=true in profile "embed-certs-615980"
	W0116 03:49:52.639469  507257 addons.go:243] addon storage-provisioner should already be in state true
	I0116 03:49:52.639482  507257 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-615980"
	I0116 03:49:52.639504  507257 config.go:182] Loaded profile config "embed-certs-615980": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 03:49:52.639541  507257 host.go:66] Checking if "embed-certs-615980" exists ...
	I0116 03:49:52.639562  507257 addons.go:69] Setting metrics-server=true in profile "embed-certs-615980"
	I0116 03:49:52.639579  507257 addons.go:234] Setting addon metrics-server=true in "embed-certs-615980"
	W0116 03:49:52.639591  507257 addons.go:243] addon metrics-server should already be in state true
	I0116 03:49:52.639639  507257 host.go:66] Checking if "embed-certs-615980" exists ...
	I0116 03:49:52.639965  507257 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:49:52.639984  507257 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:49:52.640007  507257 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:49:52.640023  507257 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:49:52.640084  507257 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:49:52.640118  507257 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:49:52.660468  507257 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36595
	I0116 03:49:52.660653  507257 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44367
	I0116 03:49:52.661058  507257 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:49:52.661184  507257 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:49:52.661685  507257 main.go:141] libmachine: Using API Version  1
	I0116 03:49:52.661709  507257 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:49:52.661768  507257 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40717
	I0116 03:49:52.661855  507257 main.go:141] libmachine: Using API Version  1
	I0116 03:49:52.661871  507257 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:49:52.662141  507257 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:49:52.662207  507257 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:49:52.662425  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetState
	I0116 03:49:52.662480  507257 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:49:52.662858  507257 main.go:141] libmachine: Using API Version  1
	I0116 03:49:52.662875  507257 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:49:52.663301  507257 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:49:52.663337  507257 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:49:52.663413  507257 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:49:52.663956  507257 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:49:52.663985  507257 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:49:52.666163  507257 addons.go:234] Setting addon default-storageclass=true in "embed-certs-615980"
	W0116 03:49:52.666190  507257 addons.go:243] addon default-storageclass should already be in state true
	I0116 03:49:52.666224  507257 host.go:66] Checking if "embed-certs-615980" exists ...
	I0116 03:49:52.666630  507257 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:49:52.666672  507257 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:49:52.682228  507257 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37799
	I0116 03:49:52.682743  507257 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:49:52.683402  507257 main.go:141] libmachine: Using API Version  1
	I0116 03:49:52.683425  507257 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:49:52.683719  507257 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36773
	I0116 03:49:52.683893  507257 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:49:52.684125  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetState
	I0116 03:49:52.684589  507257 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:49:52.685108  507257 main.go:141] libmachine: Using API Version  1
	I0116 03:49:52.685128  507257 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:49:52.685607  507257 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:49:52.685627  507257 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42767
	I0116 03:49:52.686073  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetState
	I0116 03:49:52.686329  507257 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:49:52.686781  507257 main.go:141] libmachine: Using API Version  1
	I0116 03:49:52.686804  507257 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:49:52.687167  507257 main.go:141] libmachine: (embed-certs-615980) Calling .DriverName
	I0116 03:49:52.687213  507257 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:49:52.689840  507257 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 03:49:52.687751  507257 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:49:52.689319  507257 main.go:141] libmachine: (embed-certs-615980) Calling .DriverName
	I0116 03:49:52.691584  507257 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0116 03:49:52.691595  507257 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0116 03:49:52.691610  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHHostname
	I0116 03:49:52.691655  507257 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:49:52.693170  507257 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0116 03:49:52.694465  507257 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0116 03:49:52.694478  507257 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0116 03:49:52.694495  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHHostname
	I0116 03:49:52.705398  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:49:52.705440  507257 main.go:141] libmachine: (embed-certs-615980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:a6:40", ip: ""} in network mk-embed-certs-615980: {Iface:virbr4 ExpiryTime:2024-01-16 04:44:24 +0000 UTC Type:0 Mac:52:54:00:d4:a6:40 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-615980 Clientid:01:52:54:00:d4:a6:40}
	I0116 03:49:52.705440  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:49:52.705469  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHPort
	I0116 03:49:52.705475  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined IP address 192.168.72.159 and MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:49:52.705501  507257 main.go:141] libmachine: (embed-certs-615980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:a6:40", ip: ""} in network mk-embed-certs-615980: {Iface:virbr4 ExpiryTime:2024-01-16 04:44:24 +0000 UTC Type:0 Mac:52:54:00:d4:a6:40 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-615980 Clientid:01:52:54:00:d4:a6:40}
	I0116 03:49:52.705516  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined IP address 192.168.72.159 and MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:49:52.705403  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHPort
	I0116 03:49:52.705751  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHKeyPath
	I0116 03:49:52.705813  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHKeyPath
	I0116 03:49:52.705956  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHUsername
	I0116 03:49:52.706078  507257 sshutil.go:53] new ssh client: &{IP:192.168.72.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/embed-certs-615980/id_rsa Username:docker}
	I0116 03:49:52.706839  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHUsername
	I0116 03:49:52.707045  507257 sshutil.go:53] new ssh client: &{IP:192.168.72.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/embed-certs-615980/id_rsa Username:docker}
	I0116 03:49:52.713247  507257 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33775
	I0116 03:49:52.714047  507257 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:49:52.714725  507257 main.go:141] libmachine: Using API Version  1
	I0116 03:49:52.714742  507257 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:49:52.715212  507257 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:49:52.715442  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetState
	I0116 03:49:52.717568  507257 main.go:141] libmachine: (embed-certs-615980) Calling .DriverName
	I0116 03:49:52.717813  507257 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0116 03:49:52.717824  507257 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0116 03:49:52.717839  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHHostname
	I0116 03:49:52.720720  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:49:52.721189  507257 main.go:141] libmachine: (embed-certs-615980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:a6:40", ip: ""} in network mk-embed-certs-615980: {Iface:virbr4 ExpiryTime:2024-01-16 04:44:24 +0000 UTC Type:0 Mac:52:54:00:d4:a6:40 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-615980 Clientid:01:52:54:00:d4:a6:40}
	I0116 03:49:52.721205  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined IP address 192.168.72.159 and MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:49:52.721414  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHPort
	I0116 03:49:52.721573  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHKeyPath
	I0116 03:49:52.721724  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHUsername
	I0116 03:49:52.721814  507257 sshutil.go:53] new ssh client: &{IP:192.168.72.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/embed-certs-615980/id_rsa Username:docker}
	I0116 03:49:52.899474  507257 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0116 03:49:52.971597  507257 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0116 03:49:52.971623  507257 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0116 03:49:52.971955  507257 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0116 03:49:53.029724  507257 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0116 03:49:53.051410  507257 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0116 03:49:53.051439  507257 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0116 03:49:53.121058  507257 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0116 03:49:53.121088  507257 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0116 03:49:53.179049  507257 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-615980" context rescaled to 1 replicas
	I0116 03:49:53.179098  507257 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.159 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0116 03:49:53.181191  507257 out.go:177] * Verifying Kubernetes components...
	I0116 03:49:50.633148  507510 pod_ready.go:92] pod "coredns-5644d7b6d9-h85tj" in "kube-system" namespace has status "Ready":"True"
	I0116 03:49:50.633179  507510 pod_ready.go:81] duration metric: took 4.016682348s waiting for pod "coredns-5644d7b6d9-h85tj" in "kube-system" namespace to be "Ready" ...
	I0116 03:49:50.633194  507510 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rc8xt" in "kube-system" namespace to be "Ready" ...
	I0116 03:49:50.648707  507510 pod_ready.go:92] pod "kube-proxy-rc8xt" in "kube-system" namespace has status "Ready":"True"
	I0116 03:49:50.648737  507510 pod_ready.go:81] duration metric: took 15.535257ms waiting for pod "kube-proxy-rc8xt" in "kube-system" namespace to be "Ready" ...
	I0116 03:49:50.648752  507510 pod_ready.go:38] duration metric: took 4.035762868s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 03:49:50.648770  507510 api_server.go:52] waiting for apiserver process to appear ...
	I0116 03:49:50.648842  507510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:49:50.665917  507510 api_server.go:72] duration metric: took 5.1323051s to wait for apiserver process to appear ...
	I0116 03:49:50.665954  507510 api_server.go:88] waiting for apiserver healthz status ...
	I0116 03:49:50.665982  507510 api_server.go:253] Checking apiserver healthz at https://192.168.61.167:8443/healthz ...
	I0116 03:49:50.672790  507510 api_server.go:279] https://192.168.61.167:8443/healthz returned 200:
	ok
	I0116 03:49:50.674024  507510 api_server.go:141] control plane version: v1.16.0
	I0116 03:49:50.674059  507510 api_server.go:131] duration metric: took 8.096153ms to wait for apiserver health ...
	I0116 03:49:50.674071  507510 system_pods.go:43] waiting for kube-system pods to appear ...
	I0116 03:49:50.677835  507510 system_pods.go:59] 4 kube-system pods found
	I0116 03:49:50.677871  507510 system_pods.go:61] "coredns-5644d7b6d9-h85tj" [6ad3270c-e6c5-4ced-800f-f6e7960097ac] Running
	I0116 03:49:50.677878  507510 system_pods.go:61] "kube-proxy-rc8xt" [433f07f2-79e8-48f6-945a-af3dc0060920] Running
	I0116 03:49:50.677887  507510 system_pods.go:61] "metrics-server-74d5856cc6-stvzf" [92ed6941-1071-4757-9279-144187442f64] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:49:50.677894  507510 system_pods.go:61] "storage-provisioner" [c45ac226-3063-4d53-8a3a-dccca6e8cade] Running
	I0116 03:49:50.677905  507510 system_pods.go:74] duration metric: took 3.826308ms to wait for pod list to return data ...
	I0116 03:49:50.677914  507510 default_sa.go:34] waiting for default service account to be created ...
	I0116 03:49:50.680932  507510 default_sa.go:45] found service account: "default"
	I0116 03:49:50.680964  507510 default_sa.go:55] duration metric: took 3.041693ms for default service account to be created ...
	I0116 03:49:50.680975  507510 system_pods.go:116] waiting for k8s-apps to be running ...
	I0116 03:49:50.684730  507510 system_pods.go:86] 4 kube-system pods found
	I0116 03:49:50.684759  507510 system_pods.go:89] "coredns-5644d7b6d9-h85tj" [6ad3270c-e6c5-4ced-800f-f6e7960097ac] Running
	I0116 03:49:50.684767  507510 system_pods.go:89] "kube-proxy-rc8xt" [433f07f2-79e8-48f6-945a-af3dc0060920] Running
	I0116 03:49:50.684778  507510 system_pods.go:89] "metrics-server-74d5856cc6-stvzf" [92ed6941-1071-4757-9279-144187442f64] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:49:50.684785  507510 system_pods.go:89] "storage-provisioner" [c45ac226-3063-4d53-8a3a-dccca6e8cade] Running
	I0116 03:49:50.684811  507510 retry.go:31] will retry after 238.551043ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0116 03:49:50.928725  507510 system_pods.go:86] 4 kube-system pods found
	I0116 03:49:50.928761  507510 system_pods.go:89] "coredns-5644d7b6d9-h85tj" [6ad3270c-e6c5-4ced-800f-f6e7960097ac] Running
	I0116 03:49:50.928768  507510 system_pods.go:89] "kube-proxy-rc8xt" [433f07f2-79e8-48f6-945a-af3dc0060920] Running
	I0116 03:49:50.928779  507510 system_pods.go:89] "metrics-server-74d5856cc6-stvzf" [92ed6941-1071-4757-9279-144187442f64] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:49:50.928786  507510 system_pods.go:89] "storage-provisioner" [c45ac226-3063-4d53-8a3a-dccca6e8cade] Running
	I0116 03:49:50.928816  507510 retry.go:31] will retry after 246.771125ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0116 03:49:51.180688  507510 system_pods.go:86] 4 kube-system pods found
	I0116 03:49:51.180727  507510 system_pods.go:89] "coredns-5644d7b6d9-h85tj" [6ad3270c-e6c5-4ced-800f-f6e7960097ac] Running
	I0116 03:49:51.180736  507510 system_pods.go:89] "kube-proxy-rc8xt" [433f07f2-79e8-48f6-945a-af3dc0060920] Running
	I0116 03:49:51.180747  507510 system_pods.go:89] "metrics-server-74d5856cc6-stvzf" [92ed6941-1071-4757-9279-144187442f64] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:49:51.180755  507510 system_pods.go:89] "storage-provisioner" [c45ac226-3063-4d53-8a3a-dccca6e8cade] Running
	I0116 03:49:51.180780  507510 retry.go:31] will retry after 439.966453ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0116 03:49:51.625927  507510 system_pods.go:86] 4 kube-system pods found
	I0116 03:49:51.625958  507510 system_pods.go:89] "coredns-5644d7b6d9-h85tj" [6ad3270c-e6c5-4ced-800f-f6e7960097ac] Running
	I0116 03:49:51.625964  507510 system_pods.go:89] "kube-proxy-rc8xt" [433f07f2-79e8-48f6-945a-af3dc0060920] Running
	I0116 03:49:51.625970  507510 system_pods.go:89] "metrics-server-74d5856cc6-stvzf" [92ed6941-1071-4757-9279-144187442f64] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:49:51.625975  507510 system_pods.go:89] "storage-provisioner" [c45ac226-3063-4d53-8a3a-dccca6e8cade] Running
	I0116 03:49:51.626001  507510 retry.go:31] will retry after 403.213781ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0116 03:49:52.035928  507510 system_pods.go:86] 4 kube-system pods found
	I0116 03:49:52.035994  507510 system_pods.go:89] "coredns-5644d7b6d9-h85tj" [6ad3270c-e6c5-4ced-800f-f6e7960097ac] Running
	I0116 03:49:52.036003  507510 system_pods.go:89] "kube-proxy-rc8xt" [433f07f2-79e8-48f6-945a-af3dc0060920] Running
	I0116 03:49:52.036014  507510 system_pods.go:89] "metrics-server-74d5856cc6-stvzf" [92ed6941-1071-4757-9279-144187442f64] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:49:52.036022  507510 system_pods.go:89] "storage-provisioner" [c45ac226-3063-4d53-8a3a-dccca6e8cade] Running
	I0116 03:49:52.036064  507510 retry.go:31] will retry after 501.701933ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0116 03:49:52.543834  507510 system_pods.go:86] 4 kube-system pods found
	I0116 03:49:52.543874  507510 system_pods.go:89] "coredns-5644d7b6d9-h85tj" [6ad3270c-e6c5-4ced-800f-f6e7960097ac] Running
	I0116 03:49:52.543883  507510 system_pods.go:89] "kube-proxy-rc8xt" [433f07f2-79e8-48f6-945a-af3dc0060920] Running
	I0116 03:49:52.543894  507510 system_pods.go:89] "metrics-server-74d5856cc6-stvzf" [92ed6941-1071-4757-9279-144187442f64] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:49:52.543904  507510 system_pods.go:89] "storage-provisioner" [c45ac226-3063-4d53-8a3a-dccca6e8cade] Running
	I0116 03:49:52.543929  507510 retry.go:31] will retry after 898.357774ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0116 03:49:53.447323  507510 system_pods.go:86] 4 kube-system pods found
	I0116 03:49:53.447356  507510 system_pods.go:89] "coredns-5644d7b6d9-h85tj" [6ad3270c-e6c5-4ced-800f-f6e7960097ac] Running
	I0116 03:49:53.447364  507510 system_pods.go:89] "kube-proxy-rc8xt" [433f07f2-79e8-48f6-945a-af3dc0060920] Running
	I0116 03:49:53.447373  507510 system_pods.go:89] "metrics-server-74d5856cc6-stvzf" [92ed6941-1071-4757-9279-144187442f64] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:49:53.447382  507510 system_pods.go:89] "storage-provisioner" [c45ac226-3063-4d53-8a3a-dccca6e8cade] Running
	I0116 03:49:53.447405  507510 retry.go:31] will retry after 928.816907ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0116 03:49:54.382017  507510 system_pods.go:86] 4 kube-system pods found
	I0116 03:49:54.382046  507510 system_pods.go:89] "coredns-5644d7b6d9-h85tj" [6ad3270c-e6c5-4ced-800f-f6e7960097ac] Running
	I0116 03:49:54.382052  507510 system_pods.go:89] "kube-proxy-rc8xt" [433f07f2-79e8-48f6-945a-af3dc0060920] Running
	I0116 03:49:54.382058  507510 system_pods.go:89] "metrics-server-74d5856cc6-stvzf" [92ed6941-1071-4757-9279-144187442f64] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:49:54.382065  507510 system_pods.go:89] "storage-provisioner" [c45ac226-3063-4d53-8a3a-dccca6e8cade] Running
	I0116 03:49:54.382085  507510 retry.go:31] will retry after 935.220919ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0116 03:49:53.183129  507257 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 03:49:53.296441  507257 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0116 03:49:55.162183  507257 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.262649875s)
	I0116 03:49:55.162237  507257 start.go:929] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I0116 03:49:55.516930  507257 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.544937669s)
	I0116 03:49:55.516988  507257 main.go:141] libmachine: Making call to close driver server
	I0116 03:49:55.517002  507257 main.go:141] libmachine: (embed-certs-615980) Calling .Close
	I0116 03:49:55.517046  507257 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.487276988s)
	I0116 03:49:55.517101  507257 main.go:141] libmachine: Making call to close driver server
	I0116 03:49:55.517108  507257 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.333941337s)
	I0116 03:49:55.517114  507257 main.go:141] libmachine: (embed-certs-615980) Calling .Close
	I0116 03:49:55.517135  507257 node_ready.go:35] waiting up to 6m0s for node "embed-certs-615980" to be "Ready" ...
	I0116 03:49:55.517496  507257 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:49:55.517496  507257 main.go:141] libmachine: (embed-certs-615980) DBG | Closing plugin on server side
	I0116 03:49:55.517512  507257 main.go:141] libmachine: (embed-certs-615980) DBG | Closing plugin on server side
	I0116 03:49:55.517520  507257 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:49:55.517535  507257 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:49:55.517546  507257 main.go:141] libmachine: Making call to close driver server
	I0116 03:49:55.517548  507257 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:49:55.517559  507257 main.go:141] libmachine: (embed-certs-615980) Calling .Close
	I0116 03:49:55.517566  507257 main.go:141] libmachine: Making call to close driver server
	I0116 03:49:55.517577  507257 main.go:141] libmachine: (embed-certs-615980) Calling .Close
	I0116 03:49:55.517902  507257 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:49:55.517916  507257 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:49:55.517920  507257 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:49:55.517926  507257 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:49:55.537242  507257 node_ready.go:49] node "embed-certs-615980" has status "Ready":"True"
	I0116 03:49:55.537273  507257 node_ready.go:38] duration metric: took 20.119969ms waiting for node "embed-certs-615980" to be "Ready" ...
	I0116 03:49:55.537282  507257 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 03:49:55.567823  507257 main.go:141] libmachine: Making call to close driver server
	I0116 03:49:55.567859  507257 main.go:141] libmachine: (embed-certs-615980) Calling .Close
	I0116 03:49:55.568264  507257 main.go:141] libmachine: (embed-certs-615980) DBG | Closing plugin on server side
	I0116 03:49:55.568301  507257 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:49:55.568324  507257 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:49:55.571667  507257 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-hxsvz" in "kube-system" namespace to be "Ready" ...
	I0116 03:49:55.962821  507257 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.666330022s)
	I0116 03:49:55.962896  507257 main.go:141] libmachine: Making call to close driver server
	I0116 03:49:55.962915  507257 main.go:141] libmachine: (embed-certs-615980) Calling .Close
	I0116 03:49:55.963282  507257 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:49:55.963304  507257 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:49:55.963317  507257 main.go:141] libmachine: Making call to close driver server
	I0116 03:49:55.963328  507257 main.go:141] libmachine: (embed-certs-615980) Calling .Close
	I0116 03:49:55.964155  507257 main.go:141] libmachine: (embed-certs-615980) DBG | Closing plugin on server side
	I0116 03:49:55.964178  507257 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:49:55.964190  507257 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:49:55.964209  507257 addons.go:470] Verifying addon metrics-server=true in "embed-certs-615980"
	I0116 03:49:55.967489  507257 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0116 03:49:55.969099  507257 addons.go:505] enable addons completed in 3.329750862s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0116 03:49:57.085999  507257 pod_ready.go:92] pod "coredns-5dd5756b68-hxsvz" in "kube-system" namespace has status "Ready":"True"
	I0116 03:49:57.086034  507257 pod_ready.go:81] duration metric: took 1.514340062s waiting for pod "coredns-5dd5756b68-hxsvz" in "kube-system" namespace to be "Ready" ...
	I0116 03:49:57.086048  507257 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-twbhh" in "kube-system" namespace to be "Ready" ...
	I0116 03:49:57.110886  507257 pod_ready.go:92] pod "coredns-5dd5756b68-twbhh" in "kube-system" namespace has status "Ready":"True"
	I0116 03:49:57.110920  507257 pod_ready.go:81] duration metric: took 24.862165ms waiting for pod "coredns-5dd5756b68-twbhh" in "kube-system" namespace to be "Ready" ...
	I0116 03:49:57.110934  507257 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-615980" in "kube-system" namespace to be "Ready" ...
	I0116 03:49:57.122556  507257 pod_ready.go:92] pod "etcd-embed-certs-615980" in "kube-system" namespace has status "Ready":"True"
	I0116 03:49:57.122588  507257 pod_ready.go:81] duration metric: took 11.643561ms waiting for pod "etcd-embed-certs-615980" in "kube-system" namespace to be "Ready" ...
	I0116 03:49:57.122601  507257 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-615980" in "kube-system" namespace to be "Ready" ...
	I0116 03:49:57.134402  507257 pod_ready.go:92] pod "kube-apiserver-embed-certs-615980" in "kube-system" namespace has status "Ready":"True"
	I0116 03:49:57.134432  507257 pod_ready.go:81] duration metric: took 11.823016ms waiting for pod "kube-apiserver-embed-certs-615980" in "kube-system" namespace to be "Ready" ...
	I0116 03:49:57.134442  507257 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-615980" in "kube-system" namespace to be "Ready" ...
	I0116 03:49:57.152947  507257 pod_ready.go:92] pod "kube-controller-manager-embed-certs-615980" in "kube-system" namespace has status "Ready":"True"
	I0116 03:49:57.152984  507257 pod_ready.go:81] duration metric: took 18.533642ms waiting for pod "kube-controller-manager-embed-certs-615980" in "kube-system" namespace to be "Ready" ...
	I0116 03:49:57.153000  507257 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8rkb5" in "kube-system" namespace to be "Ready" ...
	I0116 03:49:57.921983  507257 pod_ready.go:92] pod "kube-proxy-8rkb5" in "kube-system" namespace has status "Ready":"True"
	I0116 03:49:57.922016  507257 pod_ready.go:81] duration metric: took 769.007434ms waiting for pod "kube-proxy-8rkb5" in "kube-system" namespace to be "Ready" ...
	I0116 03:49:57.922028  507257 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-615980" in "kube-system" namespace to be "Ready" ...
	I0116 03:49:58.322237  507257 pod_ready.go:92] pod "kube-scheduler-embed-certs-615980" in "kube-system" namespace has status "Ready":"True"
	I0116 03:49:58.322267  507257 pod_ready.go:81] duration metric: took 400.23243ms waiting for pod "kube-scheduler-embed-certs-615980" in "kube-system" namespace to be "Ready" ...
	I0116 03:49:58.322280  507257 pod_ready.go:38] duration metric: took 2.78498776s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 03:49:58.322295  507257 api_server.go:52] waiting for apiserver process to appear ...
	I0116 03:49:58.322357  507257 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:49:58.338527  507257 api_server.go:72] duration metric: took 5.159388866s to wait for apiserver process to appear ...
	I0116 03:49:58.338553  507257 api_server.go:88] waiting for apiserver healthz status ...
	I0116 03:49:58.338575  507257 api_server.go:253] Checking apiserver healthz at https://192.168.72.159:8443/healthz ...
	I0116 03:49:58.345758  507257 api_server.go:279] https://192.168.72.159:8443/healthz returned 200:
	ok
	I0116 03:49:58.347531  507257 api_server.go:141] control plane version: v1.28.4
	I0116 03:49:58.347559  507257 api_server.go:131] duration metric: took 8.999388ms to wait for apiserver health ...
	I0116 03:49:58.347573  507257 system_pods.go:43] waiting for kube-system pods to appear ...
	I0116 03:49:58.527633  507257 system_pods.go:59] 9 kube-system pods found
	I0116 03:49:58.527676  507257 system_pods.go:61] "coredns-5dd5756b68-hxsvz" [de7da02c-649b-4d29-8a89-5642105b6049] Running
	I0116 03:49:58.527685  507257 system_pods.go:61] "coredns-5dd5756b68-twbhh" [9be49c16-f213-47da-83f4-90fc392eb49e] Running
	I0116 03:49:58.527692  507257 system_pods.go:61] "etcd-embed-certs-615980" [2098148f-0cac-48ce-a607-381b13334438] Running
	I0116 03:49:58.527704  507257 system_pods.go:61] "kube-apiserver-embed-certs-615980" [3d49b47b-da34-4f4d-a8d3-758c0d28c034] Running
	I0116 03:49:58.527711  507257 system_pods.go:61] "kube-controller-manager-embed-certs-615980" [c4f7946d-907d-42ad-8e84-8fa337111688] Running
	I0116 03:49:58.527718  507257 system_pods.go:61] "kube-proxy-8rkb5" [322fae38-3b29-4135-ba3f-c0ff8bda1e4a] Running
	I0116 03:49:58.527725  507257 system_pods.go:61] "kube-scheduler-embed-certs-615980" [882f322f-8686-40a4-a613-e9855ccfb56e] Running
	I0116 03:49:58.527736  507257 system_pods.go:61] "metrics-server-57f55c9bc5-fc7tx" [14a38c13-7a9e-4548-9654-c568ede29e0f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:49:58.527748  507257 system_pods.go:61] "storage-provisioner" [1ce752ad-ce91-462e-ab2b-2af64064eb40] Running
	I0116 03:49:58.527757  507257 system_pods.go:74] duration metric: took 180.177482ms to wait for pod list to return data ...
	I0116 03:49:58.527771  507257 default_sa.go:34] waiting for default service account to be created ...
	I0116 03:49:58.721717  507257 default_sa.go:45] found service account: "default"
	I0116 03:49:58.721749  507257 default_sa.go:55] duration metric: took 193.967755ms for default service account to be created ...
	I0116 03:49:58.721758  507257 system_pods.go:116] waiting for k8s-apps to be running ...
	I0116 03:49:58.925915  507257 system_pods.go:86] 9 kube-system pods found
	I0116 03:49:58.925957  507257 system_pods.go:89] "coredns-5dd5756b68-hxsvz" [de7da02c-649b-4d29-8a89-5642105b6049] Running
	I0116 03:49:58.925964  507257 system_pods.go:89] "coredns-5dd5756b68-twbhh" [9be49c16-f213-47da-83f4-90fc392eb49e] Running
	I0116 03:49:58.925970  507257 system_pods.go:89] "etcd-embed-certs-615980" [2098148f-0cac-48ce-a607-381b13334438] Running
	I0116 03:49:58.925977  507257 system_pods.go:89] "kube-apiserver-embed-certs-615980" [3d49b47b-da34-4f4d-a8d3-758c0d28c034] Running
	I0116 03:49:58.925987  507257 system_pods.go:89] "kube-controller-manager-embed-certs-615980" [c4f7946d-907d-42ad-8e84-8fa337111688] Running
	I0116 03:49:58.925994  507257 system_pods.go:89] "kube-proxy-8rkb5" [322fae38-3b29-4135-ba3f-c0ff8bda1e4a] Running
	I0116 03:49:58.926040  507257 system_pods.go:89] "kube-scheduler-embed-certs-615980" [882f322f-8686-40a4-a613-e9855ccfb56e] Running
	I0116 03:49:58.926063  507257 system_pods.go:89] "metrics-server-57f55c9bc5-fc7tx" [14a38c13-7a9e-4548-9654-c568ede29e0f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:49:58.926070  507257 system_pods.go:89] "storage-provisioner" [1ce752ad-ce91-462e-ab2b-2af64064eb40] Running
	I0116 03:49:58.926087  507257 system_pods.go:126] duration metric: took 204.321811ms to wait for k8s-apps to be running ...
	I0116 03:49:58.926099  507257 system_svc.go:44] waiting for kubelet service to be running ....
	I0116 03:49:58.926159  507257 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 03:49:58.940982  507257 system_svc.go:56] duration metric: took 14.86844ms WaitForService to wait for kubelet.
	I0116 03:49:58.941019  507257 kubeadm.go:581] duration metric: took 5.761889406s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0116 03:49:58.941051  507257 node_conditions.go:102] verifying NodePressure condition ...
	I0116 03:49:59.121649  507257 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 03:49:59.121681  507257 node_conditions.go:123] node cpu capacity is 2
	I0116 03:49:59.121694  507257 node_conditions.go:105] duration metric: took 180.636851ms to run NodePressure ...
	I0116 03:49:59.121707  507257 start.go:228] waiting for startup goroutines ...
	I0116 03:49:59.121717  507257 start.go:233] waiting for cluster config update ...
	I0116 03:49:59.121730  507257 start.go:242] writing updated cluster config ...
	I0116 03:49:59.122058  507257 ssh_runner.go:195] Run: rm -f paused
	I0116 03:49:59.177472  507257 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I0116 03:49:59.179801  507257 out.go:177] * Done! kubectl is now configured to use "embed-certs-615980" cluster and "default" namespace by default
	I0116 03:49:55.324439  507510 system_pods.go:86] 4 kube-system pods found
	I0116 03:49:55.324471  507510 system_pods.go:89] "coredns-5644d7b6d9-h85tj" [6ad3270c-e6c5-4ced-800f-f6e7960097ac] Running
	I0116 03:49:55.324477  507510 system_pods.go:89] "kube-proxy-rc8xt" [433f07f2-79e8-48f6-945a-af3dc0060920] Running
	I0116 03:49:55.324484  507510 system_pods.go:89] "metrics-server-74d5856cc6-stvzf" [92ed6941-1071-4757-9279-144187442f64] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:49:55.324489  507510 system_pods.go:89] "storage-provisioner" [c45ac226-3063-4d53-8a3a-dccca6e8cade] Running
	I0116 03:49:55.324509  507510 retry.go:31] will retry after 1.168298317s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0116 03:49:56.500050  507510 system_pods.go:86] 4 kube-system pods found
	I0116 03:49:56.500090  507510 system_pods.go:89] "coredns-5644d7b6d9-h85tj" [6ad3270c-e6c5-4ced-800f-f6e7960097ac] Running
	I0116 03:49:56.500098  507510 system_pods.go:89] "kube-proxy-rc8xt" [433f07f2-79e8-48f6-945a-af3dc0060920] Running
	I0116 03:49:56.500111  507510 system_pods.go:89] "metrics-server-74d5856cc6-stvzf" [92ed6941-1071-4757-9279-144187442f64] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:49:56.500118  507510 system_pods.go:89] "storage-provisioner" [c45ac226-3063-4d53-8a3a-dccca6e8cade] Running
	I0116 03:49:56.500142  507510 retry.go:31] will retry after 1.453657977s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0116 03:49:57.961220  507510 system_pods.go:86] 4 kube-system pods found
	I0116 03:49:57.961248  507510 system_pods.go:89] "coredns-5644d7b6d9-h85tj" [6ad3270c-e6c5-4ced-800f-f6e7960097ac] Running
	I0116 03:49:57.961254  507510 system_pods.go:89] "kube-proxy-rc8xt" [433f07f2-79e8-48f6-945a-af3dc0060920] Running
	I0116 03:49:57.961261  507510 system_pods.go:89] "metrics-server-74d5856cc6-stvzf" [92ed6941-1071-4757-9279-144187442f64] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:49:57.961266  507510 system_pods.go:89] "storage-provisioner" [c45ac226-3063-4d53-8a3a-dccca6e8cade] Running
	I0116 03:49:57.961286  507510 retry.go:31] will retry after 1.763969687s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0116 03:49:59.731086  507510 system_pods.go:86] 4 kube-system pods found
	I0116 03:49:59.731112  507510 system_pods.go:89] "coredns-5644d7b6d9-h85tj" [6ad3270c-e6c5-4ced-800f-f6e7960097ac] Running
	I0116 03:49:59.731117  507510 system_pods.go:89] "kube-proxy-rc8xt" [433f07f2-79e8-48f6-945a-af3dc0060920] Running
	I0116 03:49:59.731123  507510 system_pods.go:89] "metrics-server-74d5856cc6-stvzf" [92ed6941-1071-4757-9279-144187442f64] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:49:59.731129  507510 system_pods.go:89] "storage-provisioner" [c45ac226-3063-4d53-8a3a-dccca6e8cade] Running
	I0116 03:49:59.731147  507510 retry.go:31] will retry after 3.185395035s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0116 03:50:02.922897  507510 system_pods.go:86] 4 kube-system pods found
	I0116 03:50:02.922934  507510 system_pods.go:89] "coredns-5644d7b6d9-h85tj" [6ad3270c-e6c5-4ced-800f-f6e7960097ac] Running
	I0116 03:50:02.922944  507510 system_pods.go:89] "kube-proxy-rc8xt" [433f07f2-79e8-48f6-945a-af3dc0060920] Running
	I0116 03:50:02.922954  507510 system_pods.go:89] "metrics-server-74d5856cc6-stvzf" [92ed6941-1071-4757-9279-144187442f64] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:50:02.922961  507510 system_pods.go:89] "storage-provisioner" [c45ac226-3063-4d53-8a3a-dccca6e8cade] Running
	I0116 03:50:02.922985  507510 retry.go:31] will retry after 4.049428323s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0116 03:50:06.978002  507510 system_pods.go:86] 4 kube-system pods found
	I0116 03:50:06.978029  507510 system_pods.go:89] "coredns-5644d7b6d9-h85tj" [6ad3270c-e6c5-4ced-800f-f6e7960097ac] Running
	I0116 03:50:06.978034  507510 system_pods.go:89] "kube-proxy-rc8xt" [433f07f2-79e8-48f6-945a-af3dc0060920] Running
	I0116 03:50:06.978040  507510 system_pods.go:89] "metrics-server-74d5856cc6-stvzf" [92ed6941-1071-4757-9279-144187442f64] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:50:06.978045  507510 system_pods.go:89] "storage-provisioner" [c45ac226-3063-4d53-8a3a-dccca6e8cade] Running
	I0116 03:50:06.978063  507510 retry.go:31] will retry after 4.626513574s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0116 03:50:11.610464  507510 system_pods.go:86] 4 kube-system pods found
	I0116 03:50:11.610499  507510 system_pods.go:89] "coredns-5644d7b6d9-h85tj" [6ad3270c-e6c5-4ced-800f-f6e7960097ac] Running
	I0116 03:50:11.610507  507510 system_pods.go:89] "kube-proxy-rc8xt" [433f07f2-79e8-48f6-945a-af3dc0060920] Running
	I0116 03:50:11.610517  507510 system_pods.go:89] "metrics-server-74d5856cc6-stvzf" [92ed6941-1071-4757-9279-144187442f64] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:50:11.610524  507510 system_pods.go:89] "storage-provisioner" [c45ac226-3063-4d53-8a3a-dccca6e8cade] Running
	I0116 03:50:11.610550  507510 retry.go:31] will retry after 4.683195792s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0116 03:50:16.298843  507510 system_pods.go:86] 4 kube-system pods found
	I0116 03:50:16.298873  507510 system_pods.go:89] "coredns-5644d7b6d9-h85tj" [6ad3270c-e6c5-4ced-800f-f6e7960097ac] Running
	I0116 03:50:16.298879  507510 system_pods.go:89] "kube-proxy-rc8xt" [433f07f2-79e8-48f6-945a-af3dc0060920] Running
	I0116 03:50:16.298888  507510 system_pods.go:89] "metrics-server-74d5856cc6-stvzf" [92ed6941-1071-4757-9279-144187442f64] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:50:16.298892  507510 system_pods.go:89] "storage-provisioner" [c45ac226-3063-4d53-8a3a-dccca6e8cade] Running
	I0116 03:50:16.298913  507510 retry.go:31] will retry after 8.214175219s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0116 03:50:24.520982  507510 system_pods.go:86] 5 kube-system pods found
	I0116 03:50:24.521020  507510 system_pods.go:89] "coredns-5644d7b6d9-h85tj" [6ad3270c-e6c5-4ced-800f-f6e7960097ac] Running
	I0116 03:50:24.521029  507510 system_pods.go:89] "etcd-old-k8s-version-696770" [255a0b2a-df41-4621-8918-36e1b0c25c24] Pending
	I0116 03:50:24.521033  507510 system_pods.go:89] "kube-proxy-rc8xt" [433f07f2-79e8-48f6-945a-af3dc0060920] Running
	I0116 03:50:24.521040  507510 system_pods.go:89] "metrics-server-74d5856cc6-stvzf" [92ed6941-1071-4757-9279-144187442f64] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:50:24.521045  507510 system_pods.go:89] "storage-provisioner" [c45ac226-3063-4d53-8a3a-dccca6e8cade] Running
	I0116 03:50:24.521067  507510 retry.go:31] will retry after 9.626598035s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0116 03:50:34.155753  507510 system_pods.go:86] 5 kube-system pods found
	I0116 03:50:34.155790  507510 system_pods.go:89] "coredns-5644d7b6d9-h85tj" [6ad3270c-e6c5-4ced-800f-f6e7960097ac] Running
	I0116 03:50:34.155798  507510 system_pods.go:89] "etcd-old-k8s-version-696770" [255a0b2a-df41-4621-8918-36e1b0c25c24] Running
	I0116 03:50:34.155805  507510 system_pods.go:89] "kube-proxy-rc8xt" [433f07f2-79e8-48f6-945a-af3dc0060920] Running
	I0116 03:50:34.155815  507510 system_pods.go:89] "metrics-server-74d5856cc6-stvzf" [92ed6941-1071-4757-9279-144187442f64] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:50:34.155822  507510 system_pods.go:89] "storage-provisioner" [c45ac226-3063-4d53-8a3a-dccca6e8cade] Running
	I0116 03:50:34.155849  507510 retry.go:31] will retry after 13.760629262s: missing components: kube-apiserver, kube-controller-manager, kube-scheduler
	I0116 03:50:47.923537  507510 system_pods.go:86] 7 kube-system pods found
	I0116 03:50:47.923571  507510 system_pods.go:89] "coredns-5644d7b6d9-h85tj" [6ad3270c-e6c5-4ced-800f-f6e7960097ac] Running
	I0116 03:50:47.923577  507510 system_pods.go:89] "etcd-old-k8s-version-696770" [255a0b2a-df41-4621-8918-36e1b0c25c24] Running
	I0116 03:50:47.923582  507510 system_pods.go:89] "kube-apiserver-old-k8s-version-696770" [c682b257-d00b-4b4c-8089-cda1b9da538c] Running
	I0116 03:50:47.923585  507510 system_pods.go:89] "kube-proxy-rc8xt" [433f07f2-79e8-48f6-945a-af3dc0060920] Running
	I0116 03:50:47.923589  507510 system_pods.go:89] "kube-scheduler-old-k8s-version-696770" [af271425-aec7-45d9-97c5-9a033f13a41e] Running
	I0116 03:50:47.923599  507510 system_pods.go:89] "metrics-server-74d5856cc6-stvzf" [92ed6941-1071-4757-9279-144187442f64] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:50:47.923603  507510 system_pods.go:89] "storage-provisioner" [c45ac226-3063-4d53-8a3a-dccca6e8cade] Running
	I0116 03:50:47.923621  507510 retry.go:31] will retry after 15.810378345s: missing components: kube-controller-manager
	I0116 03:51:03.742786  507510 system_pods.go:86] 8 kube-system pods found
	I0116 03:51:03.742819  507510 system_pods.go:89] "coredns-5644d7b6d9-h85tj" [6ad3270c-e6c5-4ced-800f-f6e7960097ac] Running
	I0116 03:51:03.742825  507510 system_pods.go:89] "etcd-old-k8s-version-696770" [255a0b2a-df41-4621-8918-36e1b0c25c24] Running
	I0116 03:51:03.742830  507510 system_pods.go:89] "kube-apiserver-old-k8s-version-696770" [c682b257-d00b-4b4c-8089-cda1b9da538c] Running
	I0116 03:51:03.742835  507510 system_pods.go:89] "kube-controller-manager-old-k8s-version-696770" [87b5ef82-182e-458d-b521-05a36d3d031b] Running
	I0116 03:51:03.742838  507510 system_pods.go:89] "kube-proxy-rc8xt" [433f07f2-79e8-48f6-945a-af3dc0060920] Running
	I0116 03:51:03.742842  507510 system_pods.go:89] "kube-scheduler-old-k8s-version-696770" [af271425-aec7-45d9-97c5-9a033f13a41e] Running
	I0116 03:51:03.742849  507510 system_pods.go:89] "metrics-server-74d5856cc6-stvzf" [92ed6941-1071-4757-9279-144187442f64] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:51:03.742854  507510 system_pods.go:89] "storage-provisioner" [c45ac226-3063-4d53-8a3a-dccca6e8cade] Running
	I0116 03:51:03.742865  507510 system_pods.go:126] duration metric: took 1m13.061883389s to wait for k8s-apps to be running ...
	I0116 03:51:03.742872  507510 system_svc.go:44] waiting for kubelet service to be running ....
	I0116 03:51:03.742921  507510 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 03:51:03.761399  507510 system_svc.go:56] duration metric: took 18.514586ms WaitForService to wait for kubelet.
	I0116 03:51:03.761433  507510 kubeadm.go:581] duration metric: took 1m18.22783177s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0116 03:51:03.761461  507510 node_conditions.go:102] verifying NodePressure condition ...
	I0116 03:51:03.765716  507510 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 03:51:03.765760  507510 node_conditions.go:123] node cpu capacity is 2
	I0116 03:51:03.765777  507510 node_conditions.go:105] duration metric: took 4.309124ms to run NodePressure ...
	I0116 03:51:03.765794  507510 start.go:228] waiting for startup goroutines ...
	I0116 03:51:03.765803  507510 start.go:233] waiting for cluster config update ...
	I0116 03:51:03.765817  507510 start.go:242] writing updated cluster config ...
	I0116 03:51:03.766160  507510 ssh_runner.go:195] Run: rm -f paused
	I0116 03:51:03.822502  507510 start.go:600] kubectl: 1.29.0, cluster: 1.16.0 (minor skew: 13)
	I0116 03:51:03.824687  507510 out.go:177] 
	W0116 03:51:03.826162  507510 out.go:239] ! /usr/local/bin/kubectl is version 1.29.0, which may have incompatibilities with Kubernetes 1.16.0.
	I0116 03:51:03.827659  507510 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I0116 03:51:03.829229  507510 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-696770" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	-- Journal begins at Tue 2024-01-16 03:43:37 UTC, ends at Tue 2024-01-16 04:00:05 UTC. --
	Jan 16 04:00:05 old-k8s-version-696770 crio[715]: time="2024-01-16 04:00:05.628113260Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705377605628097151,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:114472,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=54879040-b695-42ac-aa64-a4a2cbbbe879 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 04:00:05 old-k8s-version-696770 crio[715]: time="2024-01-16 04:00:05.628693443Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=82a524cf-6552-4e5b-bef0-01e4cb4a6f2c name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 04:00:05 old-k8s-version-696770 crio[715]: time="2024-01-16 04:00:05.628737077Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=82a524cf-6552-4e5b-bef0-01e4cb4a6f2c name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 04:00:05 old-k8s-version-696770 crio[715]: time="2024-01-16 04:00:05.629021739Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fada0ec84e00786bf019ab1f553ec3d864423c48bb87affed94838adc2503641,PodSandboxId:89368a33b413a031776b03bf7add26e9c79142662e1221fa4cc76f1718d344bb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705376987692443699,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c45ac226-3063-4d53-8a3a-dccca6e8cade,},Annotations:map[string]string{io.kubernetes.container.hash: 823c00b3,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e31b7408ac6d61505c93bd0ee056db05b8065f72033976a79869da0eda891df,PodSandboxId:5d09494a90a0cb05911113aafe4d91d159618e87cab28dee6a6162ef7216a797,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1705376987522398389,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rc8xt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 433f07f2-79e8-48f6-945a-af3dc0060920,},Annotations:map[string]string{io.kubernetes.container.hash: d3f5792e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e647824330362812e08d6745872f94a8bb6a235bdfa053f314b6143f71c061ca,PodSandboxId:89eb0f0d7f5a24ccf7d98b5002c6de23763f95a4495d4f746ecf5e6d6dd831f0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1705376986940017978,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-h85tj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ad3270c-e6c5-4ced-800f-f6e7960097ac,},Annotations:map[string]string{io.kubernetes.container.hash: 538949d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contain
erPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19e1d64306cf954490dc1f2a424f07458029dbafd0c148d61121eb78ffe07f81,PodSandboxId:2fc6ce30e4206ca86317c370088e1505cc72ef648a405aef78dbc31f33d36330,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1705376959826024337,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-696770,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a30650f5c98b0779ac54af241e6784fa,},Annotations:map[st
ring]string{io.kubernetes.container.hash: c56983aa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2348243d3645e74feecf7dbce8cad9836e6b19b215dbd48b3af5ad146519ed8,PodSandboxId:b0ecc1dcc677088cae112b1b2a9d9c4eeb2497163231241d47f262b7492156d1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1705376958167239765,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-696770,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437b
cb4e,},Annotations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ea8576da0bd6430244ffb39b2b3cc282498618fb7d0433d709260bc314ec01e,PodSandboxId:d16982b673cfc28ee712c4e347e21556f5742db17a2c54e531a04b40f063f404,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1705376957992492062,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-696770,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Anno
tations:map[string]string{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d5060e371a626a1d05b1d8a95d487bb85d58e876a3aa66f970430bd665c02b4,PodSandboxId:4e001b6729d354c208b50660b6cff7f6c0731e487a28bf38f458b0d3dcb3e4cb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1705376957324344362,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-696770,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4ce14449bc68e7b0e24764365ae6c5c,},Annotations:map
[string]string{io.kubernetes.container.hash: 476b3afe,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89bde80dffc0fb01a1f5dce6520043ba8f919cd5e304eb1efa323075fa8bf331,PodSandboxId:4e001b6729d354c208b50660b6cff7f6c0731e487a28bf38f458b0d3dcb3e4cb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_EXITED,CreatedAt:1705376649920960106,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-696770,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4ce14449bc68e7b0e24764365ae6c5c,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 476b3afe,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=82a524cf-6552-4e5b-bef0-01e4cb4a6f2c name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 04:00:05 old-k8s-version-696770 crio[715]: time="2024-01-16 04:00:05.671024254Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=9255fb7b-ac5b-4b91-a67b-3ccf60329cdd name=/runtime.v1.RuntimeService/Version
	Jan 16 04:00:05 old-k8s-version-696770 crio[715]: time="2024-01-16 04:00:05.671111771Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=9255fb7b-ac5b-4b91-a67b-3ccf60329cdd name=/runtime.v1.RuntimeService/Version
	Jan 16 04:00:05 old-k8s-version-696770 crio[715]: time="2024-01-16 04:00:05.681361157Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=861dbb4b-c585-4f95-8413-3cb4f3d9d959 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 04:00:05 old-k8s-version-696770 crio[715]: time="2024-01-16 04:00:05.681901770Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705377605681885217,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:114472,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=861dbb4b-c585-4f95-8413-3cb4f3d9d959 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 04:00:05 old-k8s-version-696770 crio[715]: time="2024-01-16 04:00:05.683027948Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=fa20ebba-b62c-4a40-b9f8-31cf40541264 name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 04:00:05 old-k8s-version-696770 crio[715]: time="2024-01-16 04:00:05.683082622Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=fa20ebba-b62c-4a40-b9f8-31cf40541264 name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 04:00:05 old-k8s-version-696770 crio[715]: time="2024-01-16 04:00:05.683300072Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fada0ec84e00786bf019ab1f553ec3d864423c48bb87affed94838adc2503641,PodSandboxId:89368a33b413a031776b03bf7add26e9c79142662e1221fa4cc76f1718d344bb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705376987692443699,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c45ac226-3063-4d53-8a3a-dccca6e8cade,},Annotations:map[string]string{io.kubernetes.container.hash: 823c00b3,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e31b7408ac6d61505c93bd0ee056db05b8065f72033976a79869da0eda891df,PodSandboxId:5d09494a90a0cb05911113aafe4d91d159618e87cab28dee6a6162ef7216a797,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1705376987522398389,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rc8xt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 433f07f2-79e8-48f6-945a-af3dc0060920,},Annotations:map[string]string{io.kubernetes.container.hash: d3f5792e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e647824330362812e08d6745872f94a8bb6a235bdfa053f314b6143f71c061ca,PodSandboxId:89eb0f0d7f5a24ccf7d98b5002c6de23763f95a4495d4f746ecf5e6d6dd831f0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1705376986940017978,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-h85tj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ad3270c-e6c5-4ced-800f-f6e7960097ac,},Annotations:map[string]string{io.kubernetes.container.hash: 538949d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contain
erPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19e1d64306cf954490dc1f2a424f07458029dbafd0c148d61121eb78ffe07f81,PodSandboxId:2fc6ce30e4206ca86317c370088e1505cc72ef648a405aef78dbc31f33d36330,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1705376959826024337,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-696770,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a30650f5c98b0779ac54af241e6784fa,},Annotations:map[st
ring]string{io.kubernetes.container.hash: c56983aa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2348243d3645e74feecf7dbce8cad9836e6b19b215dbd48b3af5ad146519ed8,PodSandboxId:b0ecc1dcc677088cae112b1b2a9d9c4eeb2497163231241d47f262b7492156d1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1705376958167239765,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-696770,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437b
cb4e,},Annotations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ea8576da0bd6430244ffb39b2b3cc282498618fb7d0433d709260bc314ec01e,PodSandboxId:d16982b673cfc28ee712c4e347e21556f5742db17a2c54e531a04b40f063f404,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1705376957992492062,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-696770,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Anno
tations:map[string]string{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d5060e371a626a1d05b1d8a95d487bb85d58e876a3aa66f970430bd665c02b4,PodSandboxId:4e001b6729d354c208b50660b6cff7f6c0731e487a28bf38f458b0d3dcb3e4cb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1705376957324344362,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-696770,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4ce14449bc68e7b0e24764365ae6c5c,},Annotations:map
[string]string{io.kubernetes.container.hash: 476b3afe,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89bde80dffc0fb01a1f5dce6520043ba8f919cd5e304eb1efa323075fa8bf331,PodSandboxId:4e001b6729d354c208b50660b6cff7f6c0731e487a28bf38f458b0d3dcb3e4cb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_EXITED,CreatedAt:1705376649920960106,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-696770,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4ce14449bc68e7b0e24764365ae6c5c,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 476b3afe,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=fa20ebba-b62c-4a40-b9f8-31cf40541264 name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 04:00:05 old-k8s-version-696770 crio[715]: time="2024-01-16 04:00:05.728112457Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=44755c80-5a08-4756-bb6e-bd3b98994ce3 name=/runtime.v1.RuntimeService/Version
	Jan 16 04:00:05 old-k8s-version-696770 crio[715]: time="2024-01-16 04:00:05.728175154Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=44755c80-5a08-4756-bb6e-bd3b98994ce3 name=/runtime.v1.RuntimeService/Version
	Jan 16 04:00:05 old-k8s-version-696770 crio[715]: time="2024-01-16 04:00:05.730175238Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=136b6f21-0bc6-433c-bd4c-c33d355591ee name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 04:00:05 old-k8s-version-696770 crio[715]: time="2024-01-16 04:00:05.730563581Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705377605730548612,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:114472,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=136b6f21-0bc6-433c-bd4c-c33d355591ee name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 04:00:05 old-k8s-version-696770 crio[715]: time="2024-01-16 04:00:05.731134710Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=3979e8e2-cafa-495e-b008-76492704469d name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 04:00:05 old-k8s-version-696770 crio[715]: time="2024-01-16 04:00:05.731179624Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=3979e8e2-cafa-495e-b008-76492704469d name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 04:00:05 old-k8s-version-696770 crio[715]: time="2024-01-16 04:00:05.732015602Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fada0ec84e00786bf019ab1f553ec3d864423c48bb87affed94838adc2503641,PodSandboxId:89368a33b413a031776b03bf7add26e9c79142662e1221fa4cc76f1718d344bb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705376987692443699,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c45ac226-3063-4d53-8a3a-dccca6e8cade,},Annotations:map[string]string{io.kubernetes.container.hash: 823c00b3,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e31b7408ac6d61505c93bd0ee056db05b8065f72033976a79869da0eda891df,PodSandboxId:5d09494a90a0cb05911113aafe4d91d159618e87cab28dee6a6162ef7216a797,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1705376987522398389,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rc8xt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 433f07f2-79e8-48f6-945a-af3dc0060920,},Annotations:map[string]string{io.kubernetes.container.hash: d3f5792e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e647824330362812e08d6745872f94a8bb6a235bdfa053f314b6143f71c061ca,PodSandboxId:89eb0f0d7f5a24ccf7d98b5002c6de23763f95a4495d4f746ecf5e6d6dd831f0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1705376986940017978,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-h85tj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ad3270c-e6c5-4ced-800f-f6e7960097ac,},Annotations:map[string]string{io.kubernetes.container.hash: 538949d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contain
erPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19e1d64306cf954490dc1f2a424f07458029dbafd0c148d61121eb78ffe07f81,PodSandboxId:2fc6ce30e4206ca86317c370088e1505cc72ef648a405aef78dbc31f33d36330,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1705376959826024337,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-696770,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a30650f5c98b0779ac54af241e6784fa,},Annotations:map[st
ring]string{io.kubernetes.container.hash: c56983aa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2348243d3645e74feecf7dbce8cad9836e6b19b215dbd48b3af5ad146519ed8,PodSandboxId:b0ecc1dcc677088cae112b1b2a9d9c4eeb2497163231241d47f262b7492156d1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1705376958167239765,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-696770,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437b
cb4e,},Annotations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ea8576da0bd6430244ffb39b2b3cc282498618fb7d0433d709260bc314ec01e,PodSandboxId:d16982b673cfc28ee712c4e347e21556f5742db17a2c54e531a04b40f063f404,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1705376957992492062,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-696770,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Anno
tations:map[string]string{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d5060e371a626a1d05b1d8a95d487bb85d58e876a3aa66f970430bd665c02b4,PodSandboxId:4e001b6729d354c208b50660b6cff7f6c0731e487a28bf38f458b0d3dcb3e4cb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1705376957324344362,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-696770,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4ce14449bc68e7b0e24764365ae6c5c,},Annotations:map
[string]string{io.kubernetes.container.hash: 476b3afe,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89bde80dffc0fb01a1f5dce6520043ba8f919cd5e304eb1efa323075fa8bf331,PodSandboxId:4e001b6729d354c208b50660b6cff7f6c0731e487a28bf38f458b0d3dcb3e4cb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_EXITED,CreatedAt:1705376649920960106,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-696770,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4ce14449bc68e7b0e24764365ae6c5c,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 476b3afe,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=3979e8e2-cafa-495e-b008-76492704469d name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 04:00:05 old-k8s-version-696770 crio[715]: time="2024-01-16 04:00:05.771276171Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=9437ea97-6a36-427d-9cf2-11e029e41caf name=/runtime.v1.RuntimeService/Version
	Jan 16 04:00:05 old-k8s-version-696770 crio[715]: time="2024-01-16 04:00:05.771360415Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=9437ea97-6a36-427d-9cf2-11e029e41caf name=/runtime.v1.RuntimeService/Version
	Jan 16 04:00:05 old-k8s-version-696770 crio[715]: time="2024-01-16 04:00:05.772682302Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=a14ffcae-6aa0-4fc9-9843-a74b2212e361 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 04:00:05 old-k8s-version-696770 crio[715]: time="2024-01-16 04:00:05.773183801Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705377605773167202,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:114472,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=a14ffcae-6aa0-4fc9-9843-a74b2212e361 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 04:00:05 old-k8s-version-696770 crio[715]: time="2024-01-16 04:00:05.773777425Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=d606b772-79d4-45a5-aca9-82ea666cbff8 name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 04:00:05 old-k8s-version-696770 crio[715]: time="2024-01-16 04:00:05.773896469Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=d606b772-79d4-45a5-aca9-82ea666cbff8 name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 04:00:05 old-k8s-version-696770 crio[715]: time="2024-01-16 04:00:05.774181702Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fada0ec84e00786bf019ab1f553ec3d864423c48bb87affed94838adc2503641,PodSandboxId:89368a33b413a031776b03bf7add26e9c79142662e1221fa4cc76f1718d344bb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705376987692443699,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c45ac226-3063-4d53-8a3a-dccca6e8cade,},Annotations:map[string]string{io.kubernetes.container.hash: 823c00b3,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e31b7408ac6d61505c93bd0ee056db05b8065f72033976a79869da0eda891df,PodSandboxId:5d09494a90a0cb05911113aafe4d91d159618e87cab28dee6a6162ef7216a797,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1705376987522398389,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rc8xt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 433f07f2-79e8-48f6-945a-af3dc0060920,},Annotations:map[string]string{io.kubernetes.container.hash: d3f5792e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e647824330362812e08d6745872f94a8bb6a235bdfa053f314b6143f71c061ca,PodSandboxId:89eb0f0d7f5a24ccf7d98b5002c6de23763f95a4495d4f746ecf5e6d6dd831f0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1705376986940017978,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-h85tj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ad3270c-e6c5-4ced-800f-f6e7960097ac,},Annotations:map[string]string{io.kubernetes.container.hash: 538949d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contain
erPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19e1d64306cf954490dc1f2a424f07458029dbafd0c148d61121eb78ffe07f81,PodSandboxId:2fc6ce30e4206ca86317c370088e1505cc72ef648a405aef78dbc31f33d36330,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1705376959826024337,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-696770,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a30650f5c98b0779ac54af241e6784fa,},Annotations:map[st
ring]string{io.kubernetes.container.hash: c56983aa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2348243d3645e74feecf7dbce8cad9836e6b19b215dbd48b3af5ad146519ed8,PodSandboxId:b0ecc1dcc677088cae112b1b2a9d9c4eeb2497163231241d47f262b7492156d1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1705376958167239765,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-696770,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437b
cb4e,},Annotations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ea8576da0bd6430244ffb39b2b3cc282498618fb7d0433d709260bc314ec01e,PodSandboxId:d16982b673cfc28ee712c4e347e21556f5742db17a2c54e531a04b40f063f404,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1705376957992492062,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-696770,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Anno
tations:map[string]string{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d5060e371a626a1d05b1d8a95d487bb85d58e876a3aa66f970430bd665c02b4,PodSandboxId:4e001b6729d354c208b50660b6cff7f6c0731e487a28bf38f458b0d3dcb3e4cb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1705376957324344362,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-696770,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4ce14449bc68e7b0e24764365ae6c5c,},Annotations:map
[string]string{io.kubernetes.container.hash: 476b3afe,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89bde80dffc0fb01a1f5dce6520043ba8f919cd5e304eb1efa323075fa8bf331,PodSandboxId:4e001b6729d354c208b50660b6cff7f6c0731e487a28bf38f458b0d3dcb3e4cb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_EXITED,CreatedAt:1705376649920960106,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-696770,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4ce14449bc68e7b0e24764365ae6c5c,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 476b3afe,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=d606b772-79d4-45a5-aca9-82ea666cbff8 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	fada0ec84e007       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   10 minutes ago      Running             storage-provisioner       0                   89368a33b413a       storage-provisioner
	8e31b7408ac6d       c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384   10 minutes ago      Running             kube-proxy                0                   5d09494a90a0c       kube-proxy-rc8xt
	e647824330362       bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b   10 minutes ago      Running             coredns                   0                   89eb0f0d7f5a2       coredns-5644d7b6d9-h85tj
	19e1d64306cf9       b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed   10 minutes ago      Running             etcd                      0                   2fc6ce30e4206       etcd-old-k8s-version-696770
	d2348243d3645       06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d   10 minutes ago      Running             kube-controller-manager   0                   b0ecc1dcc6770       kube-controller-manager-old-k8s-version-696770
	6ea8576da0bd6       301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a   10 minutes ago      Running             kube-scheduler            0                   d16982b673cfc       kube-scheduler-old-k8s-version-696770
	1d5060e371a62       b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e   10 minutes ago      Running             kube-apiserver            1                   4e001b6729d35       kube-apiserver-old-k8s-version-696770
	89bde80dffc0f       b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e   15 minutes ago      Exited              kube-apiserver            0                   4e001b6729d35       kube-apiserver-old-k8s-version-696770
	
	
	==> coredns [e647824330362812e08d6745872f94a8bb6a235bdfa053f314b6143f71c061ca] <==
	.:53
	2024-01-16T03:49:47.339Z [INFO] plugin/reload: Running configuration MD5 = 7bc8613a521eb1a1737fc3e7c0fea3ca
	2024-01-16T03:49:47.339Z [INFO] CoreDNS-1.6.2
	2024-01-16T03:49:47.339Z [INFO] linux/amd64, go1.12.8, 795a3eb
	CoreDNS-1.6.2
	linux/amd64, go1.12.8, 795a3eb
	2024-01-16T03:49:48.349Z [INFO] 127.0.0.1:43045 - 5720 "HINFO IN 1115856692617163381.8279879992632663013. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009874865s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-696770
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-696770
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6e8fa5f64d0e7272be43ff25ed3826261f0a2578
	                    minikube.k8s.io/name=old-k8s-version-696770
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_16T03_49_29_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Jan 2024 03:49:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Jan 2024 03:59:25 +0000   Tue, 16 Jan 2024 03:49:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Jan 2024 03:59:25 +0000   Tue, 16 Jan 2024 03:49:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Jan 2024 03:59:25 +0000   Tue, 16 Jan 2024 03:49:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Jan 2024 03:59:25 +0000   Tue, 16 Jan 2024 03:49:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.167
	  Hostname:    old-k8s-version-696770
	Capacity:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	Allocatable:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	System Info:
	 Machine ID:                 6aadd6cb8e644a759c807837a966bad8
	 System UUID:                6aadd6cb-8e64-4a75-9c80-7837a966bad8
	 Boot ID:                    179152c5-5431-4fb3-9296-b52c3ea84c5e
	 Kernel Version:             5.10.57
	 OS Image:                   Buildroot 2021.02.12
	 Operating System:           linux
	 Architecture:               amd64
	 Container Runtime Version:  cri-o://1.24.1
	 Kubelet Version:            v1.16.0
	 Kube-Proxy Version:         v1.16.0
	PodCIDR:                     10.244.0.0/24
	PodCIDRs:                    10.244.0.0/24
	Non-terminated Pods:         (8 in total)
	  Namespace                  Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                  ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                coredns-5644d7b6d9-h85tj                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     10m
	  kube-system                etcd-old-k8s-version-696770                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m43s
	  kube-system                kube-apiserver-old-k8s-version-696770             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m26s
	  kube-system                kube-controller-manager-old-k8s-version-696770    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m18s
	  kube-system                kube-proxy-rc8xt                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                kube-scheduler-old-k8s-version-696770             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m28s
	  kube-system                metrics-server-74d5856cc6-stvzf                   100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         10m
	  kube-system                storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                750m (37%)   0 (0%)
	  memory             270Mi (12%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From                                Message
	  ----    ------                   ----               ----                                -------
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)  kubelet, old-k8s-version-696770     Node old-k8s-version-696770 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet, old-k8s-version-696770     Node old-k8s-version-696770 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x7 over 10m)  kubelet, old-k8s-version-696770     Node old-k8s-version-696770 status is now: NodeHasSufficientPID
	  Normal  Starting                 10m                kube-proxy, old-k8s-version-696770  Starting kube-proxy.
	
	
	==> dmesg <==
	[Jan16 03:43] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.069288] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.557314] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.521156] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.158814] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.623626] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.209369] systemd-fstab-generator[640]: Ignoring "noauto" for root device
	[  +0.107918] systemd-fstab-generator[651]: Ignoring "noauto" for root device
	[  +0.155143] systemd-fstab-generator[664]: Ignoring "noauto" for root device
	[  +0.120971] systemd-fstab-generator[675]: Ignoring "noauto" for root device
	[  +0.224849] systemd-fstab-generator[699]: Ignoring "noauto" for root device
	[Jan16 03:44] systemd-fstab-generator[1017]: Ignoring "noauto" for root device
	[  +0.480691] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[ +28.245235] kauditd_printk_skb: 13 callbacks suppressed
	[  +6.064811] kauditd_printk_skb: 2 callbacks suppressed
	[Jan16 03:49] systemd-fstab-generator[3093]: Ignoring "noauto" for root device
	[  +0.769070] kauditd_printk_skb: 6 callbacks suppressed
	[Jan16 03:50] kauditd_printk_skb: 4 callbacks suppressed
	
	
	==> etcd [19e1d64306cf954490dc1f2a424f07458029dbafd0c148d61121eb78ffe07f81] <==
	2024-01-16 03:49:20.019205 I | raft: newRaft ec72a22dc6b2db62 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	2024-01-16 03:49:20.019221 I | raft: ec72a22dc6b2db62 became follower at term 1
	2024-01-16 03:49:20.028570 W | auth: simple token is not cryptographically signed
	2024-01-16 03:49:20.035668 I | etcdserver: starting server... [version: 3.3.15, cluster version: to_be_decided]
	2024-01-16 03:49:20.036964 I | etcdserver: ec72a22dc6b2db62 as single-node; fast-forwarding 9 ticks (election ticks 10)
	2024-01-16 03:49:20.037511 I | etcdserver/membership: added member ec72a22dc6b2db62 [https://192.168.61.167:2380] to cluster c318a198f49b85fe
	2024-01-16 03:49:20.039619 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, ca = , trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2024-01-16 03:49:20.040253 I | embed: listening for metrics on http://192.168.61.167:2381
	2024-01-16 03:49:20.040581 I | embed: listening for metrics on http://127.0.0.1:2381
	2024-01-16 03:49:20.519980 I | raft: ec72a22dc6b2db62 is starting a new election at term 1
	2024-01-16 03:49:20.520033 I | raft: ec72a22dc6b2db62 became candidate at term 2
	2024-01-16 03:49:20.520046 I | raft: ec72a22dc6b2db62 received MsgVoteResp from ec72a22dc6b2db62 at term 2
	2024-01-16 03:49:20.520056 I | raft: ec72a22dc6b2db62 became leader at term 2
	2024-01-16 03:49:20.520060 I | raft: raft.node: ec72a22dc6b2db62 elected leader ec72a22dc6b2db62 at term 2
	2024-01-16 03:49:20.520509 I | etcdserver: setting up the initial cluster version to 3.3
	2024-01-16 03:49:20.522257 N | etcdserver/membership: set the initial cluster version to 3.3
	2024-01-16 03:49:20.522372 I | etcdserver/api: enabled capabilities for version 3.3
	2024-01-16 03:49:20.522448 I | etcdserver: published {Name:old-k8s-version-696770 ClientURLs:[https://192.168.61.167:2379]} to cluster c318a198f49b85fe
	2024-01-16 03:49:20.522546 I | embed: ready to serve client requests
	2024-01-16 03:49:20.522703 I | embed: ready to serve client requests
	2024-01-16 03:49:20.524551 I | embed: serving client requests on 192.168.61.167:2379
	2024-01-16 03:49:20.527728 I | embed: serving client requests on 127.0.0.1:2379
	2024-01-16 03:49:45.990414 W | etcdserver: request "header:<ID:15808352743548917341 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/kube-proxy-rc8xt.17aab7552447a0c4\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/kube-proxy-rc8xt.17aab7552447a0c4\" value_size:428 lease:6584980706694141528 >> failure:<>>" with result "size:16" took too long (401.165768ms) to execute
	2024-01-16 03:59:21.180440 I | mvcc: store.index: compact 645
	2024-01-16 03:59:21.182573 I | mvcc: finished scheduled compaction at 645 (took 1.634604ms)
	
	
	==> kernel <==
	 04:00:06 up 16 min,  0 users,  load average: 0.05, 0.24, 0.22
	Linux old-k8s-version-696770 5.10.57 #1 SMP Thu Dec 28 22:04:21 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [1d5060e371a626a1d05b1d8a95d487bb85d58e876a3aa66f970430bd665c02b4] <==
	I0116 03:52:48.239719       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0116 03:52:48.240114       1 handler_proxy.go:99] no RequestInfo found in the context
	E0116 03:52:48.240215       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0116 03:52:48.240238       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0116 03:54:25.633025       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0116 03:54:25.633165       1 handler_proxy.go:99] no RequestInfo found in the context
	E0116 03:54:25.633280       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0116 03:54:25.633320       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0116 03:55:25.633750       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0116 03:55:25.633949       1 handler_proxy.go:99] no RequestInfo found in the context
	E0116 03:55:25.633990       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0116 03:55:25.633998       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0116 03:57:25.634526       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0116 03:57:25.635107       1 handler_proxy.go:99] no RequestInfo found in the context
	E0116 03:57:25.635300       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0116 03:57:25.635349       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0116 03:59:25.636378       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0116 03:59:25.636769       1 handler_proxy.go:99] no RequestInfo found in the context
	E0116 03:59:25.637071       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0116 03:59:25.637105       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-apiserver [89bde80dffc0fb01a1f5dce6520043ba8f919cd5e304eb1efa323075fa8bf331] <==
	W0116 03:49:14.029642       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0116 03:49:14.029661       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0116 03:49:14.029679       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0116 03:49:14.029695       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0116 03:49:14.029898       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0116 03:49:14.029980       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0116 03:49:14.030002       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0116 03:49:14.030018       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0116 03:49:14.030040       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0116 03:49:14.030060       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0116 03:49:14.030094       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0116 03:49:14.030627       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0116 03:49:14.030659       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0116 03:49:14.030686       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0116 03:49:14.030714       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0116 03:49:14.030740       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0116 03:49:14.030766       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0116 03:49:14.030873       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0116 03:49:14.030892       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0116 03:49:14.030914       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0116 03:49:14.030913       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0116 03:49:15.307770       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0116 03:49:15.325136       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0116 03:49:15.328633       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0116 03:49:15.330117       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	
	
	==> kube-controller-manager [d2348243d3645e74feecf7dbce8cad9836e6b19b215dbd48b3af5ad146519ed8] <==
	E0116 03:53:47.116637       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0116 03:54:01.127345       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0116 03:54:17.369128       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0116 03:54:33.129717       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0116 03:54:47.621896       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0116 03:55:05.132285       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0116 03:55:17.874151       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0116 03:55:37.135215       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0116 03:55:48.126264       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0116 03:56:09.138355       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0116 03:56:18.378494       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0116 03:56:41.141191       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0116 03:56:48.630610       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0116 03:57:13.143645       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0116 03:57:18.882699       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0116 03:57:45.145647       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0116 03:57:49.135284       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0116 03:58:17.148135       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0116 03:58:19.387534       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0116 03:58:49.150693       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0116 03:58:49.639605       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	E0116 03:59:19.891668       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0116 03:59:21.153160       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0116 03:59:50.143506       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0116 03:59:53.155482       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	
	==> kube-proxy [8e31b7408ac6d61505c93bd0ee056db05b8065f72033976a79869da0eda891df] <==
	W0116 03:49:48.047713       1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
	I0116 03:49:48.071964       1 node.go:135] Successfully retrieved node IP: 192.168.61.167
	I0116 03:49:48.072113       1 server_others.go:149] Using iptables Proxier.
	I0116 03:49:48.074442       1 server.go:529] Version: v1.16.0
	I0116 03:49:48.076547       1 config.go:313] Starting service config controller
	I0116 03:49:48.076624       1 shared_informer.go:197] Waiting for caches to sync for service config
	I0116 03:49:48.078089       1 config.go:131] Starting endpoints config controller
	I0116 03:49:48.078146       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I0116 03:49:48.178325       1 shared_informer.go:204] Caches are synced for service config 
	I0116 03:49:48.183518       1 shared_informer.go:204] Caches are synced for endpoints config 
	
	
	==> kube-scheduler [6ea8576da0bd6430244ffb39b2b3cc282498618fb7d0433d709260bc314ec01e] <==
	W0116 03:49:24.645969       1 authentication.go:79] Authentication is disabled
	I0116 03:49:24.645980       1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
	I0116 03:49:24.646348       1 secure_serving.go:123] Serving securely on 127.0.0.1:10259
	E0116 03:49:24.678950       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0116 03:49:24.768757       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0116 03:49:24.782388       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0116 03:49:24.782966       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0116 03:49:24.784345       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0116 03:49:24.787511       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0116 03:49:24.787750       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0116 03:49:24.793275       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0116 03:49:24.793424       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0116 03:49:24.793491       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0116 03:49:24.795267       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0116 03:49:25.681908       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0116 03:49:25.771303       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0116 03:49:25.785399       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0116 03:49:25.799176       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0116 03:49:25.799429       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0116 03:49:25.800003       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0116 03:49:25.800389       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0116 03:49:25.800740       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0116 03:49:25.804682       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0116 03:49:25.806926       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0116 03:49:25.808214       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	
	
	==> kubelet <==
	-- Journal begins at Tue 2024-01-16 03:43:37 UTC, ends at Tue 2024-01-16 04:00:06 UTC. --
	Jan 16 03:55:39 old-k8s-version-696770 kubelet[3099]: E0116 03:55:39.695107    3099 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jan 16 03:55:39 old-k8s-version-696770 kubelet[3099]: E0116 03:55:39.695212    3099 kuberuntime_image.go:50] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jan 16 03:55:39 old-k8s-version-696770 kubelet[3099]: E0116 03:55:39.695272    3099 kuberuntime_manager.go:783] container start failed: ErrImagePull: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jan 16 03:55:39 old-k8s-version-696770 kubelet[3099]: E0116 03:55:39.695303    3099 pod_workers.go:191] Error syncing pod 92ed6941-1071-4757-9279-144187442f64 ("metrics-server-74d5856cc6-stvzf_kube-system(92ed6941-1071-4757-9279-144187442f64)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	Jan 16 03:55:54 old-k8s-version-696770 kubelet[3099]: E0116 03:55:54.686025    3099 pod_workers.go:191] Error syncing pod 92ed6941-1071-4757-9279-144187442f64 ("metrics-server-74d5856cc6-stvzf_kube-system(92ed6941-1071-4757-9279-144187442f64)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 16 03:56:09 old-k8s-version-696770 kubelet[3099]: E0116 03:56:09.684049    3099 pod_workers.go:191] Error syncing pod 92ed6941-1071-4757-9279-144187442f64 ("metrics-server-74d5856cc6-stvzf_kube-system(92ed6941-1071-4757-9279-144187442f64)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 16 03:56:21 old-k8s-version-696770 kubelet[3099]: E0116 03:56:21.683637    3099 pod_workers.go:191] Error syncing pod 92ed6941-1071-4757-9279-144187442f64 ("metrics-server-74d5856cc6-stvzf_kube-system(92ed6941-1071-4757-9279-144187442f64)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 16 03:56:35 old-k8s-version-696770 kubelet[3099]: E0116 03:56:35.686951    3099 pod_workers.go:191] Error syncing pod 92ed6941-1071-4757-9279-144187442f64 ("metrics-server-74d5856cc6-stvzf_kube-system(92ed6941-1071-4757-9279-144187442f64)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 16 03:56:47 old-k8s-version-696770 kubelet[3099]: E0116 03:56:47.683763    3099 pod_workers.go:191] Error syncing pod 92ed6941-1071-4757-9279-144187442f64 ("metrics-server-74d5856cc6-stvzf_kube-system(92ed6941-1071-4757-9279-144187442f64)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 16 03:57:00 old-k8s-version-696770 kubelet[3099]: E0116 03:57:00.685069    3099 pod_workers.go:191] Error syncing pod 92ed6941-1071-4757-9279-144187442f64 ("metrics-server-74d5856cc6-stvzf_kube-system(92ed6941-1071-4757-9279-144187442f64)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 16 03:57:12 old-k8s-version-696770 kubelet[3099]: E0116 03:57:12.683597    3099 pod_workers.go:191] Error syncing pod 92ed6941-1071-4757-9279-144187442f64 ("metrics-server-74d5856cc6-stvzf_kube-system(92ed6941-1071-4757-9279-144187442f64)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 16 03:57:24 old-k8s-version-696770 kubelet[3099]: E0116 03:57:24.683741    3099 pod_workers.go:191] Error syncing pod 92ed6941-1071-4757-9279-144187442f64 ("metrics-server-74d5856cc6-stvzf_kube-system(92ed6941-1071-4757-9279-144187442f64)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 16 03:57:35 old-k8s-version-696770 kubelet[3099]: E0116 03:57:35.683728    3099 pod_workers.go:191] Error syncing pod 92ed6941-1071-4757-9279-144187442f64 ("metrics-server-74d5856cc6-stvzf_kube-system(92ed6941-1071-4757-9279-144187442f64)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 16 03:57:50 old-k8s-version-696770 kubelet[3099]: E0116 03:57:50.684175    3099 pod_workers.go:191] Error syncing pod 92ed6941-1071-4757-9279-144187442f64 ("metrics-server-74d5856cc6-stvzf_kube-system(92ed6941-1071-4757-9279-144187442f64)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 16 03:58:05 old-k8s-version-696770 kubelet[3099]: E0116 03:58:05.683858    3099 pod_workers.go:191] Error syncing pod 92ed6941-1071-4757-9279-144187442f64 ("metrics-server-74d5856cc6-stvzf_kube-system(92ed6941-1071-4757-9279-144187442f64)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 16 03:58:20 old-k8s-version-696770 kubelet[3099]: E0116 03:58:20.683407    3099 pod_workers.go:191] Error syncing pod 92ed6941-1071-4757-9279-144187442f64 ("metrics-server-74d5856cc6-stvzf_kube-system(92ed6941-1071-4757-9279-144187442f64)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 16 03:58:33 old-k8s-version-696770 kubelet[3099]: E0116 03:58:33.683765    3099 pod_workers.go:191] Error syncing pod 92ed6941-1071-4757-9279-144187442f64 ("metrics-server-74d5856cc6-stvzf_kube-system(92ed6941-1071-4757-9279-144187442f64)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 16 03:58:46 old-k8s-version-696770 kubelet[3099]: E0116 03:58:46.685683    3099 pod_workers.go:191] Error syncing pod 92ed6941-1071-4757-9279-144187442f64 ("metrics-server-74d5856cc6-stvzf_kube-system(92ed6941-1071-4757-9279-144187442f64)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 16 03:58:58 old-k8s-version-696770 kubelet[3099]: E0116 03:58:58.684606    3099 pod_workers.go:191] Error syncing pod 92ed6941-1071-4757-9279-144187442f64 ("metrics-server-74d5856cc6-stvzf_kube-system(92ed6941-1071-4757-9279-144187442f64)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 16 03:59:10 old-k8s-version-696770 kubelet[3099]: E0116 03:59:10.683736    3099 pod_workers.go:191] Error syncing pod 92ed6941-1071-4757-9279-144187442f64 ("metrics-server-74d5856cc6-stvzf_kube-system(92ed6941-1071-4757-9279-144187442f64)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 16 03:59:16 old-k8s-version-696770 kubelet[3099]: E0116 03:59:16.813146    3099 container_manager_linux.go:510] failed to find cgroups of kubelet - cpu and memory cgroup hierarchy not unified.  cpu: /, memory: /system.slice/kubelet.service
	Jan 16 03:59:25 old-k8s-version-696770 kubelet[3099]: E0116 03:59:25.683682    3099 pod_workers.go:191] Error syncing pod 92ed6941-1071-4757-9279-144187442f64 ("metrics-server-74d5856cc6-stvzf_kube-system(92ed6941-1071-4757-9279-144187442f64)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 16 03:59:38 old-k8s-version-696770 kubelet[3099]: E0116 03:59:38.683565    3099 pod_workers.go:191] Error syncing pod 92ed6941-1071-4757-9279-144187442f64 ("metrics-server-74d5856cc6-stvzf_kube-system(92ed6941-1071-4757-9279-144187442f64)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 16 03:59:51 old-k8s-version-696770 kubelet[3099]: E0116 03:59:51.683635    3099 pod_workers.go:191] Error syncing pod 92ed6941-1071-4757-9279-144187442f64 ("metrics-server-74d5856cc6-stvzf_kube-system(92ed6941-1071-4757-9279-144187442f64)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 16 04:00:02 old-k8s-version-696770 kubelet[3099]: E0116 04:00:02.684335    3099 pod_workers.go:191] Error syncing pod 92ed6941-1071-4757-9279-144187442f64 ("metrics-server-74d5856cc6-stvzf_kube-system(92ed6941-1071-4757-9279-144187442f64)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	
	
	==> storage-provisioner [fada0ec84e00786bf019ab1f553ec3d864423c48bb87affed94838adc2503641] <==
	I0116 03:49:48.018162       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0116 03:49:48.032416       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0116 03:49:48.032755       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0116 03:49:48.051910       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0116 03:49:48.052397       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-696770_13756098-faf2-41ee-ad13-f44428773837!
	I0116 03:49:48.060292       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"8f74b8f6-0dff-418f-9281-00d4c0973e04", APIVersion:"v1", ResourceVersion:"399", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-696770_13756098-faf2-41ee-ad13-f44428773837 became leader
	I0116 03:49:48.153487       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-696770_13756098-faf2-41ee-ad13-f44428773837!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-696770 -n old-k8s-version-696770
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-696770 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-74d5856cc6-stvzf
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-696770 describe pod metrics-server-74d5856cc6-stvzf
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-696770 describe pod metrics-server-74d5856cc6-stvzf: exit status 1 (81.28476ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-74d5856cc6-stvzf" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-696770 describe pod metrics-server-74d5856cc6-stvzf: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.48s)
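The kubelet log above shows the metrics-server pod stuck in ImagePullBackOff because this test run deliberately pointed the MetricsServer registry at the unreachable fake.domain (see the "addons enable metrics-server ... --registries=MetricsServer=fake.domain" entry in the Audit table). A minimal way to inspect that state by hand, offered only as a sketch (it assumes the pod still exists and carries the addon's usual k8s-app=metrics-server label, which this report does not show directly):

	# list the metrics-server pod and its node/IP
	kubectl --context old-k8s-version-696770 -n kube-system get pods -l k8s-app=metrics-server -o wide
	# describe shows the Failed/BackOff events with the fake.domain pull error seen in the kubelet log
	kubectl --context old-k8s-version-696770 -n kube-system describe pod -l k8s-app=metrics-server

Note that the post-mortem's own "kubectl describe pod metrics-server-74d5856cc6-stvzf" returned NotFound because it queried the default namespace; the pod lives in kube-system.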

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (372.96s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-666547 -n no-preload-666547
start_stop_delete_test.go:287: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-01-16 04:03:46.806403552 +0000 UTC m=+5370.744626168
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-666547 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-666547 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.278µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-666547 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
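The assertion at start_stop_delete_test.go:297 compares the dashboard-metrics-scraper deployment's image against " registry.k8s.io/echoserver:1.4", but the describe call above had already hit the context deadline, so no deployment info was captured. A quick manual check of the same field, as a sketch assuming the cluster is still reachable under this kubeconfig context:

	# print the container image(s) the dashboard-metrics-scraper deployment is actually using
	kubectl --context no-preload-666547 -n kubernetes-dashboard get deploy dashboard-metrics-scraper -o jsonpath='{.spec.template.spec.containers[*].image}'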
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-666547 -n no-preload-666547
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-666547 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-666547 logs -n 25: (1.489979491s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cert-options-977008 -- sudo                         | cert-options-977008          | jenkins | v1.32.0 | 16 Jan 24 03:34 UTC | 16 Jan 24 03:34 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                              |         |         |                     |                     |
	| delete  | -p cert-options-977008                                 | cert-options-977008          | jenkins | v1.32.0 | 16 Jan 24 03:34 UTC | 16 Jan 24 03:34 UTC |
	| start   | -p embed-certs-615980                                  | embed-certs-615980           | jenkins | v1.32.0 | 16 Jan 24 03:34 UTC | 16 Jan 24 03:35 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-690771                              | cert-expiration-690771       | jenkins | v1.32.0 | 16 Jan 24 03:35 UTC | 16 Jan 24 03:36 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-615980            | embed-certs-615980           | jenkins | v1.32.0 | 16 Jan 24 03:35 UTC | 16 Jan 24 03:35 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-615980                                  | embed-certs-615980           | jenkins | v1.32.0 | 16 Jan 24 03:35 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-666547             | no-preload-666547            | jenkins | v1.32.0 | 16 Jan 24 03:35 UTC | 16 Jan 24 03:35 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-666547                                   | no-preload-666547            | jenkins | v1.32.0 | 16 Jan 24 03:35 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-696770        | old-k8s-version-696770       | jenkins | v1.32.0 | 16 Jan 24 03:36 UTC | 16 Jan 24 03:36 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-696770                              | old-k8s-version-696770       | jenkins | v1.32.0 | 16 Jan 24 03:36 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-690771                              | cert-expiration-690771       | jenkins | v1.32.0 | 16 Jan 24 03:36 UTC | 16 Jan 24 03:36 UTC |
	| delete  | -p                                                     | disable-driver-mounts-673948 | jenkins | v1.32.0 | 16 Jan 24 03:36 UTC | 16 Jan 24 03:36 UTC |
	|         | disable-driver-mounts-673948                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-434445 | jenkins | v1.32.0 | 16 Jan 24 03:36 UTC | 16 Jan 24 03:37 UTC |
	|         | default-k8s-diff-port-434445                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-434445  | default-k8s-diff-port-434445 | jenkins | v1.32.0 | 16 Jan 24 03:37 UTC | 16 Jan 24 03:37 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-434445 | jenkins | v1.32.0 | 16 Jan 24 03:37 UTC |                     |
	|         | default-k8s-diff-port-434445                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-615980                 | embed-certs-615980           | jenkins | v1.32.0 | 16 Jan 24 03:38 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-666547                  | no-preload-666547            | jenkins | v1.32.0 | 16 Jan 24 03:38 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-615980                                  | embed-certs-615980           | jenkins | v1.32.0 | 16 Jan 24 03:38 UTC | 16 Jan 24 03:49 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| start   | -p no-preload-666547                                   | no-preload-666547            | jenkins | v1.32.0 | 16 Jan 24 03:38 UTC | 16 Jan 24 03:48 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-696770             | old-k8s-version-696770       | jenkins | v1.32.0 | 16 Jan 24 03:38 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-696770                              | old-k8s-version-696770       | jenkins | v1.32.0 | 16 Jan 24 03:38 UTC | 16 Jan 24 03:51 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-434445       | default-k8s-diff-port-434445 | jenkins | v1.32.0 | 16 Jan 24 03:40 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-434445 | jenkins | v1.32.0 | 16 Jan 24 03:40 UTC | 16 Jan 24 03:49 UTC |
	|         | default-k8s-diff-port-434445                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-696770                              | old-k8s-version-696770       | jenkins | v1.32.0 | 16 Jan 24 04:03 UTC | 16 Jan 24 04:03 UTC |
	| start   | -p newest-cni-889166 --memory=2200 --alsologtostderr   | newest-cni-889166            | jenkins | v1.32.0 | 16 Jan 24 04:03 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/16 04:03:34
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0116 04:03:34.036778  512568 out.go:296] Setting OutFile to fd 1 ...
	I0116 04:03:34.036934  512568 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 04:03:34.036945  512568 out.go:309] Setting ErrFile to fd 2...
	I0116 04:03:34.036950  512568 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 04:03:34.037170  512568 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17965-468241/.minikube/bin
	I0116 04:03:34.038121  512568 out.go:303] Setting JSON to false
	I0116 04:03:34.039526  512568 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":17166,"bootTime":1705360648,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1048-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0116 04:03:34.039618  512568 start.go:138] virtualization: kvm guest
	I0116 04:03:34.042762  512568 out.go:177] * [newest-cni-889166] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0116 04:03:34.044904  512568 out.go:177]   - MINIKUBE_LOCATION=17965
	I0116 04:03:34.044879  512568 notify.go:220] Checking for updates...
	I0116 04:03:34.046716  512568 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0116 04:03:34.048407  512568 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17965-468241/kubeconfig
	I0116 04:03:34.050182  512568 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17965-468241/.minikube
	I0116 04:03:34.051911  512568 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0116 04:03:34.053424  512568 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0116 04:03:34.055380  512568 config.go:182] Loaded profile config "default-k8s-diff-port-434445": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 04:03:34.055517  512568 config.go:182] Loaded profile config "embed-certs-615980": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 04:03:34.055680  512568 config.go:182] Loaded profile config "no-preload-666547": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0116 04:03:34.055807  512568 driver.go:392] Setting default libvirt URI to qemu:///system
	I0116 04:03:34.097910  512568 out.go:177] * Using the kvm2 driver based on user configuration
	I0116 04:03:34.099572  512568 start.go:298] selected driver: kvm2
	I0116 04:03:34.099596  512568 start.go:902] validating driver "kvm2" against <nil>
	I0116 04:03:34.099612  512568 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0116 04:03:34.100558  512568 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0116 04:03:34.100669  512568 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17965-468241/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0116 04:03:34.117563  512568 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0116 04:03:34.117651  512568 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	W0116 04:03:34.117688  512568 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0116 04:03:34.118087  512568 start_flags.go:946] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0116 04:03:34.118202  512568 cni.go:84] Creating CNI manager for ""
	I0116 04:03:34.118225  512568 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 04:03:34.118250  512568 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0116 04:03:34.118269  512568 start_flags.go:321] config:
	{Name:newest-cni-889166 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-889166 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRunti
me:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP
: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 04:03:34.118501  512568 iso.go:125] acquiring lock: {Name:mk2f3231f0eeeb23816ac363851489181398d1c6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0116 04:03:34.120787  512568 out.go:177] * Starting control plane node newest-cni-889166 in cluster newest-cni-889166
	I0116 04:03:34.122233  512568 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0116 04:03:34.122295  512568 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17965-468241/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4
	I0116 04:03:34.122328  512568 cache.go:56] Caching tarball of preloaded images
	I0116 04:03:34.122448  512568 preload.go:174] Found /home/jenkins/minikube-integration/17965-468241/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0116 04:03:34.122465  512568 cache.go:59] Finished verifying existence of preloaded tar for  v1.29.0-rc.2 on crio
	I0116 04:03:34.122674  512568 profile.go:148] Saving config to /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/newest-cni-889166/config.json ...
	I0116 04:03:34.122720  512568 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/newest-cni-889166/config.json: {Name:mkcc7239d99c88974397f82982e24f492019bde3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 04:03:34.122952  512568 start.go:365] acquiring machines lock for newest-cni-889166: {Name:mk901e5fae8c1a578d989b520053c709bdbdcf06 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0116 04:03:34.122993  512568 start.go:369] acquired machines lock for "newest-cni-889166" in 22.482µs
	I0116 04:03:34.123021  512568 start.go:93] Provisioning new machine with config: &{Name:newest-cni-889166 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCo
nfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-889166 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/
home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0116 04:03:34.123100  512568 start.go:125] createHost starting for "" (driver="kvm2")
	I0116 04:03:34.125422  512568 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0116 04:03:34.125665  512568 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 04:03:34.125722  512568 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 04:03:34.144129  512568 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43171
	I0116 04:03:34.144759  512568 main.go:141] libmachine: () Calling .GetVersion
	I0116 04:03:34.145384  512568 main.go:141] libmachine: Using API Version  1
	I0116 04:03:34.145413  512568 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 04:03:34.145868  512568 main.go:141] libmachine: () Calling .GetMachineName
	I0116 04:03:34.146119  512568 main.go:141] libmachine: (newest-cni-889166) Calling .GetMachineName
	I0116 04:03:34.146299  512568 main.go:141] libmachine: (newest-cni-889166) Calling .DriverName
	I0116 04:03:34.146487  512568 start.go:159] libmachine.API.Create for "newest-cni-889166" (driver="kvm2")
	I0116 04:03:34.146564  512568 client.go:168] LocalClient.Create starting
	I0116 04:03:34.146606  512568 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem
	I0116 04:03:34.146659  512568 main.go:141] libmachine: Decoding PEM data...
	I0116 04:03:34.146679  512568 main.go:141] libmachine: Parsing certificate...
	I0116 04:03:34.146756  512568 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17965-468241/.minikube/certs/cert.pem
	I0116 04:03:34.146777  512568 main.go:141] libmachine: Decoding PEM data...
	I0116 04:03:34.146788  512568 main.go:141] libmachine: Parsing certificate...
	I0116 04:03:34.146805  512568 main.go:141] libmachine: Running pre-create checks...
	I0116 04:03:34.146819  512568 main.go:141] libmachine: (newest-cni-889166) Calling .PreCreateCheck
	I0116 04:03:34.147425  512568 main.go:141] libmachine: (newest-cni-889166) Calling .GetConfigRaw
	I0116 04:03:34.148084  512568 main.go:141] libmachine: Creating machine...
	I0116 04:03:34.148107  512568 main.go:141] libmachine: (newest-cni-889166) Calling .Create
	I0116 04:03:34.148286  512568 main.go:141] libmachine: (newest-cni-889166) Creating KVM machine...
	I0116 04:03:34.149816  512568 main.go:141] libmachine: (newest-cni-889166) DBG | found existing default KVM network
	I0116 04:03:34.151351  512568 main.go:141] libmachine: (newest-cni-889166) DBG | I0116 04:03:34.151148  512590 network.go:214] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:fe:ff:80} reservation:<nil>}
	I0116 04:03:34.152316  512568 main.go:141] libmachine: (newest-cni-889166) DBG | I0116 04:03:34.152210  512590 network.go:214] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:44:aa:d8} reservation:<nil>}
	I0116 04:03:34.153711  512568 main.go:141] libmachine: (newest-cni-889166) DBG | I0116 04:03:34.153621  512590 network.go:209] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000283230}
	I0116 04:03:34.160712  512568 main.go:141] libmachine: (newest-cni-889166) DBG | trying to create private KVM network mk-newest-cni-889166 192.168.61.0/24...
	I0116 04:03:34.251508  512568 main.go:141] libmachine: (newest-cni-889166) DBG | private KVM network mk-newest-cni-889166 192.168.61.0/24 created
	I0116 04:03:34.251553  512568 main.go:141] libmachine: (newest-cni-889166) Setting up store path in /home/jenkins/minikube-integration/17965-468241/.minikube/machines/newest-cni-889166 ...
	I0116 04:03:34.251603  512568 main.go:141] libmachine: (newest-cni-889166) DBG | I0116 04:03:34.251445  512590 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17965-468241/.minikube
	I0116 04:03:34.251630  512568 main.go:141] libmachine: (newest-cni-889166) Building disk image from file:///home/jenkins/minikube-integration/17965-468241/.minikube/cache/iso/amd64/minikube-v1.32.1-1703784139-17866-amd64.iso
	I0116 04:03:34.251750  512568 main.go:141] libmachine: (newest-cni-889166) Downloading /home/jenkins/minikube-integration/17965-468241/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17965-468241/.minikube/cache/iso/amd64/minikube-v1.32.1-1703784139-17866-amd64.iso...
	I0116 04:03:34.492494  512568 main.go:141] libmachine: (newest-cni-889166) DBG | I0116 04:03:34.492361  512590 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17965-468241/.minikube/machines/newest-cni-889166/id_rsa...
	I0116 04:03:34.633147  512568 main.go:141] libmachine: (newest-cni-889166) DBG | I0116 04:03:34.632981  512590 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17965-468241/.minikube/machines/newest-cni-889166/newest-cni-889166.rawdisk...
	I0116 04:03:34.633203  512568 main.go:141] libmachine: (newest-cni-889166) DBG | Writing magic tar header
	I0116 04:03:34.633226  512568 main.go:141] libmachine: (newest-cni-889166) DBG | Writing SSH key tar header
	I0116 04:03:34.633239  512568 main.go:141] libmachine: (newest-cni-889166) DBG | I0116 04:03:34.633186  512590 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17965-468241/.minikube/machines/newest-cni-889166 ...
	I0116 04:03:34.633493  512568 main.go:141] libmachine: (newest-cni-889166) Setting executable bit set on /home/jenkins/minikube-integration/17965-468241/.minikube/machines/newest-cni-889166 (perms=drwx------)
	I0116 04:03:34.633533  512568 main.go:141] libmachine: (newest-cni-889166) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17965-468241/.minikube/machines/newest-cni-889166
	I0116 04:03:34.633548  512568 main.go:141] libmachine: (newest-cni-889166) Setting executable bit set on /home/jenkins/minikube-integration/17965-468241/.minikube/machines (perms=drwxr-xr-x)
	I0116 04:03:34.633563  512568 main.go:141] libmachine: (newest-cni-889166) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17965-468241/.minikube/machines
	I0116 04:03:34.633582  512568 main.go:141] libmachine: (newest-cni-889166) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17965-468241/.minikube
	I0116 04:03:34.633597  512568 main.go:141] libmachine: (newest-cni-889166) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17965-468241
	I0116 04:03:34.633613  512568 main.go:141] libmachine: (newest-cni-889166) Setting executable bit set on /home/jenkins/minikube-integration/17965-468241/.minikube (perms=drwxr-xr-x)
	I0116 04:03:34.633635  512568 main.go:141] libmachine: (newest-cni-889166) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0116 04:03:34.633649  512568 main.go:141] libmachine: (newest-cni-889166) Setting executable bit set on /home/jenkins/minikube-integration/17965-468241 (perms=drwxrwxr-x)
	I0116 04:03:34.633668  512568 main.go:141] libmachine: (newest-cni-889166) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0116 04:03:34.633683  512568 main.go:141] libmachine: (newest-cni-889166) DBG | Checking permissions on dir: /home/jenkins
	I0116 04:03:34.633695  512568 main.go:141] libmachine: (newest-cni-889166) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0116 04:03:34.633710  512568 main.go:141] libmachine: (newest-cni-889166) Creating domain...
	I0116 04:03:34.633728  512568 main.go:141] libmachine: (newest-cni-889166) DBG | Checking permissions on dir: /home
	I0116 04:03:34.633748  512568 main.go:141] libmachine: (newest-cni-889166) DBG | Skipping /home - not owner
	I0116 04:03:34.634951  512568 main.go:141] libmachine: (newest-cni-889166) define libvirt domain using xml: 
	I0116 04:03:34.634974  512568 main.go:141] libmachine: (newest-cni-889166) <domain type='kvm'>
	I0116 04:03:34.634982  512568 main.go:141] libmachine: (newest-cni-889166)   <name>newest-cni-889166</name>
	I0116 04:03:34.634992  512568 main.go:141] libmachine: (newest-cni-889166)   <memory unit='MiB'>2200</memory>
	I0116 04:03:34.634998  512568 main.go:141] libmachine: (newest-cni-889166)   <vcpu>2</vcpu>
	I0116 04:03:34.635003  512568 main.go:141] libmachine: (newest-cni-889166)   <features>
	I0116 04:03:34.635009  512568 main.go:141] libmachine: (newest-cni-889166)     <acpi/>
	I0116 04:03:34.635014  512568 main.go:141] libmachine: (newest-cni-889166)     <apic/>
	I0116 04:03:34.635020  512568 main.go:141] libmachine: (newest-cni-889166)     <pae/>
	I0116 04:03:34.635025  512568 main.go:141] libmachine: (newest-cni-889166)     
	I0116 04:03:34.635042  512568 main.go:141] libmachine: (newest-cni-889166)   </features>
	I0116 04:03:34.635051  512568 main.go:141] libmachine: (newest-cni-889166)   <cpu mode='host-passthrough'>
	I0116 04:03:34.635057  512568 main.go:141] libmachine: (newest-cni-889166)   
	I0116 04:03:34.635064  512568 main.go:141] libmachine: (newest-cni-889166)   </cpu>
	I0116 04:03:34.635090  512568 main.go:141] libmachine: (newest-cni-889166)   <os>
	I0116 04:03:34.635117  512568 main.go:141] libmachine: (newest-cni-889166)     <type>hvm</type>
	I0116 04:03:34.635125  512568 main.go:141] libmachine: (newest-cni-889166)     <boot dev='cdrom'/>
	I0116 04:03:34.635131  512568 main.go:141] libmachine: (newest-cni-889166)     <boot dev='hd'/>
	I0116 04:03:34.635138  512568 main.go:141] libmachine: (newest-cni-889166)     <bootmenu enable='no'/>
	I0116 04:03:34.635145  512568 main.go:141] libmachine: (newest-cni-889166)   </os>
	I0116 04:03:34.635151  512568 main.go:141] libmachine: (newest-cni-889166)   <devices>
	I0116 04:03:34.635160  512568 main.go:141] libmachine: (newest-cni-889166)     <disk type='file' device='cdrom'>
	I0116 04:03:34.635169  512568 main.go:141] libmachine: (newest-cni-889166)       <source file='/home/jenkins/minikube-integration/17965-468241/.minikube/machines/newest-cni-889166/boot2docker.iso'/>
	I0116 04:03:34.635181  512568 main.go:141] libmachine: (newest-cni-889166)       <target dev='hdc' bus='scsi'/>
	I0116 04:03:34.635188  512568 main.go:141] libmachine: (newest-cni-889166)       <readonly/>
	I0116 04:03:34.635194  512568 main.go:141] libmachine: (newest-cni-889166)     </disk>
	I0116 04:03:34.635203  512568 main.go:141] libmachine: (newest-cni-889166)     <disk type='file' device='disk'>
	I0116 04:03:34.635210  512568 main.go:141] libmachine: (newest-cni-889166)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0116 04:03:34.635221  512568 main.go:141] libmachine: (newest-cni-889166)       <source file='/home/jenkins/minikube-integration/17965-468241/.minikube/machines/newest-cni-889166/newest-cni-889166.rawdisk'/>
	I0116 04:03:34.635230  512568 main.go:141] libmachine: (newest-cni-889166)       <target dev='hda' bus='virtio'/>
	I0116 04:03:34.635236  512568 main.go:141] libmachine: (newest-cni-889166)     </disk>
	I0116 04:03:34.635255  512568 main.go:141] libmachine: (newest-cni-889166)     <interface type='network'>
	I0116 04:03:34.635286  512568 main.go:141] libmachine: (newest-cni-889166)       <source network='mk-newest-cni-889166'/>
	I0116 04:03:34.635326  512568 main.go:141] libmachine: (newest-cni-889166)       <model type='virtio'/>
	I0116 04:03:34.635342  512568 main.go:141] libmachine: (newest-cni-889166)     </interface>
	I0116 04:03:34.635357  512568 main.go:141] libmachine: (newest-cni-889166)     <interface type='network'>
	I0116 04:03:34.635372  512568 main.go:141] libmachine: (newest-cni-889166)       <source network='default'/>
	I0116 04:03:34.635386  512568 main.go:141] libmachine: (newest-cni-889166)       <model type='virtio'/>
	I0116 04:03:34.635400  512568 main.go:141] libmachine: (newest-cni-889166)     </interface>
	I0116 04:03:34.635427  512568 main.go:141] libmachine: (newest-cni-889166)     <serial type='pty'>
	I0116 04:03:34.635463  512568 main.go:141] libmachine: (newest-cni-889166)       <target port='0'/>
	I0116 04:03:34.635479  512568 main.go:141] libmachine: (newest-cni-889166)     </serial>
	I0116 04:03:34.635494  512568 main.go:141] libmachine: (newest-cni-889166)     <console type='pty'>
	I0116 04:03:34.635507  512568 main.go:141] libmachine: (newest-cni-889166)       <target type='serial' port='0'/>
	I0116 04:03:34.635521  512568 main.go:141] libmachine: (newest-cni-889166)     </console>
	I0116 04:03:34.635533  512568 main.go:141] libmachine: (newest-cni-889166)     <rng model='virtio'>
	I0116 04:03:34.635559  512568 main.go:141] libmachine: (newest-cni-889166)       <backend model='random'>/dev/random</backend>
	I0116 04:03:34.635575  512568 main.go:141] libmachine: (newest-cni-889166)     </rng>
	I0116 04:03:34.635589  512568 main.go:141] libmachine: (newest-cni-889166)     
	I0116 04:03:34.635601  512568 main.go:141] libmachine: (newest-cni-889166)     
	I0116 04:03:34.635615  512568 main.go:141] libmachine: (newest-cni-889166)   </devices>
	I0116 04:03:34.635626  512568 main.go:141] libmachine: (newest-cni-889166) </domain>
	I0116 04:03:34.635639  512568 main.go:141] libmachine: (newest-cni-889166) 
	I0116 04:03:34.640429  512568 main.go:141] libmachine: (newest-cni-889166) DBG | domain newest-cni-889166 has defined MAC address 52:54:00:e2:e7:3d in network default
	I0116 04:03:34.641114  512568 main.go:141] libmachine: (newest-cni-889166) DBG | domain newest-cni-889166 has defined MAC address 52:54:00:0d:88:aa in network mk-newest-cni-889166
	I0116 04:03:34.641140  512568 main.go:141] libmachine: (newest-cni-889166) Ensuring networks are active...
	I0116 04:03:34.641886  512568 main.go:141] libmachine: (newest-cni-889166) Ensuring network default is active
	I0116 04:03:34.642266  512568 main.go:141] libmachine: (newest-cni-889166) Ensuring network mk-newest-cni-889166 is active
	I0116 04:03:34.642836  512568 main.go:141] libmachine: (newest-cni-889166) Getting domain xml...
	I0116 04:03:34.643840  512568 main.go:141] libmachine: (newest-cni-889166) Creating domain...
	I0116 04:03:35.005660  512568 main.go:141] libmachine: (newest-cni-889166) Waiting to get IP...
	I0116 04:03:35.006592  512568 main.go:141] libmachine: (newest-cni-889166) DBG | domain newest-cni-889166 has defined MAC address 52:54:00:0d:88:aa in network mk-newest-cni-889166
	I0116 04:03:35.007171  512568 main.go:141] libmachine: (newest-cni-889166) DBG | unable to find current IP address of domain newest-cni-889166 in network mk-newest-cni-889166
	I0116 04:03:35.007287  512568 main.go:141] libmachine: (newest-cni-889166) DBG | I0116 04:03:35.007209  512590 retry.go:31] will retry after 255.739563ms: waiting for machine to come up
	I0116 04:03:35.264802  512568 main.go:141] libmachine: (newest-cni-889166) DBG | domain newest-cni-889166 has defined MAC address 52:54:00:0d:88:aa in network mk-newest-cni-889166
	I0116 04:03:35.265370  512568 main.go:141] libmachine: (newest-cni-889166) DBG | unable to find current IP address of domain newest-cni-889166 in network mk-newest-cni-889166
	I0116 04:03:35.265399  512568 main.go:141] libmachine: (newest-cni-889166) DBG | I0116 04:03:35.265317  512590 retry.go:31] will retry after 386.528087ms: waiting for machine to come up
	I0116 04:03:35.654023  512568 main.go:141] libmachine: (newest-cni-889166) DBG | domain newest-cni-889166 has defined MAC address 52:54:00:0d:88:aa in network mk-newest-cni-889166
	I0116 04:03:35.654648  512568 main.go:141] libmachine: (newest-cni-889166) DBG | unable to find current IP address of domain newest-cni-889166 in network mk-newest-cni-889166
	I0116 04:03:35.654673  512568 main.go:141] libmachine: (newest-cni-889166) DBG | I0116 04:03:35.654587  512590 retry.go:31] will retry after 368.305524ms: waiting for machine to come up
	I0116 04:03:36.024259  512568 main.go:141] libmachine: (newest-cni-889166) DBG | domain newest-cni-889166 has defined MAC address 52:54:00:0d:88:aa in network mk-newest-cni-889166
	I0116 04:03:36.024759  512568 main.go:141] libmachine: (newest-cni-889166) DBG | unable to find current IP address of domain newest-cni-889166 in network mk-newest-cni-889166
	I0116 04:03:36.024788  512568 main.go:141] libmachine: (newest-cni-889166) DBG | I0116 04:03:36.024715  512590 retry.go:31] will retry after 535.920371ms: waiting for machine to come up
	I0116 04:03:36.562408  512568 main.go:141] libmachine: (newest-cni-889166) DBG | domain newest-cni-889166 has defined MAC address 52:54:00:0d:88:aa in network mk-newest-cni-889166
	I0116 04:03:36.563087  512568 main.go:141] libmachine: (newest-cni-889166) DBG | unable to find current IP address of domain newest-cni-889166 in network mk-newest-cni-889166
	I0116 04:03:36.563126  512568 main.go:141] libmachine: (newest-cni-889166) DBG | I0116 04:03:36.563021  512590 retry.go:31] will retry after 758.160307ms: waiting for machine to come up
	I0116 04:03:37.322450  512568 main.go:141] libmachine: (newest-cni-889166) DBG | domain newest-cni-889166 has defined MAC address 52:54:00:0d:88:aa in network mk-newest-cni-889166
	I0116 04:03:37.322884  512568 main.go:141] libmachine: (newest-cni-889166) DBG | unable to find current IP address of domain newest-cni-889166 in network mk-newest-cni-889166
	I0116 04:03:37.322909  512568 main.go:141] libmachine: (newest-cni-889166) DBG | I0116 04:03:37.322819  512590 retry.go:31] will retry after 694.798856ms: waiting for machine to come up
	I0116 04:03:38.019568  512568 main.go:141] libmachine: (newest-cni-889166) DBG | domain newest-cni-889166 has defined MAC address 52:54:00:0d:88:aa in network mk-newest-cni-889166
	I0116 04:03:38.019972  512568 main.go:141] libmachine: (newest-cni-889166) DBG | unable to find current IP address of domain newest-cni-889166 in network mk-newest-cni-889166
	I0116 04:03:38.020002  512568 main.go:141] libmachine: (newest-cni-889166) DBG | I0116 04:03:38.019928  512590 retry.go:31] will retry after 857.291393ms: waiting for machine to come up
	I0116 04:03:38.878999  512568 main.go:141] libmachine: (newest-cni-889166) DBG | domain newest-cni-889166 has defined MAC address 52:54:00:0d:88:aa in network mk-newest-cni-889166
	I0116 04:03:38.879544  512568 main.go:141] libmachine: (newest-cni-889166) DBG | unable to find current IP address of domain newest-cni-889166 in network mk-newest-cni-889166
	I0116 04:03:38.879597  512568 main.go:141] libmachine: (newest-cni-889166) DBG | I0116 04:03:38.879469  512590 retry.go:31] will retry after 979.410603ms: waiting for machine to come up
	I0116 04:03:39.860984  512568 main.go:141] libmachine: (newest-cni-889166) DBG | domain newest-cni-889166 has defined MAC address 52:54:00:0d:88:aa in network mk-newest-cni-889166
	I0116 04:03:39.861480  512568 main.go:141] libmachine: (newest-cni-889166) DBG | unable to find current IP address of domain newest-cni-889166 in network mk-newest-cni-889166
	I0116 04:03:39.861511  512568 main.go:141] libmachine: (newest-cni-889166) DBG | I0116 04:03:39.861407  512590 retry.go:31] will retry after 1.440261013s: waiting for machine to come up
	I0116 04:03:41.303111  512568 main.go:141] libmachine: (newest-cni-889166) DBG | domain newest-cni-889166 has defined MAC address 52:54:00:0d:88:aa in network mk-newest-cni-889166
	I0116 04:03:41.303718  512568 main.go:141] libmachine: (newest-cni-889166) DBG | unable to find current IP address of domain newest-cni-889166 in network mk-newest-cni-889166
	I0116 04:03:41.303747  512568 main.go:141] libmachine: (newest-cni-889166) DBG | I0116 04:03:41.303655  512590 retry.go:31] will retry after 1.489358658s: waiting for machine to come up
	I0116 04:03:42.795019  512568 main.go:141] libmachine: (newest-cni-889166) DBG | domain newest-cni-889166 has defined MAC address 52:54:00:0d:88:aa in network mk-newest-cni-889166
	I0116 04:03:42.795581  512568 main.go:141] libmachine: (newest-cni-889166) DBG | unable to find current IP address of domain newest-cni-889166 in network mk-newest-cni-889166
	I0116 04:03:42.795613  512568 main.go:141] libmachine: (newest-cni-889166) DBG | I0116 04:03:42.795483  512590 retry.go:31] will retry after 2.4742819s: waiting for machine to come up
	
	
	==> CRI-O <==
	-- Journal begins at Tue 2024-01-16 03:43:14 UTC, ends at Tue 2024-01-16 04:03:47 UTC. --
	Jan 16 04:03:47 no-preload-666547 crio[708]: time="2024-01-16 04:03:47.700510262Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=4f6640f1-ba8d-49af-9713-8c66e99e54cf name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 04:03:47 no-preload-666547 crio[708]: time="2024-01-16 04:03:47.700695603Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b8effe57dcc580d7342341d0d35cd5e26c09ec7ad9caa9eef6f0cd1d2dac7cd9,PodSandboxId:0e795dcf8bdf3a6454fa74aa6c979dedb736fe886bb6577315992cb4b9c012ae,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1705376654821364357,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2aefa743-29a1-416e-be78-70088fafa6ae,},Annotations:map[string]string{io.kubernetes.container.hash: 94ee9ba2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c13ef036a1014a6bbc5c179a0889fb6ff615988713589da58240b7c637adf687,PodSandboxId:7aeadfa43aff8374db2de3bea11ab2f9e1af5b636830272eed8e50690bf6d19b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1705376653300473303,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-lr95b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15dc0b11-f7ec-4729-bbfa-79b9649fbad6,},Annotations:map[string]string{io.kubernetes.container.hash: 8f017cc4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"}
,{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7164c1b7732cf6d54642cb9ffc8d9f16696adc0e428c3fa16cf914420d73e1d,PodSandboxId:9caee2186036cf5d1af54ff074972975db598d01c1c01fc542bece51e9dfc11e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1705376646449512147,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: f4e1ba45-217d-41d5-b583-2f60044879bc,},Annotations:map[string]string{io.kubernetes.container.hash: 19213996,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59754e94eb3cf3fecfedb9077fbad2ecc2618c351fa9bfa3faed0d780fdd9ecb,PodSandboxId:9caee2186036cf5d1af54ff074972975db598d01c1c01fc542bece51e9dfc11e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_EXITED,CreatedAt:1705376645286742316,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: f4e1ba45-217d-41d5-b583-2f60044879bc,},Annotations:map[string]string{io.kubernetes.container.hash: 19213996,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eba2964f029ac61d9cfb59dc548ae05c832f9b23713f51924291fbfc5de985ed,PodSandboxId:4f18d218883ecd1534290daa913264acbf65c6e4a8ad219b1d044c0f6d74ab50,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:eba727be6ca4938cb3deec2eb6b7767e33995b2083144b23a8bbd138515b0dac,State:CONTAINER_RUNNING,CreatedAt:1705376645196881524,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dcmrn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e91c96f-cbc
5-424d-a09e-06e34bf7a2e2,},Annotations:map[string]string{io.kubernetes.container.hash: 97531c65,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33381edd7dded1b5ed158738429b4a7f1f03a6171f2af0832bb5a9864c950725,PodSandboxId:c8e26467ca147bef4373910a371d91fd745bfd4245dc6376ea28d683d6cb2355,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:a9aa934e24e72ecc1829948966f500cf16b2faa6b63de15f1b6de03cc074812f,State:CONTAINER_RUNNING,CreatedAt:1705376639199150218,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-666547,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2443bec62d62ae9acf
9e06442ec207b,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01aaf51cd40b9137f2395404bb15b7372927f81dc8a4e884600dc8cba9bbeb8e,PodSandboxId:0f9fe038b55a26455f4590da34c8e63e98329432435798e09fcfb15225cc873e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:061a72677dc6dc85cdb47cf61f4453c9be173c19c3d48e04b1d7a25f9b405fe7,State:CONTAINER_RUNNING,CreatedAt:1705376639067435928,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-666547,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5216174c445390e2fea097e8be444c01,},Annotations:map[string]string{io.ku
bernetes.container.hash: 54326c6b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:802d4c55aa04370ff079f09dfe4670f5dac1142ca6db880afa17be75f1d20a76,PodSandboxId:44000096b31d5b12f18dfbffbab8b31fb45b919c2f1d37d67b235b97d02cf247,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:a5352bf791d1f99a5175e68039fe9bf100ca3dfd2400901a3a5e9bf0c8b03203,State:CONTAINER_RUNNING,CreatedAt:1705376638959826759,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-666547,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55f2773f8d96731e38a7898f4239f269,},Annotation
s:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de79f87bc28449077936386f603cc5df933f63455bc96cf6ff7e7c6e4bd32bc4,PodSandboxId:7958f1d33200c86dba5755a1cc3afdc2e3f5ef304384d144976b0b39972f197e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:1d6d10016794014c57a966c3c40b351e068202efa97e3fec2d3387198e0cbad0,State:CONTAINER_RUNNING,CreatedAt:1705376638560110741,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-666547,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed86a5f0d67f31d8a75b6d9733aaf4df,},Annotations:map[strin
g]string{io.kubernetes.container.hash: f03ae34,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=4f6640f1-ba8d-49af-9713-8c66e99e54cf name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 04:03:47 no-preload-666547 crio[708]: time="2024-01-16 04:03:47.770778168Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=2022f839-a11b-4695-b0d3-bc7364761581 name=/runtime.v1.RuntimeService/Version
	Jan 16 04:03:47 no-preload-666547 crio[708]: time="2024-01-16 04:03:47.770837899Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=2022f839-a11b-4695-b0d3-bc7364761581 name=/runtime.v1.RuntimeService/Version
	Jan 16 04:03:47 no-preload-666547 crio[708]: time="2024-01-16 04:03:47.773333975Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=ce103119-68cd-4acf-9f5e-fdf772665819 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 04:03:47 no-preload-666547 crio[708]: time="2024-01-16 04:03:47.773914012Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705377827773888308,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97830,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=ce103119-68cd-4acf-9f5e-fdf772665819 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 04:03:47 no-preload-666547 crio[708]: time="2024-01-16 04:03:47.774900074Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=21a317e1-0fd2-453d-a455-7b362eb5bd4b name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 04:03:47 no-preload-666547 crio[708]: time="2024-01-16 04:03:47.774960625Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=21a317e1-0fd2-453d-a455-7b362eb5bd4b name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 04:03:47 no-preload-666547 crio[708]: time="2024-01-16 04:03:47.775325152Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b8effe57dcc580d7342341d0d35cd5e26c09ec7ad9caa9eef6f0cd1d2dac7cd9,PodSandboxId:0e795dcf8bdf3a6454fa74aa6c979dedb736fe886bb6577315992cb4b9c012ae,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1705376654821364357,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2aefa743-29a1-416e-be78-70088fafa6ae,},Annotations:map[string]string{io.kubernetes.container.hash: 94ee9ba2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c13ef036a1014a6bbc5c179a0889fb6ff615988713589da58240b7c637adf687,PodSandboxId:7aeadfa43aff8374db2de3bea11ab2f9e1af5b636830272eed8e50690bf6d19b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1705376653300473303,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-lr95b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15dc0b11-f7ec-4729-bbfa-79b9649fbad6,},Annotations:map[string]string{io.kubernetes.container.hash: 8f017cc4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"}
,{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7164c1b7732cf6d54642cb9ffc8d9f16696adc0e428c3fa16cf914420d73e1d,PodSandboxId:9caee2186036cf5d1af54ff074972975db598d01c1c01fc542bece51e9dfc11e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1705376646449512147,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: f4e1ba45-217d-41d5-b583-2f60044879bc,},Annotations:map[string]string{io.kubernetes.container.hash: 19213996,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59754e94eb3cf3fecfedb9077fbad2ecc2618c351fa9bfa3faed0d780fdd9ecb,PodSandboxId:9caee2186036cf5d1af54ff074972975db598d01c1c01fc542bece51e9dfc11e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_EXITED,CreatedAt:1705376645286742316,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: f4e1ba45-217d-41d5-b583-2f60044879bc,},Annotations:map[string]string{io.kubernetes.container.hash: 19213996,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eba2964f029ac61d9cfb59dc548ae05c832f9b23713f51924291fbfc5de985ed,PodSandboxId:4f18d218883ecd1534290daa913264acbf65c6e4a8ad219b1d044c0f6d74ab50,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:eba727be6ca4938cb3deec2eb6b7767e33995b2083144b23a8bbd138515b0dac,State:CONTAINER_RUNNING,CreatedAt:1705376645196881524,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dcmrn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e91c96f-cbc
5-424d-a09e-06e34bf7a2e2,},Annotations:map[string]string{io.kubernetes.container.hash: 97531c65,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33381edd7dded1b5ed158738429b4a7f1f03a6171f2af0832bb5a9864c950725,PodSandboxId:c8e26467ca147bef4373910a371d91fd745bfd4245dc6376ea28d683d6cb2355,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:a9aa934e24e72ecc1829948966f500cf16b2faa6b63de15f1b6de03cc074812f,State:CONTAINER_RUNNING,CreatedAt:1705376639199150218,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-666547,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2443bec62d62ae9acf
9e06442ec207b,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01aaf51cd40b9137f2395404bb15b7372927f81dc8a4e884600dc8cba9bbeb8e,PodSandboxId:0f9fe038b55a26455f4590da34c8e63e98329432435798e09fcfb15225cc873e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:061a72677dc6dc85cdb47cf61f4453c9be173c19c3d48e04b1d7a25f9b405fe7,State:CONTAINER_RUNNING,CreatedAt:1705376639067435928,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-666547,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5216174c445390e2fea097e8be444c01,},Annotations:map[string]string{io.ku
bernetes.container.hash: 54326c6b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:802d4c55aa04370ff079f09dfe4670f5dac1142ca6db880afa17be75f1d20a76,PodSandboxId:44000096b31d5b12f18dfbffbab8b31fb45b919c2f1d37d67b235b97d02cf247,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:a5352bf791d1f99a5175e68039fe9bf100ca3dfd2400901a3a5e9bf0c8b03203,State:CONTAINER_RUNNING,CreatedAt:1705376638959826759,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-666547,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55f2773f8d96731e38a7898f4239f269,},Annotation
s:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de79f87bc28449077936386f603cc5df933f63455bc96cf6ff7e7c6e4bd32bc4,PodSandboxId:7958f1d33200c86dba5755a1cc3afdc2e3f5ef304384d144976b0b39972f197e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:1d6d10016794014c57a966c3c40b351e068202efa97e3fec2d3387198e0cbad0,State:CONTAINER_RUNNING,CreatedAt:1705376638560110741,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-666547,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed86a5f0d67f31d8a75b6d9733aaf4df,},Annotations:map[strin
g]string{io.kubernetes.container.hash: f03ae34,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=21a317e1-0fd2-453d-a455-7b362eb5bd4b name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 04:03:47 no-preload-666547 crio[708]: time="2024-01-16 04:03:47.803730636Z" level=debug msg="Request: &StatusRequest{Verbose:false,}" file="go-grpc-middleware/chain.go:25" id=333ced60-f4b8-4389-b54f-b6eb265f91d7 name=/runtime.v1.RuntimeService/Status
	Jan 16 04:03:47 no-preload-666547 crio[708]: time="2024-01-16 04:03:47.803826127Z" level=debug msg="Response: &StatusResponse{Status:&RuntimeStatus{Conditions:[]*RuntimeCondition{&RuntimeCondition{Type:RuntimeReady,Status:true,Reason:,Message:,},&RuntimeCondition{Type:NetworkReady,Status:true,Reason:,Message:,},},},Info:map[string]string{},}" file="go-grpc-middleware/chain.go:25" id=333ced60-f4b8-4389-b54f-b6eb265f91d7 name=/runtime.v1.RuntimeService/Status
	Jan 16 04:03:47 no-preload-666547 crio[708]: time="2024-01-16 04:03:47.826314061Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=524d15bb-0c2f-4d4d-acbc-c62e1c50317b name=/runtime.v1.RuntimeService/Version
	Jan 16 04:03:47 no-preload-666547 crio[708]: time="2024-01-16 04:03:47.826403364Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=524d15bb-0c2f-4d4d-acbc-c62e1c50317b name=/runtime.v1.RuntimeService/Version
	Jan 16 04:03:47 no-preload-666547 crio[708]: time="2024-01-16 04:03:47.828770794Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=9ccae4d6-5058-4b1b-bdcf-9969cad5eefc name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 04:03:47 no-preload-666547 crio[708]: time="2024-01-16 04:03:47.829367961Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705377827829339012,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97830,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=9ccae4d6-5058-4b1b-bdcf-9969cad5eefc name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 04:03:47 no-preload-666547 crio[708]: time="2024-01-16 04:03:47.830424979Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=bbddcf6b-73ce-4dc8-bf9a-183962d4e25e name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 04:03:47 no-preload-666547 crio[708]: time="2024-01-16 04:03:47.830581125Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=bbddcf6b-73ce-4dc8-bf9a-183962d4e25e name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 04:03:47 no-preload-666547 crio[708]: time="2024-01-16 04:03:47.830894379Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b8effe57dcc580d7342341d0d35cd5e26c09ec7ad9caa9eef6f0cd1d2dac7cd9,PodSandboxId:0e795dcf8bdf3a6454fa74aa6c979dedb736fe886bb6577315992cb4b9c012ae,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1705376654821364357,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2aefa743-29a1-416e-be78-70088fafa6ae,},Annotations:map[string]string{io.kubernetes.container.hash: 94ee9ba2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c13ef036a1014a6bbc5c179a0889fb6ff615988713589da58240b7c637adf687,PodSandboxId:7aeadfa43aff8374db2de3bea11ab2f9e1af5b636830272eed8e50690bf6d19b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1705376653300473303,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-lr95b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15dc0b11-f7ec-4729-bbfa-79b9649fbad6,},Annotations:map[string]string{io.kubernetes.container.hash: 8f017cc4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"}
,{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7164c1b7732cf6d54642cb9ffc8d9f16696adc0e428c3fa16cf914420d73e1d,PodSandboxId:9caee2186036cf5d1af54ff074972975db598d01c1c01fc542bece51e9dfc11e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1705376646449512147,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: f4e1ba45-217d-41d5-b583-2f60044879bc,},Annotations:map[string]string{io.kubernetes.container.hash: 19213996,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59754e94eb3cf3fecfedb9077fbad2ecc2618c351fa9bfa3faed0d780fdd9ecb,PodSandboxId:9caee2186036cf5d1af54ff074972975db598d01c1c01fc542bece51e9dfc11e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_EXITED,CreatedAt:1705376645286742316,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: f4e1ba45-217d-41d5-b583-2f60044879bc,},Annotations:map[string]string{io.kubernetes.container.hash: 19213996,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eba2964f029ac61d9cfb59dc548ae05c832f9b23713f51924291fbfc5de985ed,PodSandboxId:4f18d218883ecd1534290daa913264acbf65c6e4a8ad219b1d044c0f6d74ab50,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:eba727be6ca4938cb3deec2eb6b7767e33995b2083144b23a8bbd138515b0dac,State:CONTAINER_RUNNING,CreatedAt:1705376645196881524,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dcmrn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e91c96f-cbc
5-424d-a09e-06e34bf7a2e2,},Annotations:map[string]string{io.kubernetes.container.hash: 97531c65,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33381edd7dded1b5ed158738429b4a7f1f03a6171f2af0832bb5a9864c950725,PodSandboxId:c8e26467ca147bef4373910a371d91fd745bfd4245dc6376ea28d683d6cb2355,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:a9aa934e24e72ecc1829948966f500cf16b2faa6b63de15f1b6de03cc074812f,State:CONTAINER_RUNNING,CreatedAt:1705376639199150218,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-666547,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2443bec62d62ae9acf
9e06442ec207b,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01aaf51cd40b9137f2395404bb15b7372927f81dc8a4e884600dc8cba9bbeb8e,PodSandboxId:0f9fe038b55a26455f4590da34c8e63e98329432435798e09fcfb15225cc873e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:061a72677dc6dc85cdb47cf61f4453c9be173c19c3d48e04b1d7a25f9b405fe7,State:CONTAINER_RUNNING,CreatedAt:1705376639067435928,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-666547,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5216174c445390e2fea097e8be444c01,},Annotations:map[string]string{io.ku
bernetes.container.hash: 54326c6b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:802d4c55aa04370ff079f09dfe4670f5dac1142ca6db880afa17be75f1d20a76,PodSandboxId:44000096b31d5b12f18dfbffbab8b31fb45b919c2f1d37d67b235b97d02cf247,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:a5352bf791d1f99a5175e68039fe9bf100ca3dfd2400901a3a5e9bf0c8b03203,State:CONTAINER_RUNNING,CreatedAt:1705376638959826759,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-666547,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55f2773f8d96731e38a7898f4239f269,},Annotation
s:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de79f87bc28449077936386f603cc5df933f63455bc96cf6ff7e7c6e4bd32bc4,PodSandboxId:7958f1d33200c86dba5755a1cc3afdc2e3f5ef304384d144976b0b39972f197e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:1d6d10016794014c57a966c3c40b351e068202efa97e3fec2d3387198e0cbad0,State:CONTAINER_RUNNING,CreatedAt:1705376638560110741,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-666547,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed86a5f0d67f31d8a75b6d9733aaf4df,},Annotations:map[strin
g]string{io.kubernetes.container.hash: f03ae34,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=bbddcf6b-73ce-4dc8-bf9a-183962d4e25e name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 04:03:47 no-preload-666547 crio[708]: time="2024-01-16 04:03:47.884092080Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=82a118e0-f3d5-4709-9823-d0570648a8cc name=/runtime.v1.RuntimeService/Version
	Jan 16 04:03:47 no-preload-666547 crio[708]: time="2024-01-16 04:03:47.884189495Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=82a118e0-f3d5-4709-9823-d0570648a8cc name=/runtime.v1.RuntimeService/Version
	Jan 16 04:03:47 no-preload-666547 crio[708]: time="2024-01-16 04:03:47.886675541Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=96d06194-d99e-420e-973a-cbf0e35e9788 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 04:03:47 no-preload-666547 crio[708]: time="2024-01-16 04:03:47.887386421Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705377827887356754,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97830,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=96d06194-d99e-420e-973a-cbf0e35e9788 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 04:03:47 no-preload-666547 crio[708]: time="2024-01-16 04:03:47.888208933Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=ad28a975-eeca-4946-87cb-0a5b02c6f7cb name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 04:03:47 no-preload-666547 crio[708]: time="2024-01-16 04:03:47.888277484Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=ad28a975-eeca-4946-87cb-0a5b02c6f7cb name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 04:03:47 no-preload-666547 crio[708]: time="2024-01-16 04:03:47.888560038Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b8effe57dcc580d7342341d0d35cd5e26c09ec7ad9caa9eef6f0cd1d2dac7cd9,PodSandboxId:0e795dcf8bdf3a6454fa74aa6c979dedb736fe886bb6577315992cb4b9c012ae,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1705376654821364357,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2aefa743-29a1-416e-be78-70088fafa6ae,},Annotations:map[string]string{io.kubernetes.container.hash: 94ee9ba2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c13ef036a1014a6bbc5c179a0889fb6ff615988713589da58240b7c637adf687,PodSandboxId:7aeadfa43aff8374db2de3bea11ab2f9e1af5b636830272eed8e50690bf6d19b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1705376653300473303,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-lr95b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15dc0b11-f7ec-4729-bbfa-79b9649fbad6,},Annotations:map[string]string{io.kubernetes.container.hash: 8f017cc4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"}
,{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7164c1b7732cf6d54642cb9ffc8d9f16696adc0e428c3fa16cf914420d73e1d,PodSandboxId:9caee2186036cf5d1af54ff074972975db598d01c1c01fc542bece51e9dfc11e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1705376646449512147,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: f4e1ba45-217d-41d5-b583-2f60044879bc,},Annotations:map[string]string{io.kubernetes.container.hash: 19213996,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59754e94eb3cf3fecfedb9077fbad2ecc2618c351fa9bfa3faed0d780fdd9ecb,PodSandboxId:9caee2186036cf5d1af54ff074972975db598d01c1c01fc542bece51e9dfc11e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_EXITED,CreatedAt:1705376645286742316,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: f4e1ba45-217d-41d5-b583-2f60044879bc,},Annotations:map[string]string{io.kubernetes.container.hash: 19213996,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eba2964f029ac61d9cfb59dc548ae05c832f9b23713f51924291fbfc5de985ed,PodSandboxId:4f18d218883ecd1534290daa913264acbf65c6e4a8ad219b1d044c0f6d74ab50,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:eba727be6ca4938cb3deec2eb6b7767e33995b2083144b23a8bbd138515b0dac,State:CONTAINER_RUNNING,CreatedAt:1705376645196881524,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dcmrn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e91c96f-cbc
5-424d-a09e-06e34bf7a2e2,},Annotations:map[string]string{io.kubernetes.container.hash: 97531c65,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33381edd7dded1b5ed158738429b4a7f1f03a6171f2af0832bb5a9864c950725,PodSandboxId:c8e26467ca147bef4373910a371d91fd745bfd4245dc6376ea28d683d6cb2355,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:a9aa934e24e72ecc1829948966f500cf16b2faa6b63de15f1b6de03cc074812f,State:CONTAINER_RUNNING,CreatedAt:1705376639199150218,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-666547,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2443bec62d62ae9acf
9e06442ec207b,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01aaf51cd40b9137f2395404bb15b7372927f81dc8a4e884600dc8cba9bbeb8e,PodSandboxId:0f9fe038b55a26455f4590da34c8e63e98329432435798e09fcfb15225cc873e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:061a72677dc6dc85cdb47cf61f4453c9be173c19c3d48e04b1d7a25f9b405fe7,State:CONTAINER_RUNNING,CreatedAt:1705376639067435928,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-666547,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5216174c445390e2fea097e8be444c01,},Annotations:map[string]string{io.ku
bernetes.container.hash: 54326c6b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:802d4c55aa04370ff079f09dfe4670f5dac1142ca6db880afa17be75f1d20a76,PodSandboxId:44000096b31d5b12f18dfbffbab8b31fb45b919c2f1d37d67b235b97d02cf247,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:a5352bf791d1f99a5175e68039fe9bf100ca3dfd2400901a3a5e9bf0c8b03203,State:CONTAINER_RUNNING,CreatedAt:1705376638959826759,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-666547,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55f2773f8d96731e38a7898f4239f269,},Annotation
s:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de79f87bc28449077936386f603cc5df933f63455bc96cf6ff7e7c6e4bd32bc4,PodSandboxId:7958f1d33200c86dba5755a1cc3afdc2e3f5ef304384d144976b0b39972f197e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:1d6d10016794014c57a966c3c40b351e068202efa97e3fec2d3387198e0cbad0,State:CONTAINER_RUNNING,CreatedAt:1705376638560110741,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-666547,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed86a5f0d67f31d8a75b6d9733aaf4df,},Annotations:map[strin
g]string{io.kubernetes.container.hash: f03ae34,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=ad28a975-eeca-4946-87cb-0a5b02c6f7cb name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	b8effe57dcc58       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   19 minutes ago      Running             busybox                   1                   0e795dcf8bdf3       busybox
	c13ef036a1014       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      19 minutes ago      Running             coredns                   1                   7aeadfa43aff8       coredns-76f75df574-lr95b
	b7164c1b7732c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      19 minutes ago      Running             storage-provisioner       3                   9caee2186036c       storage-provisioner
	59754e94eb3cf       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      19 minutes ago      Exited              storage-provisioner       2                   9caee2186036c       storage-provisioner
	eba2964f029ac       cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834                                      19 minutes ago      Running             kube-proxy                1                   4f18d218883ec       kube-proxy-dcmrn
	33381edd7dded       4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210                                      19 minutes ago      Running             kube-scheduler            1                   c8e26467ca147       kube-scheduler-no-preload-666547
	01aaf51cd40b9       a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7                                      19 minutes ago      Running             etcd                      1                   0f9fe038b55a2       etcd-no-preload-666547
	802d4c55aa043       d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d                                      19 minutes ago      Running             kube-controller-manager   1                   44000096b31d5       kube-controller-manager-no-preload-666547
	de79f87bc2844       bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f                                      19 minutes ago      Running             kube-apiserver            1                   7958f1d33200c       kube-apiserver-no-preload-666547
	
	
	==> coredns [c13ef036a1014a6bbc5c179a0889fb6ff615988713589da58240b7c637adf687] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:32801 - 5485 "HINFO IN 1722860781792914362.6159803807488865474. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012158586s
	
	
	==> describe nodes <==
	Name:               no-preload-666547
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-666547
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6e8fa5f64d0e7272be43ff25ed3826261f0a2578
	                    minikube.k8s.io/name=no-preload-666547
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_16T03_35_21_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Jan 2024 03:35:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-666547
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Jan 2024 04:03:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Jan 2024 03:59:52 +0000   Tue, 16 Jan 2024 03:35:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Jan 2024 03:59:52 +0000   Tue, 16 Jan 2024 03:35:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Jan 2024 03:59:52 +0000   Tue, 16 Jan 2024 03:35:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Jan 2024 03:59:52 +0000   Tue, 16 Jan 2024 03:44:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.103
	  Hostname:    no-preload-666547
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 9c1e40cee5004b05ba377b950d8ae425
	  System UUID:                9c1e40ce-e500-4b05-ba37-7b950d8ae425
	  Boot ID:                    9cfa70da-65ac-486c-ae2b-6c40e448f263
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.29.0-rc.2
	  Kube-Proxy Version:         v1.29.0-rc.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 coredns-76f75df574-lr95b                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     28m
	  kube-system                 etcd-no-preload-666547                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         28m
	  kube-system                 kube-apiserver-no-preload-666547             250m (12%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-controller-manager-no-preload-666547    200m (10%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-proxy-dcmrn                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-scheduler-no-preload-666547             100m (5%)     0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 metrics-server-57f55c9bc5-78vfj              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         27m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 28m                kube-proxy       
	  Normal  Starting                 19m                kube-proxy       
	  Normal  Starting                 28m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  28m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  28m (x8 over 28m)  kubelet          Node no-preload-666547 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    28m (x8 over 28m)  kubelet          Node no-preload-666547 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     28m (x7 over 28m)  kubelet          Node no-preload-666547 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientPID     28m                kubelet          Node no-preload-666547 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  28m                kubelet          Node no-preload-666547 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    28m                kubelet          Node no-preload-666547 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  28m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                28m                kubelet          Node no-preload-666547 status is now: NodeReady
	  Normal  Starting                 28m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           28m                node-controller  Node no-preload-666547 event: Registered Node no-preload-666547 in Controller
	  Normal  Starting                 19m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  19m (x8 over 19m)  kubelet          Node no-preload-666547 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m (x8 over 19m)  kubelet          Node no-preload-666547 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19m (x7 over 19m)  kubelet          Node no-preload-666547 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  19m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           19m                node-controller  Node no-preload-666547 event: Registered Node no-preload-666547 in Controller
	
	
	==> dmesg <==
	[Jan16 03:43] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.069486] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.432487] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.513884] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.156831] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.460432] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.499600] systemd-fstab-generator[635]: Ignoring "noauto" for root device
	[  +0.110367] systemd-fstab-generator[646]: Ignoring "noauto" for root device
	[  +0.154550] systemd-fstab-generator[659]: Ignoring "noauto" for root device
	[  +0.129392] systemd-fstab-generator[670]: Ignoring "noauto" for root device
	[  +0.249042] systemd-fstab-generator[694]: Ignoring "noauto" for root device
	[ +29.681110] systemd-fstab-generator[1321]: Ignoring "noauto" for root device
	[Jan16 03:44] kauditd_printk_skb: 19 callbacks suppressed
	[  +1.405784] hrtimer: interrupt took 2824011 ns
	
	
	==> etcd [01aaf51cd40b9137f2395404bb15b7372927f81dc8a4e884600dc8cba9bbeb8e] <==
	{"level":"info","ts":"2024-01-16T03:44:01.976507Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-01-16T03:44:01.976149Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-16T03:44:01.980748Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-01-16T03:44:01.983245Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.103:2379"}
	{"level":"warn","ts":"2024-01-16T03:44:18.740021Z","caller":"etcdserver/v3_server.go:897","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":16244076007415259520,"retry-timeout":"500ms"}
	{"level":"info","ts":"2024-01-16T03:44:18.837943Z","caller":"traceutil/trace.go:171","msg":"trace[146986157] transaction","detail":"{read_only:false; response_revision:601; number_of_response:1; }","duration":"825.750902ms","start":"2024-01-16T03:44:18.011551Z","end":"2024-01-16T03:44:18.837302Z","steps":["trace[146986157] 'process raft request'  (duration: 824.840142ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-16T03:44:18.839048Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-16T03:44:18.011527Z","time spent":"826.602609ms","remote":"127.0.0.1:57432","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":5422,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/etcd-no-preload-666547\" mod_revision:499 > success:<request_put:<key:\"/registry/pods/kube-system/etcd-no-preload-666547\" value_size:5365 >> failure:<request_range:<key:\"/registry/pods/kube-system/etcd-no-preload-666547\" > >"}
	{"level":"warn","ts":"2024-01-16T03:44:19.426726Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"461.972682ms","expected-duration":"100ms","prefix":"","request":"header:<ID:16244076007415259522 > lease_revoke:<id:616e8d10566be24b>","response":"size:28"}
	{"level":"info","ts":"2024-01-16T03:44:19.426843Z","caller":"traceutil/trace.go:171","msg":"trace[1835654471] linearizableReadLoop","detail":"{readStateIndex:639; appliedIndex:637; }","duration":"1.187362277s","start":"2024-01-16T03:44:18.239469Z","end":"2024-01-16T03:44:19.426831Z","steps":["trace[1835654471] 'read index received'  (duration: 597.143781ms)","trace[1835654471] 'applied index is now lower than readState.Index'  (duration: 590.217714ms)"],"step_count":2}
	{"level":"warn","ts":"2024-01-16T03:44:19.426959Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"1.18753464s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-no-preload-666547\" ","response":"range_response_count:1 size:5437"}
	{"level":"info","ts":"2024-01-16T03:44:19.427044Z","caller":"traceutil/trace.go:171","msg":"trace[482028468] range","detail":"{range_begin:/registry/pods/kube-system/etcd-no-preload-666547; range_end:; response_count:1; response_revision:601; }","duration":"1.187668348s","start":"2024-01-16T03:44:18.239368Z","end":"2024-01-16T03:44:19.427036Z","steps":["trace[482028468] 'agreement among raft nodes before linearized reading'  (duration: 1.187533625s)"],"step_count":1}
	{"level":"warn","ts":"2024-01-16T03:44:19.427073Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-16T03:44:18.239353Z","time spent":"1.187712721s","remote":"127.0.0.1:57432","response type":"/etcdserverpb.KV/Range","request count":0,"request size":51,"response count":1,"response size":5460,"request content":"key:\"/registry/pods/kube-system/etcd-no-preload-666547\" "}
	{"level":"warn","ts":"2024-01-16T03:44:19.427212Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"582.552768ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-01-16T03:44:19.427259Z","caller":"traceutil/trace.go:171","msg":"trace[1839323139] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:601; }","duration":"582.635465ms","start":"2024-01-16T03:44:18.844613Z","end":"2024-01-16T03:44:19.427248Z","steps":["trace[1839323139] 'agreement among raft nodes before linearized reading'  (duration: 582.545923ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-16T03:44:19.427285Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-16T03:44:18.844596Z","time spent":"582.683283ms","remote":"127.0.0.1:57382","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"info","ts":"2024-01-16T03:44:19.935524Z","caller":"traceutil/trace.go:171","msg":"trace[1490785773] transaction","detail":"{read_only:false; response_revision:602; number_of_response:1; }","duration":"211.600552ms","start":"2024-01-16T03:44:19.723903Z","end":"2024-01-16T03:44:19.935504Z","steps":["trace[1490785773] 'process raft request'  (duration: 211.496267ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-16T03:44:20.248791Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"215.881473ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/no-preload-666547\" ","response":"range_response_count:1 size:4441"}
	{"level":"info","ts":"2024-01-16T03:44:20.248957Z","caller":"traceutil/trace.go:171","msg":"trace[830674530] range","detail":"{range_begin:/registry/minions/no-preload-666547; range_end:; response_count:1; response_revision:602; }","duration":"216.061167ms","start":"2024-01-16T03:44:20.032879Z","end":"2024-01-16T03:44:20.24894Z","steps":["trace[830674530] 'range keys from in-memory index tree'  (duration: 215.769347ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-16T03:44:42.348184Z","caller":"traceutil/trace.go:171","msg":"trace[1812864166] transaction","detail":"{read_only:false; response_revision:627; number_of_response:1; }","duration":"201.73351ms","start":"2024-01-16T03:44:42.146423Z","end":"2024-01-16T03:44:42.348156Z","steps":["trace[1812864166] 'process raft request'  (duration: 201.486143ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-16T03:54:02.029354Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":856}
	{"level":"info","ts":"2024-01-16T03:54:02.033764Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":856,"took":"3.283065ms","hash":4264023929}
	{"level":"info","ts":"2024-01-16T03:54:02.033922Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4264023929,"revision":856,"compact-revision":-1}
	{"level":"info","ts":"2024-01-16T03:59:02.037607Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1098}
	{"level":"info","ts":"2024-01-16T03:59:02.039964Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1098,"took":"1.652767ms","hash":2589827037}
	{"level":"info","ts":"2024-01-16T03:59:02.040186Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2589827037,"revision":1098,"compact-revision":856}
	
	
	==> kernel <==
	 04:03:48 up 20 min,  0 users,  load average: 0.25, 0.25, 0.18
	Linux no-preload-666547 5.10.57 #1 SMP Thu Dec 28 22:04:21 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [de79f87bc28449077936386f603cc5df933f63455bc96cf6ff7e7c6e4bd32bc4] <==
	I0116 03:57:04.516250       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0116 03:59:03.519697       1 handler_proxy.go:93] no RequestInfo found in the context
	E0116 03:59:03.520129       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0116 03:59:04.521144       1 handler_proxy.go:93] no RequestInfo found in the context
	W0116 03:59:04.521261       1 handler_proxy.go:93] no RequestInfo found in the context
	E0116 03:59:04.521399       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0116 03:59:04.521437       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0116 03:59:04.521461       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0116 03:59:04.522641       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0116 04:00:04.521833       1 handler_proxy.go:93] no RequestInfo found in the context
	E0116 04:00:04.522650       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0116 04:00:04.522703       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0116 04:00:04.522756       1 handler_proxy.go:93] no RequestInfo found in the context
	E0116 04:00:04.522822       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0116 04:00:04.524509       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0116 04:02:04.523394       1 handler_proxy.go:93] no RequestInfo found in the context
	E0116 04:02:04.523500       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0116 04:02:04.523510       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0116 04:02:04.524753       1 handler_proxy.go:93] no RequestInfo found in the context
	E0116 04:02:04.524937       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0116 04:02:04.525073       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [802d4c55aa04370ff079f09dfe4670f5dac1142ca6db880afa17be75f1d20a76] <==
	I0116 03:58:17.219523       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0116 03:58:46.658424       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0116 03:58:47.228569       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0116 03:59:16.664946       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0116 03:59:17.238185       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0116 03:59:46.670116       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0116 03:59:47.249848       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0116 04:00:16.684448       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0116 04:00:17.260441       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0116 04:00:27.370159       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="320.438µs"
	I0116 04:00:40.370255       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="82.024µs"
	E0116 04:00:46.691484       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0116 04:00:47.273842       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0116 04:01:16.700194       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0116 04:01:17.283935       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0116 04:01:46.707532       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0116 04:01:47.295733       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0116 04:02:16.717226       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0116 04:02:17.308964       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0116 04:02:46.722966       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0116 04:02:47.319490       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0116 04:03:16.728799       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0116 04:03:17.331534       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0116 04:03:46.735217       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0116 04:03:47.344915       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [eba2964f029ac61d9cfb59dc548ae05c832f9b23713f51924291fbfc5de985ed] <==
	I0116 03:44:05.509703       1 server_others.go:72] "Using iptables proxy"
	I0116 03:44:05.584745       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.39.103"]
	I0116 03:44:05.724180       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0116 03:44:05.724250       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0116 03:44:05.724277       1 server_others.go:168] "Using iptables Proxier"
	I0116 03:44:05.736231       1 proxier.go:246] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0116 03:44:05.737555       1 server.go:865] "Version info" version="v1.29.0-rc.2"
	I0116 03:44:05.737921       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0116 03:44:05.739244       1 config.go:188] "Starting service config controller"
	I0116 03:44:05.740540       1 config.go:97] "Starting endpoint slice config controller"
	I0116 03:44:05.740676       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0116 03:44:05.742500       1 config.go:315] "Starting node config controller"
	I0116 03:44:05.742647       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0116 03:44:05.743524       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0116 03:44:05.746050       1 shared_informer.go:318] Caches are synced for service config
	I0116 03:44:05.841519       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0116 03:44:05.843188       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [33381edd7dded1b5ed158738429b4a7f1f03a6171f2af0832bb5a9864c950725] <==
	I0116 03:44:00.780699       1 serving.go:380] Generated self-signed cert in-memory
	W0116 03:44:03.501740       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0116 03:44:03.508832       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0116 03:44:03.509405       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0116 03:44:03.509440       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0116 03:44:03.552526       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.29.0-rc.2"
	I0116 03:44:03.553299       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0116 03:44:03.556538       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0116 03:44:03.556713       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0116 03:44:03.557413       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0116 03:44:03.557514       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0116 03:44:03.657325       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Tue 2024-01-16 03:43:14 UTC, ends at Tue 2024-01-16 04:03:48 UTC. --
	Jan 16 04:00:57 no-preload-666547 kubelet[1327]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 16 04:00:57 no-preload-666547 kubelet[1327]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 16 04:00:57 no-preload-666547 kubelet[1327]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 16 04:01:07 no-preload-666547 kubelet[1327]: E0116 04:01:07.350586    1327 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-78vfj" podUID="dbd2d3b2-ec0f-4253-8549-7c4299522c37"
	Jan 16 04:01:18 no-preload-666547 kubelet[1327]: E0116 04:01:18.351349    1327 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-78vfj" podUID="dbd2d3b2-ec0f-4253-8549-7c4299522c37"
	Jan 16 04:01:29 no-preload-666547 kubelet[1327]: E0116 04:01:29.351885    1327 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-78vfj" podUID="dbd2d3b2-ec0f-4253-8549-7c4299522c37"
	Jan 16 04:01:44 no-preload-666547 kubelet[1327]: E0116 04:01:44.350636    1327 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-78vfj" podUID="dbd2d3b2-ec0f-4253-8549-7c4299522c37"
	Jan 16 04:01:55 no-preload-666547 kubelet[1327]: E0116 04:01:55.352388    1327 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-78vfj" podUID="dbd2d3b2-ec0f-4253-8549-7c4299522c37"
	Jan 16 04:01:57 no-preload-666547 kubelet[1327]: E0116 04:01:57.370299    1327 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 16 04:01:57 no-preload-666547 kubelet[1327]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 16 04:01:57 no-preload-666547 kubelet[1327]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 16 04:01:57 no-preload-666547 kubelet[1327]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 16 04:02:06 no-preload-666547 kubelet[1327]: E0116 04:02:06.350714    1327 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-78vfj" podUID="dbd2d3b2-ec0f-4253-8549-7c4299522c37"
	Jan 16 04:02:21 no-preload-666547 kubelet[1327]: E0116 04:02:21.350723    1327 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-78vfj" podUID="dbd2d3b2-ec0f-4253-8549-7c4299522c37"
	Jan 16 04:02:32 no-preload-666547 kubelet[1327]: E0116 04:02:32.351206    1327 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-78vfj" podUID="dbd2d3b2-ec0f-4253-8549-7c4299522c37"
	Jan 16 04:02:44 no-preload-666547 kubelet[1327]: E0116 04:02:44.351303    1327 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-78vfj" podUID="dbd2d3b2-ec0f-4253-8549-7c4299522c37"
	Jan 16 04:02:56 no-preload-666547 kubelet[1327]: E0116 04:02:56.356908    1327 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-78vfj" podUID="dbd2d3b2-ec0f-4253-8549-7c4299522c37"
	Jan 16 04:02:57 no-preload-666547 kubelet[1327]: E0116 04:02:57.367757    1327 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 16 04:02:57 no-preload-666547 kubelet[1327]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 16 04:02:57 no-preload-666547 kubelet[1327]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 16 04:02:57 no-preload-666547 kubelet[1327]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 16 04:03:07 no-preload-666547 kubelet[1327]: E0116 04:03:07.351631    1327 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-78vfj" podUID="dbd2d3b2-ec0f-4253-8549-7c4299522c37"
	Jan 16 04:03:20 no-preload-666547 kubelet[1327]: E0116 04:03:20.353569    1327 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-78vfj" podUID="dbd2d3b2-ec0f-4253-8549-7c4299522c37"
	Jan 16 04:03:34 no-preload-666547 kubelet[1327]: E0116 04:03:34.351295    1327 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-78vfj" podUID="dbd2d3b2-ec0f-4253-8549-7c4299522c37"
	Jan 16 04:03:48 no-preload-666547 kubelet[1327]: E0116 04:03:48.350571    1327 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-78vfj" podUID="dbd2d3b2-ec0f-4253-8549-7c4299522c37"
	
	
	==> storage-provisioner [59754e94eb3cf3fecfedb9077fbad2ecc2618c351fa9bfa3faed0d780fdd9ecb] <==
	I0116 03:44:05.591829       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0116 03:44:05.600646       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> storage-provisioner [b7164c1b7732cf6d54642cb9ffc8d9f16696adc0e428c3fa16cf914420d73e1d] <==
	I0116 03:44:06.558064       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0116 03:44:06.578351       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0116 03:44:06.578637       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0116 03:44:23.994278       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0116 03:44:23.996617       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-666547_13a9b6af-a490-4224-8262-906d79382357!
	I0116 03:44:23.994551       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"46fad042-e0ea-4026-b131-dabb6c9f6332", APIVersion:"v1", ResourceVersion:"607", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-666547_13a9b6af-a490-4224-8262-906d79382357 became leader
	I0116 03:44:24.097641       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-666547_13a9b6af-a490-4224-8262-906d79382357!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-666547 -n no-preload-666547
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-666547 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-78vfj
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-666547 describe pod metrics-server-57f55c9bc5-78vfj
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-666547 describe pod metrics-server-57f55c9bc5-78vfj: exit status 1 (73.527177ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-78vfj" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-666547 describe pod metrics-server-57f55c9bc5-78vfj: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (372.96s)
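As a rough manual approximation of the check this test performs (a sketch only, not part of the harness output above): assuming the no-preload-666547 profile is still running, the wait-for-dashboard-pods step and the follow-up deployment inspection can be repeated by hand, using the context name from the logs above and the k8s-app=kubernetes-dashboard selector the harness waits on for this check (see the default-k8s-diff-port trace below):

	# wait for the dashboard addon pods the test expects (hypothetical manual re-check)
	kubectl --context no-preload-666547 -n kubernetes-dashboard wait \
	  --for=condition=Ready pod -l k8s-app=kubernetes-dashboard --timeout=9m
	# inspect the deployment the harness describes when the wait fails
	kubectl --context no-preload-666547 -n kubernetes-dashboard describe deploy dashboard-metrics-scraper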

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (503.35s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0116 03:58:12.213870  475478 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/functional-193417/client.crt: no such file or directory
start_stop_delete_test.go:287: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-434445 -n default-k8s-diff-port-434445
start_stop_delete_test.go:287: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-01-16 04:06:29.256609446 +0000 UTC m=+5533.194832056
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-434445 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-434445 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.73µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-434445 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
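The image assertion above has no deployment info because the preceding describe call already hit the test deadline. A minimal manual check of which images the dashboard addon is actually running, assuming the context default-k8s-diff-port-434445 from the logs and a jsonpath query as one possible way to pull container images (an illustration, not what the test itself runs):
	kubectl --context default-k8s-diff-port-434445 -n kubernetes-dashboard get deploy dashboard-metrics-scraper -o jsonpath='{.spec.template.spec.containers[*].image}'
	kubectl --context default-k8s-diff-port-434445 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard -o wide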
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-434445 -n default-k8s-diff-port-434445
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-434445 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-434445 logs -n 25: (1.704250831s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p kindnet-087557 sudo                               | kindnet-087557            | jenkins | v1.32.0 | 16 Jan 24 04:06 UTC | 16 Jan 24 04:06 UTC |
	|         | systemctl status kubelet --all                       |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p kindnet-087557 sudo                               | kindnet-087557            | jenkins | v1.32.0 | 16 Jan 24 04:06 UTC | 16 Jan 24 04:06 UTC |
	|         | systemctl cat kubelet                                |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p kindnet-087557 sudo                               | kindnet-087557            | jenkins | v1.32.0 | 16 Jan 24 04:06 UTC | 16 Jan 24 04:06 UTC |
	|         | journalctl -xeu kubelet --all                        |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p kindnet-087557 sudo cat                           | kindnet-087557            | jenkins | v1.32.0 | 16 Jan 24 04:06 UTC | 16 Jan 24 04:06 UTC |
	|         | /etc/kubernetes/kubelet.conf                         |                           |         |         |                     |                     |
	| ssh     | -p kindnet-087557 sudo cat                           | kindnet-087557            | jenkins | v1.32.0 | 16 Jan 24 04:06 UTC | 16 Jan 24 04:06 UTC |
	|         | /var/lib/kubelet/config.yaml                         |                           |         |         |                     |                     |
	| ssh     | -p kindnet-087557 sudo                               | kindnet-087557            | jenkins | v1.32.0 | 16 Jan 24 04:06 UTC |                     |
	|         | systemctl status docker --all                        |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p kindnet-087557 sudo                               | kindnet-087557            | jenkins | v1.32.0 | 16 Jan 24 04:06 UTC | 16 Jan 24 04:06 UTC |
	|         | systemctl cat docker                                 |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p kindnet-087557 sudo cat                           | kindnet-087557            | jenkins | v1.32.0 | 16 Jan 24 04:06 UTC | 16 Jan 24 04:06 UTC |
	|         | /etc/docker/daemon.json                              |                           |         |         |                     |                     |
	| ssh     | -p kindnet-087557 sudo docker                        | kindnet-087557            | jenkins | v1.32.0 | 16 Jan 24 04:06 UTC |                     |
	|         | system info                                          |                           |         |         |                     |                     |
	| ssh     | -p kindnet-087557 sudo                               | kindnet-087557            | jenkins | v1.32.0 | 16 Jan 24 04:06 UTC |                     |
	|         | systemctl status cri-docker                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p kindnet-087557 sudo                               | kindnet-087557            | jenkins | v1.32.0 | 16 Jan 24 04:06 UTC | 16 Jan 24 04:06 UTC |
	|         | systemctl cat cri-docker                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p kindnet-087557 sudo cat                           | kindnet-087557            | jenkins | v1.32.0 | 16 Jan 24 04:06 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p kindnet-087557 sudo cat                           | kindnet-087557            | jenkins | v1.32.0 | 16 Jan 24 04:06 UTC | 16 Jan 24 04:06 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p kindnet-087557 sudo                               | kindnet-087557            | jenkins | v1.32.0 | 16 Jan 24 04:06 UTC | 16 Jan 24 04:06 UTC |
	|         | cri-dockerd --version                                |                           |         |         |                     |                     |
	| ssh     | -p kindnet-087557 sudo                               | kindnet-087557            | jenkins | v1.32.0 | 16 Jan 24 04:06 UTC |                     |
	|         | systemctl status containerd                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p kindnet-087557 sudo                               | kindnet-087557            | jenkins | v1.32.0 | 16 Jan 24 04:06 UTC | 16 Jan 24 04:06 UTC |
	|         | systemctl cat containerd                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p kindnet-087557 sudo cat                           | kindnet-087557            | jenkins | v1.32.0 | 16 Jan 24 04:06 UTC | 16 Jan 24 04:06 UTC |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p kindnet-087557 sudo cat                           | kindnet-087557            | jenkins | v1.32.0 | 16 Jan 24 04:06 UTC | 16 Jan 24 04:06 UTC |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p kindnet-087557 sudo                               | kindnet-087557            | jenkins | v1.32.0 | 16 Jan 24 04:06 UTC | 16 Jan 24 04:06 UTC |
	|         | containerd config dump                               |                           |         |         |                     |                     |
	| ssh     | -p kindnet-087557 sudo                               | kindnet-087557            | jenkins | v1.32.0 | 16 Jan 24 04:06 UTC | 16 Jan 24 04:06 UTC |
	|         | systemctl status crio --all                          |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p kindnet-087557 sudo                               | kindnet-087557            | jenkins | v1.32.0 | 16 Jan 24 04:06 UTC | 16 Jan 24 04:06 UTC |
	|         | systemctl cat crio --no-pager                        |                           |         |         |                     |                     |
	| ssh     | -p kindnet-087557 sudo find                          | kindnet-087557            | jenkins | v1.32.0 | 16 Jan 24 04:06 UTC | 16 Jan 24 04:06 UTC |
	|         | /etc/crio -type f -exec sh -c                        |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |         |         |                     |                     |
	| ssh     | -p kindnet-087557 sudo crio                          | kindnet-087557            | jenkins | v1.32.0 | 16 Jan 24 04:06 UTC | 16 Jan 24 04:06 UTC |
	|         | config                                               |                           |         |         |                     |                     |
	| delete  | -p kindnet-087557                                    | kindnet-087557            | jenkins | v1.32.0 | 16 Jan 24 04:06 UTC | 16 Jan 24 04:06 UTC |
	| start   | -p enable-default-cni-087557                         | enable-default-cni-087557 | jenkins | v1.32.0 | 16 Jan 24 04:06 UTC |                     |
	|         | --memory=3072                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                                   |                           |         |         |                     |                     |
	|         | --enable-default-cni=true                            |                           |         |         |                     |                     |
	|         | --driver=kvm2                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/16 04:06:14
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0116 04:06:14.073926  517467 out.go:296] Setting OutFile to fd 1 ...
	I0116 04:06:14.074050  517467 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 04:06:14.074059  517467 out.go:309] Setting ErrFile to fd 2...
	I0116 04:06:14.074064  517467 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 04:06:14.074259  517467 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17965-468241/.minikube/bin
	I0116 04:06:14.074897  517467 out.go:303] Setting JSON to false
	I0116 04:06:14.076103  517467 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":17326,"bootTime":1705360648,"procs":289,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1048-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0116 04:06:14.076184  517467 start.go:138] virtualization: kvm guest
	I0116 04:06:14.078890  517467 out.go:177] * [enable-default-cni-087557] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0116 04:06:14.080409  517467 out.go:177]   - MINIKUBE_LOCATION=17965
	I0116 04:06:14.080481  517467 notify.go:220] Checking for updates...
	I0116 04:06:14.081998  517467 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0116 04:06:14.083594  517467 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17965-468241/kubeconfig
	I0116 04:06:14.084943  517467 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17965-468241/.minikube
	I0116 04:06:14.086353  517467 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0116 04:06:14.087753  517467 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0116 04:06:14.089752  517467 config.go:182] Loaded profile config "calico-087557": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 04:06:14.089871  517467 config.go:182] Loaded profile config "custom-flannel-087557": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 04:06:14.089977  517467 config.go:182] Loaded profile config "default-k8s-diff-port-434445": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 04:06:14.090095  517467 driver.go:392] Setting default libvirt URI to qemu:///system
	I0116 04:06:14.130613  517467 out.go:177] * Using the kvm2 driver based on user configuration
	I0116 04:06:14.131995  517467 start.go:298] selected driver: kvm2
	I0116 04:06:14.132016  517467 start.go:902] validating driver "kvm2" against <nil>
	I0116 04:06:14.132051  517467 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0116 04:06:14.133124  517467 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0116 04:06:14.133230  517467 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17965-468241/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0116 04:06:14.149601  517467 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0116 04:06:14.149659  517467 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	E0116 04:06:14.149872  517467 start_flags.go:463] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0116 04:06:14.149893  517467 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0116 04:06:14.149955  517467 cni.go:84] Creating CNI manager for "bridge"
	I0116 04:06:14.149969  517467 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0116 04:06:14.149977  517467 start_flags.go:321] config:
	{Name:enable-default-cni-087557 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:enable-default-cni-087557 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 04:06:14.150125  517467 iso.go:125] acquiring lock: {Name:mk2f3231f0eeeb23816ac363851489181398d1c6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0116 04:06:14.152253  517467 out.go:177] * Starting control plane node enable-default-cni-087557 in cluster enable-default-cni-087557
	I0116 04:06:12.833796  515760 main.go:141] libmachine: (calico-087557) DBG | domain calico-087557 has defined MAC address 52:54:00:4e:80:99 in network mk-calico-087557
	I0116 04:06:12.834361  515760 main.go:141] libmachine: (calico-087557) Found IP for machine: 192.168.61.99
	I0116 04:06:12.834390  515760 main.go:141] libmachine: (calico-087557) Reserving static IP address...
	I0116 04:06:12.834409  515760 main.go:141] libmachine: (calico-087557) DBG | domain calico-087557 has current primary IP address 192.168.61.99 and MAC address 52:54:00:4e:80:99 in network mk-calico-087557
	I0116 04:06:12.834809  515760 main.go:141] libmachine: (calico-087557) DBG | unable to find host DHCP lease matching {name: "calico-087557", mac: "52:54:00:4e:80:99", ip: "192.168.61.99"} in network mk-calico-087557
	I0116 04:06:12.925056  515760 main.go:141] libmachine: (calico-087557) DBG | Getting to WaitForSSH function...
	I0116 04:06:12.925096  515760 main.go:141] libmachine: (calico-087557) Reserved static IP address: 192.168.61.99
	I0116 04:06:12.925110  515760 main.go:141] libmachine: (calico-087557) Waiting for SSH to be available...
	I0116 04:06:12.928338  515760 main.go:141] libmachine: (calico-087557) DBG | domain calico-087557 has defined MAC address 52:54:00:4e:80:99 in network mk-calico-087557
	I0116 04:06:12.928714  515760 main.go:141] libmachine: (calico-087557) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:4e:80:99", ip: ""} in network mk-calico-087557
	I0116 04:06:12.928751  515760 main.go:141] libmachine: (calico-087557) DBG | unable to find defined IP address of network mk-calico-087557 interface with MAC address 52:54:00:4e:80:99
	I0116 04:06:12.928864  515760 main.go:141] libmachine: (calico-087557) DBG | Using SSH client type: external
	I0116 04:06:12.928892  515760 main.go:141] libmachine: (calico-087557) DBG | Using SSH private key: /home/jenkins/minikube-integration/17965-468241/.minikube/machines/calico-087557/id_rsa (-rw-------)
	I0116 04:06:12.928929  515760 main.go:141] libmachine: (calico-087557) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17965-468241/.minikube/machines/calico-087557/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0116 04:06:12.928946  515760 main.go:141] libmachine: (calico-087557) DBG | About to run SSH command:
	I0116 04:06:12.928962  515760 main.go:141] libmachine: (calico-087557) DBG | exit 0
	I0116 04:06:12.933095  515760 main.go:141] libmachine: (calico-087557) DBG | SSH cmd err, output: exit status 255: 
	I0116 04:06:12.933122  515760 main.go:141] libmachine: (calico-087557) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0116 04:06:12.933130  515760 main.go:141] libmachine: (calico-087557) DBG | command : exit 0
	I0116 04:06:12.933136  515760 main.go:141] libmachine: (calico-087557) DBG | err     : exit status 255
	I0116 04:06:12.933160  515760 main.go:141] libmachine: (calico-087557) DBG | output  : 
	I0116 04:06:17.661829  516047 start.go:369] acquired machines lock for "custom-flannel-087557" in 25.757900495s
	I0116 04:06:17.661903  516047 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-087557 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:custom-flannel-087557 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0116 04:06:17.662080  516047 start.go:125] createHost starting for "" (driver="kvm2")
	I0116 04:06:14.153771  517467 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0116 04:06:14.153835  517467 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17965-468241/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0116 04:06:14.153847  517467 cache.go:56] Caching tarball of preloaded images
	I0116 04:06:14.153959  517467 preload.go:174] Found /home/jenkins/minikube-integration/17965-468241/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0116 04:06:14.153970  517467 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0116 04:06:14.154103  517467 profile.go:148] Saving config to /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/enable-default-cni-087557/config.json ...
	I0116 04:06:14.154134  517467 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/enable-default-cni-087557/config.json: {Name:mkf6e2162b5bc3ebdec4c5e8e46b6a79d42f68b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 04:06:14.154289  517467 start.go:365] acquiring machines lock for enable-default-cni-087557: {Name:mk901e5fae8c1a578d989b520053c709bdbdcf06 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0116 04:06:15.933726  515760 main.go:141] libmachine: (calico-087557) DBG | Getting to WaitForSSH function...
	I0116 04:06:15.936464  515760 main.go:141] libmachine: (calico-087557) DBG | domain calico-087557 has defined MAC address 52:54:00:4e:80:99 in network mk-calico-087557
	I0116 04:06:15.936882  515760 main.go:141] libmachine: (calico-087557) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:80:99", ip: ""} in network mk-calico-087557: {Iface:virbr3 ExpiryTime:2024-01-16 05:06:05 +0000 UTC Type:0 Mac:52:54:00:4e:80:99 Iaid: IPaddr:192.168.61.99 Prefix:24 Hostname:calico-087557 Clientid:01:52:54:00:4e:80:99}
	I0116 04:06:15.936910  515760 main.go:141] libmachine: (calico-087557) DBG | domain calico-087557 has defined IP address 192.168.61.99 and MAC address 52:54:00:4e:80:99 in network mk-calico-087557
	I0116 04:06:15.937102  515760 main.go:141] libmachine: (calico-087557) DBG | Using SSH client type: external
	I0116 04:06:15.937148  515760 main.go:141] libmachine: (calico-087557) DBG | Using SSH private key: /home/jenkins/minikube-integration/17965-468241/.minikube/machines/calico-087557/id_rsa (-rw-------)
	I0116 04:06:15.937191  515760 main.go:141] libmachine: (calico-087557) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.99 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17965-468241/.minikube/machines/calico-087557/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0116 04:06:15.937209  515760 main.go:141] libmachine: (calico-087557) DBG | About to run SSH command:
	I0116 04:06:15.937225  515760 main.go:141] libmachine: (calico-087557) DBG | exit 0
	I0116 04:06:16.028544  515760 main.go:141] libmachine: (calico-087557) DBG | SSH cmd err, output: <nil>: 
	I0116 04:06:16.028821  515760 main.go:141] libmachine: (calico-087557) KVM machine creation complete!
	I0116 04:06:16.029187  515760 main.go:141] libmachine: (calico-087557) Calling .GetConfigRaw
	I0116 04:06:16.029902  515760 main.go:141] libmachine: (calico-087557) Calling .DriverName
	I0116 04:06:16.030143  515760 main.go:141] libmachine: (calico-087557) Calling .DriverName
	I0116 04:06:16.030360  515760 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0116 04:06:16.030380  515760 main.go:141] libmachine: (calico-087557) Calling .GetState
	I0116 04:06:16.031693  515760 main.go:141] libmachine: Detecting operating system of created instance...
	I0116 04:06:16.031710  515760 main.go:141] libmachine: Waiting for SSH to be available...
	I0116 04:06:16.031717  515760 main.go:141] libmachine: Getting to WaitForSSH function...
	I0116 04:06:16.031726  515760 main.go:141] libmachine: (calico-087557) Calling .GetSSHHostname
	I0116 04:06:16.034327  515760 main.go:141] libmachine: (calico-087557) DBG | domain calico-087557 has defined MAC address 52:54:00:4e:80:99 in network mk-calico-087557
	I0116 04:06:16.034785  515760 main.go:141] libmachine: (calico-087557) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:80:99", ip: ""} in network mk-calico-087557: {Iface:virbr3 ExpiryTime:2024-01-16 05:06:05 +0000 UTC Type:0 Mac:52:54:00:4e:80:99 Iaid: IPaddr:192.168.61.99 Prefix:24 Hostname:calico-087557 Clientid:01:52:54:00:4e:80:99}
	I0116 04:06:16.034858  515760 main.go:141] libmachine: (calico-087557) DBG | domain calico-087557 has defined IP address 192.168.61.99 and MAC address 52:54:00:4e:80:99 in network mk-calico-087557
	I0116 04:06:16.034981  515760 main.go:141] libmachine: (calico-087557) Calling .GetSSHPort
	I0116 04:06:16.035200  515760 main.go:141] libmachine: (calico-087557) Calling .GetSSHKeyPath
	I0116 04:06:16.035371  515760 main.go:141] libmachine: (calico-087557) Calling .GetSSHKeyPath
	I0116 04:06:16.035529  515760 main.go:141] libmachine: (calico-087557) Calling .GetSSHUsername
	I0116 04:06:16.035723  515760 main.go:141] libmachine: Using SSH client type: native
	I0116 04:06:16.036122  515760 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.61.99 22 <nil> <nil>}
	I0116 04:06:16.036138  515760 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0116 04:06:16.155909  515760 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0116 04:06:16.155940  515760 main.go:141] libmachine: Detecting the provisioner...
	I0116 04:06:16.155951  515760 main.go:141] libmachine: (calico-087557) Calling .GetSSHHostname
	I0116 04:06:16.159036  515760 main.go:141] libmachine: (calico-087557) DBG | domain calico-087557 has defined MAC address 52:54:00:4e:80:99 in network mk-calico-087557
	I0116 04:06:16.159442  515760 main.go:141] libmachine: (calico-087557) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:80:99", ip: ""} in network mk-calico-087557: {Iface:virbr3 ExpiryTime:2024-01-16 05:06:05 +0000 UTC Type:0 Mac:52:54:00:4e:80:99 Iaid: IPaddr:192.168.61.99 Prefix:24 Hostname:calico-087557 Clientid:01:52:54:00:4e:80:99}
	I0116 04:06:16.159477  515760 main.go:141] libmachine: (calico-087557) DBG | domain calico-087557 has defined IP address 192.168.61.99 and MAC address 52:54:00:4e:80:99 in network mk-calico-087557
	I0116 04:06:16.159656  515760 main.go:141] libmachine: (calico-087557) Calling .GetSSHPort
	I0116 04:06:16.159901  515760 main.go:141] libmachine: (calico-087557) Calling .GetSSHKeyPath
	I0116 04:06:16.160105  515760 main.go:141] libmachine: (calico-087557) Calling .GetSSHKeyPath
	I0116 04:06:16.160291  515760 main.go:141] libmachine: (calico-087557) Calling .GetSSHUsername
	I0116 04:06:16.160479  515760 main.go:141] libmachine: Using SSH client type: native
	I0116 04:06:16.160837  515760 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.61.99 22 <nil> <nil>}
	I0116 04:06:16.160853  515760 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0116 04:06:16.285263  515760 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g19d536a-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I0116 04:06:16.285390  515760 main.go:141] libmachine: found compatible host: buildroot
	I0116 04:06:16.285407  515760 main.go:141] libmachine: Provisioning with buildroot...
	I0116 04:06:16.285461  515760 main.go:141] libmachine: (calico-087557) Calling .GetMachineName
	I0116 04:06:16.285790  515760 buildroot.go:166] provisioning hostname "calico-087557"
	I0116 04:06:16.285824  515760 main.go:141] libmachine: (calico-087557) Calling .GetMachineName
	I0116 04:06:16.286026  515760 main.go:141] libmachine: (calico-087557) Calling .GetSSHHostname
	I0116 04:06:16.289308  515760 main.go:141] libmachine: (calico-087557) DBG | domain calico-087557 has defined MAC address 52:54:00:4e:80:99 in network mk-calico-087557
	I0116 04:06:16.289750  515760 main.go:141] libmachine: (calico-087557) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:80:99", ip: ""} in network mk-calico-087557: {Iface:virbr3 ExpiryTime:2024-01-16 05:06:05 +0000 UTC Type:0 Mac:52:54:00:4e:80:99 Iaid: IPaddr:192.168.61.99 Prefix:24 Hostname:calico-087557 Clientid:01:52:54:00:4e:80:99}
	I0116 04:06:16.289783  515760 main.go:141] libmachine: (calico-087557) DBG | domain calico-087557 has defined IP address 192.168.61.99 and MAC address 52:54:00:4e:80:99 in network mk-calico-087557
	I0116 04:06:16.289953  515760 main.go:141] libmachine: (calico-087557) Calling .GetSSHPort
	I0116 04:06:16.290168  515760 main.go:141] libmachine: (calico-087557) Calling .GetSSHKeyPath
	I0116 04:06:16.290388  515760 main.go:141] libmachine: (calico-087557) Calling .GetSSHKeyPath
	I0116 04:06:16.290557  515760 main.go:141] libmachine: (calico-087557) Calling .GetSSHUsername
	I0116 04:06:16.290737  515760 main.go:141] libmachine: Using SSH client type: native
	I0116 04:06:16.291058  515760 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.61.99 22 <nil> <nil>}
	I0116 04:06:16.291072  515760 main.go:141] libmachine: About to run SSH command:
	sudo hostname calico-087557 && echo "calico-087557" | sudo tee /etc/hostname
	I0116 04:06:16.426397  515760 main.go:141] libmachine: SSH cmd err, output: <nil>: calico-087557
	
	I0116 04:06:16.426448  515760 main.go:141] libmachine: (calico-087557) Calling .GetSSHHostname
	I0116 04:06:16.429555  515760 main.go:141] libmachine: (calico-087557) DBG | domain calico-087557 has defined MAC address 52:54:00:4e:80:99 in network mk-calico-087557
	I0116 04:06:16.429947  515760 main.go:141] libmachine: (calico-087557) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:80:99", ip: ""} in network mk-calico-087557: {Iface:virbr3 ExpiryTime:2024-01-16 05:06:05 +0000 UTC Type:0 Mac:52:54:00:4e:80:99 Iaid: IPaddr:192.168.61.99 Prefix:24 Hostname:calico-087557 Clientid:01:52:54:00:4e:80:99}
	I0116 04:06:16.430006  515760 main.go:141] libmachine: (calico-087557) DBG | domain calico-087557 has defined IP address 192.168.61.99 and MAC address 52:54:00:4e:80:99 in network mk-calico-087557
	I0116 04:06:16.430149  515760 main.go:141] libmachine: (calico-087557) Calling .GetSSHPort
	I0116 04:06:16.430416  515760 main.go:141] libmachine: (calico-087557) Calling .GetSSHKeyPath
	I0116 04:06:16.430609  515760 main.go:141] libmachine: (calico-087557) Calling .GetSSHKeyPath
	I0116 04:06:16.430786  515760 main.go:141] libmachine: (calico-087557) Calling .GetSSHUsername
	I0116 04:06:16.430949  515760 main.go:141] libmachine: Using SSH client type: native
	I0116 04:06:16.431293  515760 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.61.99 22 <nil> <nil>}
	I0116 04:06:16.431318  515760 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-087557' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-087557/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-087557' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0116 04:06:16.564842  515760 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0116 04:06:16.564890  515760 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17965-468241/.minikube CaCertPath:/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17965-468241/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17965-468241/.minikube}
	I0116 04:06:16.564932  515760 buildroot.go:174] setting up certificates
	I0116 04:06:16.564944  515760 provision.go:83] configureAuth start
	I0116 04:06:16.564960  515760 main.go:141] libmachine: (calico-087557) Calling .GetMachineName
	I0116 04:06:16.565294  515760 main.go:141] libmachine: (calico-087557) Calling .GetIP
	I0116 04:06:16.568212  515760 main.go:141] libmachine: (calico-087557) DBG | domain calico-087557 has defined MAC address 52:54:00:4e:80:99 in network mk-calico-087557
	I0116 04:06:16.568583  515760 main.go:141] libmachine: (calico-087557) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:80:99", ip: ""} in network mk-calico-087557: {Iface:virbr3 ExpiryTime:2024-01-16 05:06:05 +0000 UTC Type:0 Mac:52:54:00:4e:80:99 Iaid: IPaddr:192.168.61.99 Prefix:24 Hostname:calico-087557 Clientid:01:52:54:00:4e:80:99}
	I0116 04:06:16.568611  515760 main.go:141] libmachine: (calico-087557) DBG | domain calico-087557 has defined IP address 192.168.61.99 and MAC address 52:54:00:4e:80:99 in network mk-calico-087557
	I0116 04:06:16.568797  515760 main.go:141] libmachine: (calico-087557) Calling .GetSSHHostname
	I0116 04:06:16.571440  515760 main.go:141] libmachine: (calico-087557) DBG | domain calico-087557 has defined MAC address 52:54:00:4e:80:99 in network mk-calico-087557
	I0116 04:06:16.571753  515760 main.go:141] libmachine: (calico-087557) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:80:99", ip: ""} in network mk-calico-087557: {Iface:virbr3 ExpiryTime:2024-01-16 05:06:05 +0000 UTC Type:0 Mac:52:54:00:4e:80:99 Iaid: IPaddr:192.168.61.99 Prefix:24 Hostname:calico-087557 Clientid:01:52:54:00:4e:80:99}
	I0116 04:06:16.571784  515760 main.go:141] libmachine: (calico-087557) DBG | domain calico-087557 has defined IP address 192.168.61.99 and MAC address 52:54:00:4e:80:99 in network mk-calico-087557
	I0116 04:06:16.572042  515760 provision.go:138] copyHostCerts
	I0116 04:06:16.572112  515760 exec_runner.go:144] found /home/jenkins/minikube-integration/17965-468241/.minikube/ca.pem, removing ...
	I0116 04:06:16.572127  515760 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17965-468241/.minikube/ca.pem
	I0116 04:06:16.572191  515760 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17965-468241/.minikube/ca.pem (1078 bytes)
	I0116 04:06:16.572310  515760 exec_runner.go:144] found /home/jenkins/minikube-integration/17965-468241/.minikube/cert.pem, removing ...
	I0116 04:06:16.572322  515760 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17965-468241/.minikube/cert.pem
	I0116 04:06:16.572348  515760 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17965-468241/.minikube/cert.pem (1123 bytes)
	I0116 04:06:16.572403  515760 exec_runner.go:144] found /home/jenkins/minikube-integration/17965-468241/.minikube/key.pem, removing ...
	I0116 04:06:16.572414  515760 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17965-468241/.minikube/key.pem
	I0116 04:06:16.572433  515760 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17965-468241/.minikube/key.pem (1679 bytes)
	I0116 04:06:16.572484  515760 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17965-468241/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca-key.pem org=jenkins.calico-087557 san=[192.168.61.99 192.168.61.99 localhost 127.0.0.1 minikube calico-087557]
	I0116 04:06:16.849014  515760 provision.go:172] copyRemoteCerts
	I0116 04:06:16.849084  515760 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0116 04:06:16.849119  515760 main.go:141] libmachine: (calico-087557) Calling .GetSSHHostname
	I0116 04:06:16.852057  515760 main.go:141] libmachine: (calico-087557) DBG | domain calico-087557 has defined MAC address 52:54:00:4e:80:99 in network mk-calico-087557
	I0116 04:06:16.852382  515760 main.go:141] libmachine: (calico-087557) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:80:99", ip: ""} in network mk-calico-087557: {Iface:virbr3 ExpiryTime:2024-01-16 05:06:05 +0000 UTC Type:0 Mac:52:54:00:4e:80:99 Iaid: IPaddr:192.168.61.99 Prefix:24 Hostname:calico-087557 Clientid:01:52:54:00:4e:80:99}
	I0116 04:06:16.852406  515760 main.go:141] libmachine: (calico-087557) DBG | domain calico-087557 has defined IP address 192.168.61.99 and MAC address 52:54:00:4e:80:99 in network mk-calico-087557
	I0116 04:06:16.852631  515760 main.go:141] libmachine: (calico-087557) Calling .GetSSHPort
	I0116 04:06:16.852877  515760 main.go:141] libmachine: (calico-087557) Calling .GetSSHKeyPath
	I0116 04:06:16.853094  515760 main.go:141] libmachine: (calico-087557) Calling .GetSSHUsername
	I0116 04:06:16.853294  515760 sshutil.go:53] new ssh client: &{IP:192.168.61.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/calico-087557/id_rsa Username:docker}
	I0116 04:06:16.941650  515760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0116 04:06:16.967647  515760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0116 04:06:16.992685  515760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0116 04:06:17.018163  515760 provision.go:86] duration metric: configureAuth took 453.199067ms
	I0116 04:06:17.018198  515760 buildroot.go:189] setting minikube options for container-runtime
	I0116 04:06:17.018392  515760 config.go:182] Loaded profile config "calico-087557": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 04:06:17.018495  515760 main.go:141] libmachine: (calico-087557) Calling .GetSSHHostname
	I0116 04:06:17.021374  515760 main.go:141] libmachine: (calico-087557) DBG | domain calico-087557 has defined MAC address 52:54:00:4e:80:99 in network mk-calico-087557
	I0116 04:06:17.021747  515760 main.go:141] libmachine: (calico-087557) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:80:99", ip: ""} in network mk-calico-087557: {Iface:virbr3 ExpiryTime:2024-01-16 05:06:05 +0000 UTC Type:0 Mac:52:54:00:4e:80:99 Iaid: IPaddr:192.168.61.99 Prefix:24 Hostname:calico-087557 Clientid:01:52:54:00:4e:80:99}
	I0116 04:06:17.021782  515760 main.go:141] libmachine: (calico-087557) DBG | domain calico-087557 has defined IP address 192.168.61.99 and MAC address 52:54:00:4e:80:99 in network mk-calico-087557
	I0116 04:06:17.021948  515760 main.go:141] libmachine: (calico-087557) Calling .GetSSHPort
	I0116 04:06:17.022221  515760 main.go:141] libmachine: (calico-087557) Calling .GetSSHKeyPath
	I0116 04:06:17.022414  515760 main.go:141] libmachine: (calico-087557) Calling .GetSSHKeyPath
	I0116 04:06:17.022583  515760 main.go:141] libmachine: (calico-087557) Calling .GetSSHUsername
	I0116 04:06:17.022777  515760 main.go:141] libmachine: Using SSH client type: native
	I0116 04:06:17.023117  515760 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.61.99 22 <nil> <nil>}
	I0116 04:06:17.023139  515760 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0116 04:06:17.381670  515760 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0116 04:06:17.381696  515760 main.go:141] libmachine: Checking connection to Docker...
	I0116 04:06:17.381705  515760 main.go:141] libmachine: (calico-087557) Calling .GetURL
	I0116 04:06:17.383246  515760 main.go:141] libmachine: (calico-087557) DBG | Using libvirt version 6000000
	I0116 04:06:17.385744  515760 main.go:141] libmachine: (calico-087557) DBG | domain calico-087557 has defined MAC address 52:54:00:4e:80:99 in network mk-calico-087557
	I0116 04:06:17.386140  515760 main.go:141] libmachine: (calico-087557) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:80:99", ip: ""} in network mk-calico-087557: {Iface:virbr3 ExpiryTime:2024-01-16 05:06:05 +0000 UTC Type:0 Mac:52:54:00:4e:80:99 Iaid: IPaddr:192.168.61.99 Prefix:24 Hostname:calico-087557 Clientid:01:52:54:00:4e:80:99}
	I0116 04:06:17.386176  515760 main.go:141] libmachine: (calico-087557) DBG | domain calico-087557 has defined IP address 192.168.61.99 and MAC address 52:54:00:4e:80:99 in network mk-calico-087557
	I0116 04:06:17.386336  515760 main.go:141] libmachine: Docker is up and running!
	I0116 04:06:17.386358  515760 main.go:141] libmachine: Reticulating splines...
	I0116 04:06:17.386369  515760 client.go:171] LocalClient.Create took 27.697335966s
	I0116 04:06:17.386399  515760 start.go:167] duration metric: libmachine.API.Create for "calico-087557" took 27.697410149s
	I0116 04:06:17.386411  515760 start.go:300] post-start starting for "calico-087557" (driver="kvm2")
	I0116 04:06:17.386427  515760 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0116 04:06:17.386464  515760 main.go:141] libmachine: (calico-087557) Calling .DriverName
	I0116 04:06:17.386753  515760 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0116 04:06:17.386778  515760 main.go:141] libmachine: (calico-087557) Calling .GetSSHHostname
	I0116 04:06:17.389190  515760 main.go:141] libmachine: (calico-087557) DBG | domain calico-087557 has defined MAC address 52:54:00:4e:80:99 in network mk-calico-087557
	I0116 04:06:17.389602  515760 main.go:141] libmachine: (calico-087557) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:80:99", ip: ""} in network mk-calico-087557: {Iface:virbr3 ExpiryTime:2024-01-16 05:06:05 +0000 UTC Type:0 Mac:52:54:00:4e:80:99 Iaid: IPaddr:192.168.61.99 Prefix:24 Hostname:calico-087557 Clientid:01:52:54:00:4e:80:99}
	I0116 04:06:17.389633  515760 main.go:141] libmachine: (calico-087557) DBG | domain calico-087557 has defined IP address 192.168.61.99 and MAC address 52:54:00:4e:80:99 in network mk-calico-087557
	I0116 04:06:17.389785  515760 main.go:141] libmachine: (calico-087557) Calling .GetSSHPort
	I0116 04:06:17.390007  515760 main.go:141] libmachine: (calico-087557) Calling .GetSSHKeyPath
	I0116 04:06:17.390174  515760 main.go:141] libmachine: (calico-087557) Calling .GetSSHUsername
	I0116 04:06:17.390329  515760 sshutil.go:53] new ssh client: &{IP:192.168.61.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/calico-087557/id_rsa Username:docker}
	I0116 04:06:17.483954  515760 ssh_runner.go:195] Run: cat /etc/os-release
	I0116 04:06:17.489126  515760 info.go:137] Remote host: Buildroot 2021.02.12
	I0116 04:06:17.489154  515760 filesync.go:126] Scanning /home/jenkins/minikube-integration/17965-468241/.minikube/addons for local assets ...
	I0116 04:06:17.489234  515760 filesync.go:126] Scanning /home/jenkins/minikube-integration/17965-468241/.minikube/files for local assets ...
	I0116 04:06:17.489333  515760 filesync.go:149] local asset: /home/jenkins/minikube-integration/17965-468241/.minikube/files/etc/ssl/certs/4754782.pem -> 4754782.pem in /etc/ssl/certs
	I0116 04:06:17.489445  515760 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0116 04:06:17.500588  515760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/files/etc/ssl/certs/4754782.pem --> /etc/ssl/certs/4754782.pem (1708 bytes)
	I0116 04:06:17.526528  515760 start.go:303] post-start completed in 140.100543ms
	I0116 04:06:17.526585  515760 main.go:141] libmachine: (calico-087557) Calling .GetConfigRaw
	I0116 04:06:17.527209  515760 main.go:141] libmachine: (calico-087557) Calling .GetIP
	I0116 04:06:17.530049  515760 main.go:141] libmachine: (calico-087557) DBG | domain calico-087557 has defined MAC address 52:54:00:4e:80:99 in network mk-calico-087557
	I0116 04:06:17.530520  515760 main.go:141] libmachine: (calico-087557) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:80:99", ip: ""} in network mk-calico-087557: {Iface:virbr3 ExpiryTime:2024-01-16 05:06:05 +0000 UTC Type:0 Mac:52:54:00:4e:80:99 Iaid: IPaddr:192.168.61.99 Prefix:24 Hostname:calico-087557 Clientid:01:52:54:00:4e:80:99}
	I0116 04:06:17.530557  515760 main.go:141] libmachine: (calico-087557) DBG | domain calico-087557 has defined IP address 192.168.61.99 and MAC address 52:54:00:4e:80:99 in network mk-calico-087557
	I0116 04:06:17.530815  515760 profile.go:148] Saving config to /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/calico-087557/config.json ...
	I0116 04:06:17.531008  515760 start.go:128] duration metric: createHost completed in 27.86423051s
	I0116 04:06:17.531036  515760 main.go:141] libmachine: (calico-087557) Calling .GetSSHHostname
	I0116 04:06:17.533512  515760 main.go:141] libmachine: (calico-087557) DBG | domain calico-087557 has defined MAC address 52:54:00:4e:80:99 in network mk-calico-087557
	I0116 04:06:17.533866  515760 main.go:141] libmachine: (calico-087557) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:80:99", ip: ""} in network mk-calico-087557: {Iface:virbr3 ExpiryTime:2024-01-16 05:06:05 +0000 UTC Type:0 Mac:52:54:00:4e:80:99 Iaid: IPaddr:192.168.61.99 Prefix:24 Hostname:calico-087557 Clientid:01:52:54:00:4e:80:99}
	I0116 04:06:17.533893  515760 main.go:141] libmachine: (calico-087557) DBG | domain calico-087557 has defined IP address 192.168.61.99 and MAC address 52:54:00:4e:80:99 in network mk-calico-087557
	I0116 04:06:17.534106  515760 main.go:141] libmachine: (calico-087557) Calling .GetSSHPort
	I0116 04:06:17.534327  515760 main.go:141] libmachine: (calico-087557) Calling .GetSSHKeyPath
	I0116 04:06:17.534506  515760 main.go:141] libmachine: (calico-087557) Calling .GetSSHKeyPath
	I0116 04:06:17.534699  515760 main.go:141] libmachine: (calico-087557) Calling .GetSSHUsername
	I0116 04:06:17.534886  515760 main.go:141] libmachine: Using SSH client type: native
	I0116 04:06:17.535220  515760 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.61.99 22 <nil> <nil>}
	I0116 04:06:17.535232  515760 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0116 04:06:17.661621  515760 main.go:141] libmachine: SSH cmd err, output: <nil>: 1705377977.648259111
	
	I0116 04:06:17.661648  515760 fix.go:206] guest clock: 1705377977.648259111
	I0116 04:06:17.661659  515760 fix.go:219] Guest: 2024-01-16 04:06:17.648259111 +0000 UTC Remote: 2024-01-16 04:06:17.531021385 +0000 UTC m=+28.010432069 (delta=117.237726ms)
	I0116 04:06:17.661704  515760 fix.go:190] guest clock delta is within tolerance: 117.237726ms
	I0116 04:06:17.661710  515760 start.go:83] releasing machines lock for "calico-087557", held for 27.99507423s
	I0116 04:06:17.661740  515760 main.go:141] libmachine: (calico-087557) Calling .DriverName
	I0116 04:06:17.662061  515760 main.go:141] libmachine: (calico-087557) Calling .GetIP
	I0116 04:06:17.665226  515760 main.go:141] libmachine: (calico-087557) DBG | domain calico-087557 has defined MAC address 52:54:00:4e:80:99 in network mk-calico-087557
	I0116 04:06:17.665595  515760 main.go:141] libmachine: (calico-087557) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:80:99", ip: ""} in network mk-calico-087557: {Iface:virbr3 ExpiryTime:2024-01-16 05:06:05 +0000 UTC Type:0 Mac:52:54:00:4e:80:99 Iaid: IPaddr:192.168.61.99 Prefix:24 Hostname:calico-087557 Clientid:01:52:54:00:4e:80:99}
	I0116 04:06:17.665629  515760 main.go:141] libmachine: (calico-087557) DBG | domain calico-087557 has defined IP address 192.168.61.99 and MAC address 52:54:00:4e:80:99 in network mk-calico-087557
	I0116 04:06:17.665818  515760 main.go:141] libmachine: (calico-087557) Calling .DriverName
	I0116 04:06:17.666410  515760 main.go:141] libmachine: (calico-087557) Calling .DriverName
	I0116 04:06:17.666639  515760 main.go:141] libmachine: (calico-087557) Calling .DriverName
	I0116 04:06:17.666744  515760 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0116 04:06:17.666787  515760 main.go:141] libmachine: (calico-087557) Calling .GetSSHHostname
	I0116 04:06:17.666898  515760 ssh_runner.go:195] Run: cat /version.json
	I0116 04:06:17.666932  515760 main.go:141] libmachine: (calico-087557) Calling .GetSSHHostname
	I0116 04:06:17.669656  515760 main.go:141] libmachine: (calico-087557) DBG | domain calico-087557 has defined MAC address 52:54:00:4e:80:99 in network mk-calico-087557
	I0116 04:06:17.669847  515760 main.go:141] libmachine: (calico-087557) DBG | domain calico-087557 has defined MAC address 52:54:00:4e:80:99 in network mk-calico-087557
	I0116 04:06:17.670097  515760 main.go:141] libmachine: (calico-087557) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:80:99", ip: ""} in network mk-calico-087557: {Iface:virbr3 ExpiryTime:2024-01-16 05:06:05 +0000 UTC Type:0 Mac:52:54:00:4e:80:99 Iaid: IPaddr:192.168.61.99 Prefix:24 Hostname:calico-087557 Clientid:01:52:54:00:4e:80:99}
	I0116 04:06:17.670161  515760 main.go:141] libmachine: (calico-087557) DBG | domain calico-087557 has defined IP address 192.168.61.99 and MAC address 52:54:00:4e:80:99 in network mk-calico-087557
	I0116 04:06:17.670295  515760 main.go:141] libmachine: (calico-087557) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:80:99", ip: ""} in network mk-calico-087557: {Iface:virbr3 ExpiryTime:2024-01-16 05:06:05 +0000 UTC Type:0 Mac:52:54:00:4e:80:99 Iaid: IPaddr:192.168.61.99 Prefix:24 Hostname:calico-087557 Clientid:01:52:54:00:4e:80:99}
	I0116 04:06:17.670327  515760 main.go:141] libmachine: (calico-087557) DBG | domain calico-087557 has defined IP address 192.168.61.99 and MAC address 52:54:00:4e:80:99 in network mk-calico-087557
	I0116 04:06:17.670471  515760 main.go:141] libmachine: (calico-087557) Calling .GetSSHPort
	I0116 04:06:17.670489  515760 main.go:141] libmachine: (calico-087557) Calling .GetSSHPort
	I0116 04:06:17.670681  515760 main.go:141] libmachine: (calico-087557) Calling .GetSSHKeyPath
	I0116 04:06:17.670769  515760 main.go:141] libmachine: (calico-087557) Calling .GetSSHKeyPath
	I0116 04:06:17.670866  515760 main.go:141] libmachine: (calico-087557) Calling .GetSSHUsername
	I0116 04:06:17.670960  515760 main.go:141] libmachine: (calico-087557) Calling .GetSSHUsername
	I0116 04:06:17.671044  515760 sshutil.go:53] new ssh client: &{IP:192.168.61.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/calico-087557/id_rsa Username:docker}
	I0116 04:06:17.671099  515760 sshutil.go:53] new ssh client: &{IP:192.168.61.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/calico-087557/id_rsa Username:docker}
	I0116 04:06:17.789072  515760 ssh_runner.go:195] Run: systemctl --version
	I0116 04:06:17.796605  515760 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0116 04:06:17.967432  515760 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0116 04:06:17.974335  515760 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0116 04:06:17.974424  515760 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0116 04:06:17.993460  515760 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0116 04:06:17.993496  515760 start.go:475] detecting cgroup driver to use...
	I0116 04:06:17.993584  515760 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0116 04:06:18.010156  515760 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0116 04:06:18.029845  515760 docker.go:217] disabling cri-docker service (if available) ...
	I0116 04:06:18.029908  515760 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0116 04:06:18.047222  515760 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0116 04:06:18.064845  515760 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0116 04:06:18.187633  515760 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0116 04:06:18.331611  515760 docker.go:233] disabling docker service ...
	I0116 04:06:18.331704  515760 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0116 04:06:18.347119  515760 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0116 04:06:18.363997  515760 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0116 04:06:18.485430  515760 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0116 04:06:18.616952  515760 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0116 04:06:18.631120  515760 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0116 04:06:18.649031  515760 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0116 04:06:18.649122  515760 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 04:06:18.663255  515760 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0116 04:06:18.663372  515760 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 04:06:18.677949  515760 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 04:06:18.689631  515760 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
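
The crio.go lines above rewrite /etc/crio/crio.conf.d/02-crio.conf so CRI-O uses the registry.k8s.io/pause:3.9 pause image and the cgroupfs cgroup manager, with conmon pinned to the "pod" cgroup. Below is a minimal Go sketch of composing and running those sed edits; executing them locally with os/exec, rather than over minikube's ssh_runner, is an assumption made only so the sketch is self-contained.

	package main

	import (
		"fmt"
		"os/exec"
	)

	// configureCrio applies the same edits as the log above: set the pause image,
	// force the cgroupfs cgroup manager, and pin conmon to the "pod" cgroup.
	// Running via a local shell (not SSH) is a simplifying assumption.
	func configureCrio(conf, pauseImage, cgroupMgr string) error {
		cmds := []string{
			fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' %s`, pauseImage, conf),
			fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "%s"|' %s`, cgroupMgr, conf),
			fmt.Sprintf(`sudo sed -i '/conmon_cgroup = .*/d' %s`, conf),
			fmt.Sprintf(`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' %s`, conf),
		}
		for _, c := range cmds {
			if out, err := exec.Command("sh", "-c", c).CombinedOutput(); err != nil {
				return fmt.Errorf("%q failed: %v: %s", c, err, out)
			}
		}
		return nil
	}

	func main() {
		if err := configureCrio("/etc/crio/crio.conf.d/02-crio.conf",
			"registry.k8s.io/pause:3.9", "cgroupfs"); err != nil {
			fmt.Println(err)
		}
	}
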
	I0116 04:06:18.700990  515760 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0116 04:06:18.712331  515760 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0116 04:06:18.725052  515760 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0116 04:06:18.725137  515760 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0116 04:06:18.743276  515760 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0116 04:06:18.754633  515760 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0116 04:06:18.906880  515760 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0116 04:06:19.106351  515760 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0116 04:06:19.106446  515760 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0116 04:06:19.116612  515760 start.go:543] Will wait 60s for crictl version
	I0116 04:06:19.116701  515760 ssh_runner.go:195] Run: which crictl
	I0116 04:06:19.124232  515760 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0116 04:06:19.162664  515760 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0116 04:06:19.162761  515760 ssh_runner.go:195] Run: crio --version
	I0116 04:06:19.224025  515760 ssh_runner.go:195] Run: crio --version
	I0116 04:06:19.282429  515760 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0116 04:06:19.284076  515760 main.go:141] libmachine: (calico-087557) Calling .GetIP
	I0116 04:06:19.287103  515760 main.go:141] libmachine: (calico-087557) DBG | domain calico-087557 has defined MAC address 52:54:00:4e:80:99 in network mk-calico-087557
	I0116 04:06:19.287499  515760 main.go:141] libmachine: (calico-087557) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:80:99", ip: ""} in network mk-calico-087557: {Iface:virbr3 ExpiryTime:2024-01-16 05:06:05 +0000 UTC Type:0 Mac:52:54:00:4e:80:99 Iaid: IPaddr:192.168.61.99 Prefix:24 Hostname:calico-087557 Clientid:01:52:54:00:4e:80:99}
	I0116 04:06:19.287532  515760 main.go:141] libmachine: (calico-087557) DBG | domain calico-087557 has defined IP address 192.168.61.99 and MAC address 52:54:00:4e:80:99 in network mk-calico-087557
	I0116 04:06:19.287738  515760 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0116 04:06:19.293419  515760 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 04:06:19.309438  515760 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0116 04:06:19.309493  515760 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 04:06:19.355654  515760 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0116 04:06:19.355741  515760 ssh_runner.go:195] Run: which lz4
	I0116 04:06:19.361181  515760 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0116 04:06:19.365934  515760 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0116 04:06:19.365978  515760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0116 04:06:17.664485  516047 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0116 04:06:17.664723  516047 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 04:06:17.664797  516047 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 04:06:17.686563  516047 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44385
	I0116 04:06:17.687393  516047 main.go:141] libmachine: () Calling .GetVersion
	I0116 04:06:17.688400  516047 main.go:141] libmachine: Using API Version  1
	I0116 04:06:17.688423  516047 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 04:06:17.689261  516047 main.go:141] libmachine: () Calling .GetMachineName
	I0116 04:06:17.689475  516047 main.go:141] libmachine: (custom-flannel-087557) Calling .GetMachineName
	I0116 04:06:17.689666  516047 main.go:141] libmachine: (custom-flannel-087557) Calling .DriverName
	I0116 04:06:17.689856  516047 start.go:159] libmachine.API.Create for "custom-flannel-087557" (driver="kvm2")
	I0116 04:06:17.689887  516047 client.go:168] LocalClient.Create starting
	I0116 04:06:17.689949  516047 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem
	I0116 04:06:17.689999  516047 main.go:141] libmachine: Decoding PEM data...
	I0116 04:06:17.690022  516047 main.go:141] libmachine: Parsing certificate...
	I0116 04:06:17.690093  516047 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17965-468241/.minikube/certs/cert.pem
	I0116 04:06:17.690120  516047 main.go:141] libmachine: Decoding PEM data...
	I0116 04:06:17.690139  516047 main.go:141] libmachine: Parsing certificate...
	I0116 04:06:17.690165  516047 main.go:141] libmachine: Running pre-create checks...
	I0116 04:06:17.690176  516047 main.go:141] libmachine: (custom-flannel-087557) Calling .PreCreateCheck
	I0116 04:06:17.690621  516047 main.go:141] libmachine: (custom-flannel-087557) Calling .GetConfigRaw
	I0116 04:06:17.691059  516047 main.go:141] libmachine: Creating machine...
	I0116 04:06:17.691076  516047 main.go:141] libmachine: (custom-flannel-087557) Calling .Create
	I0116 04:06:17.691253  516047 main.go:141] libmachine: (custom-flannel-087557) Creating KVM machine...
	I0116 04:06:17.692606  516047 main.go:141] libmachine: (custom-flannel-087557) DBG | found existing default KVM network
	I0116 04:06:17.694392  516047 main.go:141] libmachine: (custom-flannel-087557) DBG | I0116 04:06:17.694203  517501 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015f70}
	I0116 04:06:17.700954  516047 main.go:141] libmachine: (custom-flannel-087557) DBG | trying to create private KVM network mk-custom-flannel-087557 192.168.39.0/24...
	I0116 04:06:17.785852  516047 main.go:141] libmachine: (custom-flannel-087557) DBG | private KVM network mk-custom-flannel-087557 192.168.39.0/24 created
	I0116 04:06:17.785913  516047 main.go:141] libmachine: (custom-flannel-087557) DBG | I0116 04:06:17.785839  517501 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17965-468241/.minikube
	I0116 04:06:17.785936  516047 main.go:141] libmachine: (custom-flannel-087557) Setting up store path in /home/jenkins/minikube-integration/17965-468241/.minikube/machines/custom-flannel-087557 ...
	I0116 04:06:17.785973  516047 main.go:141] libmachine: (custom-flannel-087557) Building disk image from file:///home/jenkins/minikube-integration/17965-468241/.minikube/cache/iso/amd64/minikube-v1.32.1-1703784139-17866-amd64.iso
	I0116 04:06:17.786002  516047 main.go:141] libmachine: (custom-flannel-087557) Downloading /home/jenkins/minikube-integration/17965-468241/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17965-468241/.minikube/cache/iso/amd64/minikube-v1.32.1-1703784139-17866-amd64.iso...
	I0116 04:06:18.029333  516047 main.go:141] libmachine: (custom-flannel-087557) DBG | I0116 04:06:18.029197  517501 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17965-468241/.minikube/machines/custom-flannel-087557/id_rsa...
	I0116 04:06:18.212961  516047 main.go:141] libmachine: (custom-flannel-087557) DBG | I0116 04:06:18.212827  517501 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17965-468241/.minikube/machines/custom-flannel-087557/custom-flannel-087557.rawdisk...
	I0116 04:06:18.213001  516047 main.go:141] libmachine: (custom-flannel-087557) DBG | Writing magic tar header
	I0116 04:06:18.213027  516047 main.go:141] libmachine: (custom-flannel-087557) DBG | Writing SSH key tar header
	I0116 04:06:18.213049  516047 main.go:141] libmachine: (custom-flannel-087557) DBG | I0116 04:06:18.212954  517501 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17965-468241/.minikube/machines/custom-flannel-087557 ...
	I0116 04:06:18.213131  516047 main.go:141] libmachine: (custom-flannel-087557) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17965-468241/.minikube/machines/custom-flannel-087557
	I0116 04:06:18.213184  516047 main.go:141] libmachine: (custom-flannel-087557) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17965-468241/.minikube/machines
	I0116 04:06:18.213208  516047 main.go:141] libmachine: (custom-flannel-087557) Setting executable bit set on /home/jenkins/minikube-integration/17965-468241/.minikube/machines/custom-flannel-087557 (perms=drwx------)
	I0116 04:06:18.213227  516047 main.go:141] libmachine: (custom-flannel-087557) Setting executable bit set on /home/jenkins/minikube-integration/17965-468241/.minikube/machines (perms=drwxr-xr-x)
	I0116 04:06:18.213234  516047 main.go:141] libmachine: (custom-flannel-087557) Setting executable bit set on /home/jenkins/minikube-integration/17965-468241/.minikube (perms=drwxr-xr-x)
	I0116 04:06:18.213243  516047 main.go:141] libmachine: (custom-flannel-087557) Setting executable bit set on /home/jenkins/minikube-integration/17965-468241 (perms=drwxrwxr-x)
	I0116 04:06:18.213251  516047 main.go:141] libmachine: (custom-flannel-087557) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0116 04:06:18.213267  516047 main.go:141] libmachine: (custom-flannel-087557) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0116 04:06:18.213282  516047 main.go:141] libmachine: (custom-flannel-087557) Creating domain...
	I0116 04:06:18.213294  516047 main.go:141] libmachine: (custom-flannel-087557) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17965-468241/.minikube
	I0116 04:06:18.213313  516047 main.go:141] libmachine: (custom-flannel-087557) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17965-468241
	I0116 04:06:18.213323  516047 main.go:141] libmachine: (custom-flannel-087557) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0116 04:06:18.213330  516047 main.go:141] libmachine: (custom-flannel-087557) DBG | Checking permissions on dir: /home/jenkins
	I0116 04:06:18.213339  516047 main.go:141] libmachine: (custom-flannel-087557) DBG | Checking permissions on dir: /home
	I0116 04:06:18.213348  516047 main.go:141] libmachine: (custom-flannel-087557) DBG | Skipping /home - not owner
	I0116 04:06:18.214399  516047 main.go:141] libmachine: (custom-flannel-087557) define libvirt domain using xml: 
	I0116 04:06:18.214437  516047 main.go:141] libmachine: (custom-flannel-087557) <domain type='kvm'>
	I0116 04:06:18.214449  516047 main.go:141] libmachine: (custom-flannel-087557)   <name>custom-flannel-087557</name>
	I0116 04:06:18.214459  516047 main.go:141] libmachine: (custom-flannel-087557)   <memory unit='MiB'>3072</memory>
	I0116 04:06:18.214470  516047 main.go:141] libmachine: (custom-flannel-087557)   <vcpu>2</vcpu>
	I0116 04:06:18.214485  516047 main.go:141] libmachine: (custom-flannel-087557)   <features>
	I0116 04:06:18.214499  516047 main.go:141] libmachine: (custom-flannel-087557)     <acpi/>
	I0116 04:06:18.214510  516047 main.go:141] libmachine: (custom-flannel-087557)     <apic/>
	I0116 04:06:18.214536  516047 main.go:141] libmachine: (custom-flannel-087557)     <pae/>
	I0116 04:06:18.214567  516047 main.go:141] libmachine: (custom-flannel-087557)     
	I0116 04:06:18.214583  516047 main.go:141] libmachine: (custom-flannel-087557)   </features>
	I0116 04:06:18.214598  516047 main.go:141] libmachine: (custom-flannel-087557)   <cpu mode='host-passthrough'>
	I0116 04:06:18.214634  516047 main.go:141] libmachine: (custom-flannel-087557)   
	I0116 04:06:18.214661  516047 main.go:141] libmachine: (custom-flannel-087557)   </cpu>
	I0116 04:06:18.214674  516047 main.go:141] libmachine: (custom-flannel-087557)   <os>
	I0116 04:06:18.214692  516047 main.go:141] libmachine: (custom-flannel-087557)     <type>hvm</type>
	I0116 04:06:18.214730  516047 main.go:141] libmachine: (custom-flannel-087557)     <boot dev='cdrom'/>
	I0116 04:06:18.214760  516047 main.go:141] libmachine: (custom-flannel-087557)     <boot dev='hd'/>
	I0116 04:06:18.214773  516047 main.go:141] libmachine: (custom-flannel-087557)     <bootmenu enable='no'/>
	I0116 04:06:18.214784  516047 main.go:141] libmachine: (custom-flannel-087557)   </os>
	I0116 04:06:18.214799  516047 main.go:141] libmachine: (custom-flannel-087557)   <devices>
	I0116 04:06:18.214813  516047 main.go:141] libmachine: (custom-flannel-087557)     <disk type='file' device='cdrom'>
	I0116 04:06:18.214832  516047 main.go:141] libmachine: (custom-flannel-087557)       <source file='/home/jenkins/minikube-integration/17965-468241/.minikube/machines/custom-flannel-087557/boot2docker.iso'/>
	I0116 04:06:18.214848  516047 main.go:141] libmachine: (custom-flannel-087557)       <target dev='hdc' bus='scsi'/>
	I0116 04:06:18.214856  516047 main.go:141] libmachine: (custom-flannel-087557)       <readonly/>
	I0116 04:06:18.214878  516047 main.go:141] libmachine: (custom-flannel-087557)     </disk>
	I0116 04:06:18.214894  516047 main.go:141] libmachine: (custom-flannel-087557)     <disk type='file' device='disk'>
	I0116 04:06:18.214904  516047 main.go:141] libmachine: (custom-flannel-087557)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0116 04:06:18.214915  516047 main.go:141] libmachine: (custom-flannel-087557)       <source file='/home/jenkins/minikube-integration/17965-468241/.minikube/machines/custom-flannel-087557/custom-flannel-087557.rawdisk'/>
	I0116 04:06:18.214926  516047 main.go:141] libmachine: (custom-flannel-087557)       <target dev='hda' bus='virtio'/>
	I0116 04:06:18.214951  516047 main.go:141] libmachine: (custom-flannel-087557)     </disk>
	I0116 04:06:18.214967  516047 main.go:141] libmachine: (custom-flannel-087557)     <interface type='network'>
	I0116 04:06:18.214982  516047 main.go:141] libmachine: (custom-flannel-087557)       <source network='mk-custom-flannel-087557'/>
	I0116 04:06:18.214996  516047 main.go:141] libmachine: (custom-flannel-087557)       <model type='virtio'/>
	I0116 04:06:18.215010  516047 main.go:141] libmachine: (custom-flannel-087557)     </interface>
	I0116 04:06:18.215023  516047 main.go:141] libmachine: (custom-flannel-087557)     <interface type='network'>
	I0116 04:06:18.215050  516047 main.go:141] libmachine: (custom-flannel-087557)       <source network='default'/>
	I0116 04:06:18.215079  516047 main.go:141] libmachine: (custom-flannel-087557)       <model type='virtio'/>
	I0116 04:06:18.215094  516047 main.go:141] libmachine: (custom-flannel-087557)     </interface>
	I0116 04:06:18.215104  516047 main.go:141] libmachine: (custom-flannel-087557)     <serial type='pty'>
	I0116 04:06:18.215119  516047 main.go:141] libmachine: (custom-flannel-087557)       <target port='0'/>
	I0116 04:06:18.215132  516047 main.go:141] libmachine: (custom-flannel-087557)     </serial>
	I0116 04:06:18.215157  516047 main.go:141] libmachine: (custom-flannel-087557)     <console type='pty'>
	I0116 04:06:18.215179  516047 main.go:141] libmachine: (custom-flannel-087557)       <target type='serial' port='0'/>
	I0116 04:06:18.215204  516047 main.go:141] libmachine: (custom-flannel-087557)     </console>
	I0116 04:06:18.215221  516047 main.go:141] libmachine: (custom-flannel-087557)     <rng model='virtio'>
	I0116 04:06:18.215235  516047 main.go:141] libmachine: (custom-flannel-087557)       <backend model='random'>/dev/random</backend>
	I0116 04:06:18.215248  516047 main.go:141] libmachine: (custom-flannel-087557)     </rng>
	I0116 04:06:18.215265  516047 main.go:141] libmachine: (custom-flannel-087557)     
	I0116 04:06:18.215279  516047 main.go:141] libmachine: (custom-flannel-087557)     
	I0116 04:06:18.215302  516047 main.go:141] libmachine: (custom-flannel-087557)   </devices>
	I0116 04:06:18.215314  516047 main.go:141] libmachine: (custom-flannel-087557) </domain>
	I0116 04:06:18.215330  516047 main.go:141] libmachine: (custom-flannel-087557) 
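
The <domain> definition printed above is plain libvirt XML built by the kvm2 driver. A stripped-down sketch of emitting such a definition from Go structs with encoding/xml follows; the struct layout and the few fields modeled are illustrative assumptions, not the driver's actual types.

	package main

	import (
		"encoding/xml"
		"os"
	)

	// domain models only the name, memory, and vcpu elements of the libvirt
	// definition shown in the log above.
	type domain struct {
		XMLName xml.Name `xml:"domain"`
		Type    string   `xml:"type,attr"`
		Name    string   `xml:"name"`
		Memory  struct {
			Unit  string `xml:"unit,attr"`
			Value int    `xml:",chardata"`
		} `xml:"memory"`
		VCPU int `xml:"vcpu"`
	}

	func main() {
		d := domain{Type: "kvm", Name: "custom-flannel-087557", VCPU: 2}
		d.Memory.Unit = "MiB"
		d.Memory.Value = 3072
		out, _ := xml.MarshalIndent(d, "", "  ")
		os.Stdout.Write(append(out, '\n'))
	}
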
	I0116 04:06:18.219733  516047 main.go:141] libmachine: (custom-flannel-087557) DBG | domain custom-flannel-087557 has defined MAC address 52:54:00:25:3f:3a in network default
	I0116 04:06:18.220640  516047 main.go:141] libmachine: (custom-flannel-087557) Ensuring networks are active...
	I0116 04:06:18.220674  516047 main.go:141] libmachine: (custom-flannel-087557) DBG | domain custom-flannel-087557 has defined MAC address 52:54:00:fc:19:0e in network mk-custom-flannel-087557
	I0116 04:06:18.221412  516047 main.go:141] libmachine: (custom-flannel-087557) Ensuring network default is active
	I0116 04:06:18.221838  516047 main.go:141] libmachine: (custom-flannel-087557) Ensuring network mk-custom-flannel-087557 is active
	I0116 04:06:18.222608  516047 main.go:141] libmachine: (custom-flannel-087557) Getting domain xml...
	I0116 04:06:18.223503  516047 main.go:141] libmachine: (custom-flannel-087557) Creating domain...
	I0116 04:06:18.586954  516047 main.go:141] libmachine: (custom-flannel-087557) Waiting to get IP...
	I0116 04:06:18.587778  516047 main.go:141] libmachine: (custom-flannel-087557) DBG | domain custom-flannel-087557 has defined MAC address 52:54:00:fc:19:0e in network mk-custom-flannel-087557
	I0116 04:06:18.588290  516047 main.go:141] libmachine: (custom-flannel-087557) DBG | unable to find current IP address of domain custom-flannel-087557 in network mk-custom-flannel-087557
	I0116 04:06:18.588322  516047 main.go:141] libmachine: (custom-flannel-087557) DBG | I0116 04:06:18.588248  517501 retry.go:31] will retry after 239.972582ms: waiting for machine to come up
	I0116 04:06:18.829968  516047 main.go:141] libmachine: (custom-flannel-087557) DBG | domain custom-flannel-087557 has defined MAC address 52:54:00:fc:19:0e in network mk-custom-flannel-087557
	I0116 04:06:18.830538  516047 main.go:141] libmachine: (custom-flannel-087557) DBG | unable to find current IP address of domain custom-flannel-087557 in network mk-custom-flannel-087557
	I0116 04:06:18.830568  516047 main.go:141] libmachine: (custom-flannel-087557) DBG | I0116 04:06:18.830495  517501 retry.go:31] will retry after 268.778493ms: waiting for machine to come up
	I0116 04:06:19.101195  516047 main.go:141] libmachine: (custom-flannel-087557) DBG | domain custom-flannel-087557 has defined MAC address 52:54:00:fc:19:0e in network mk-custom-flannel-087557
	I0116 04:06:19.101731  516047 main.go:141] libmachine: (custom-flannel-087557) DBG | unable to find current IP address of domain custom-flannel-087557 in network mk-custom-flannel-087557
	I0116 04:06:19.101764  516047 main.go:141] libmachine: (custom-flannel-087557) DBG | I0116 04:06:19.101672  517501 retry.go:31] will retry after 432.360079ms: waiting for machine to come up
	I0116 04:06:19.535605  516047 main.go:141] libmachine: (custom-flannel-087557) DBG | domain custom-flannel-087557 has defined MAC address 52:54:00:fc:19:0e in network mk-custom-flannel-087557
	I0116 04:06:19.536166  516047 main.go:141] libmachine: (custom-flannel-087557) DBG | unable to find current IP address of domain custom-flannel-087557 in network mk-custom-flannel-087557
	I0116 04:06:19.536194  516047 main.go:141] libmachine: (custom-flannel-087557) DBG | I0116 04:06:19.536116  517501 retry.go:31] will retry after 374.90817ms: waiting for machine to come up
	I0116 04:06:19.912894  516047 main.go:141] libmachine: (custom-flannel-087557) DBG | domain custom-flannel-087557 has defined MAC address 52:54:00:fc:19:0e in network mk-custom-flannel-087557
	I0116 04:06:19.913531  516047 main.go:141] libmachine: (custom-flannel-087557) DBG | unable to find current IP address of domain custom-flannel-087557 in network mk-custom-flannel-087557
	I0116 04:06:19.913566  516047 main.go:141] libmachine: (custom-flannel-087557) DBG | I0116 04:06:19.913504  517501 retry.go:31] will retry after 492.464252ms: waiting for machine to come up
	I0116 04:06:20.407210  516047 main.go:141] libmachine: (custom-flannel-087557) DBG | domain custom-flannel-087557 has defined MAC address 52:54:00:fc:19:0e in network mk-custom-flannel-087557
	I0116 04:06:20.407815  516047 main.go:141] libmachine: (custom-flannel-087557) DBG | unable to find current IP address of domain custom-flannel-087557 in network mk-custom-flannel-087557
	I0116 04:06:20.407856  516047 main.go:141] libmachine: (custom-flannel-087557) DBG | I0116 04:06:20.407762  517501 retry.go:31] will retry after 604.736477ms: waiting for machine to come up
	I0116 04:06:21.014707  516047 main.go:141] libmachine: (custom-flannel-087557) DBG | domain custom-flannel-087557 has defined MAC address 52:54:00:fc:19:0e in network mk-custom-flannel-087557
	I0116 04:06:21.015266  516047 main.go:141] libmachine: (custom-flannel-087557) DBG | unable to find current IP address of domain custom-flannel-087557 in network mk-custom-flannel-087557
	I0116 04:06:21.015322  516047 main.go:141] libmachine: (custom-flannel-087557) DBG | I0116 04:06:21.015243  517501 retry.go:31] will retry after 995.742524ms: waiting for machine to come up
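
The retry.go lines above poll the DHCP leases of the new domain and sleep a growing, jittered interval between attempts ("will retry after 239.972582ms", "268.778493ms", and so on). A small Go sketch of that wait-for-IP loop is shown below; the lookup callback stands in for the libvirt lease query and is an assumption for illustration.

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// waitForIP polls lookup until it returns an address, sleeping a jittered,
	// growing interval between attempts, much like the retry lines above.
	func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		backoff := 200 * time.Millisecond
		for attempt := 1; time.Now().Before(deadline); attempt++ {
			ip, err := lookup()
			if err == nil && ip != "" {
				return ip, nil
			}
			wait := backoff + time.Duration(rand.Int63n(int64(backoff)))
			fmt.Printf("attempt %d: no IP yet, will retry after %s\n", attempt, wait)
			time.Sleep(wait)
			if backoff < 5*time.Second {
				backoff *= 2
			}
		}
		return "", errors.New("timed out waiting for machine IP")
	}

	func main() {
		calls := 0
		ip, err := waitForIP(func() (string, error) {
			calls++
			if calls < 3 {
				return "", errors.New("no DHCP lease yet")
			}
			return "192.168.39.10", nil
		}, 30*time.Second)
		fmt.Println(ip, err)
	}
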
	I0116 04:06:21.324444  515760 crio.go:444] Took 1.963323 seconds to copy over tarball
	I0116 04:06:21.324542  515760 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0116 04:06:24.785425  515760 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.460821392s)
	I0116 04:06:24.785461  515760 crio.go:451] Took 3.460987 seconds to extract the tarball
	I0116 04:06:24.785474  515760 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0116 04:06:24.834953  515760 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 04:06:24.920496  515760 crio.go:496] all images are preloaded for cri-o runtime.
	I0116 04:06:24.920524  515760 cache_images.go:84] Images are preloaded, skipping loading
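
ssh_runner reports slow commands together with their elapsed time, as with the 3.46 s preload extraction completed just above. A minimal local sketch of that run-and-time pattern follows; using os/exec instead of an SSH session is an assumption so the sketch runs standalone.

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// runTimed runs a command and, when it takes longer than a second, prints
	// how long it took, mirroring the "Completed: ... (3.460821392s)" line above.
	func runTimed(name string, args ...string) error {
		start := time.Now()
		out, err := exec.Command(name, args...).CombinedOutput()
		if d := time.Since(start); d > time.Second {
			fmt.Printf("Completed: %s: (%s)\n", name, d)
		}
		if err != nil {
			return fmt.Errorf("%s: %v: %s", name, err, out)
		}
		return nil
	}

	func main() {
		// For example, the preload extraction from the log:
		_ = runTimed("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
			"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	}
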
	I0116 04:06:24.920597  515760 ssh_runner.go:195] Run: crio config
	I0116 04:06:25.001332  515760 cni.go:84] Creating CNI manager for "calico"
	I0116 04:06:25.001377  515760 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0116 04:06:25.001406  515760 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.99 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-087557 NodeName:calico-087557 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.99"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.99 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kube
rnetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0116 04:06:25.001619  515760 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.99
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "calico-087557"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.99
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.99"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0116 04:06:25.001697  515760 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=calico-087557 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.99
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:calico-087557 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:}
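
The kubelet systemd drop-in printed above is generated from the cluster config (Kubernetes version, node name, node IP). A short text/template sketch that renders an equivalent drop-in is shown below; the template data fields are assumptions for illustration, not minikube's internal config types.

	package main

	import (
		"os"
		"text/template"
	)

	// kubeletDropIn mirrors the drop-in above: ExecStart is cleared and then
	// redefined with the per-node flags.
	const kubeletDropIn = `[Unit]
	Wants=crio.service

	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

	[Install]
	`

	func main() {
		t := template.Must(template.New("kubelet").Parse(kubeletDropIn))
		_ = t.Execute(os.Stdout, struct {
			KubernetesVersion, NodeName, NodeIP string
		}{"v1.28.4", "calico-087557", "192.168.61.99"})
	}
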
	I0116 04:06:25.001768  515760 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0116 04:06:25.011679  515760 binaries.go:44] Found k8s binaries, skipping transfer
	I0116 04:06:25.011764  515760 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0116 04:06:25.021195  515760 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I0116 04:06:25.042787  515760 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0116 04:06:25.061620  515760 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2097 bytes)
	I0116 04:06:25.078122  515760 ssh_runner.go:195] Run: grep 192.168.61.99	control-plane.minikube.internal$ /etc/hosts
	I0116 04:06:25.082599  515760 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.99	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
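
The bash one-liner above makes the control-plane.minikube.internal mapping in /etc/hosts idempotent: any existing line for the name is dropped before the new entry is appended. A Go sketch of the same rewrite follows; writing to a caller-supplied path instead of /etc/hosts is an assumption so the sketch can run without root.

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// ensureHostsEntry removes any line ending in "\t"+name, then appends
	// "ip\tname", reproducing the shell pipeline in the log above.
	func ensureHostsEntry(path, ip, name string) error {
		data, err := os.ReadFile(path)
		if err != nil && !os.IsNotExist(err) {
			return err
		}
		var kept []string
		for _, line := range strings.Split(string(data), "\n") {
			if line != "" && !strings.HasSuffix(line, "\t"+name) {
				kept = append(kept, line)
			}
		}
		kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
		return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
	}

	func main() {
		if err := ensureHostsEntry("hosts.test", "192.168.61.99", "control-plane.minikube.internal"); err != nil {
			fmt.Println(err)
		}
	}
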
	I0116 04:06:25.100354  515760 certs.go:56] Setting up /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/calico-087557 for IP: 192.168.61.99
	I0116 04:06:25.100404  515760 certs.go:190] acquiring lock for shared ca certs: {Name:mkab5aeea3e0ed69f3592dc8f418e07c6075b130 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 04:06:25.100650  515760 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17965-468241/.minikube/ca.key
	I0116 04:06:25.100737  515760 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17965-468241/.minikube/proxy-client-ca.key
	I0116 04:06:25.100815  515760 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/calico-087557/client.key
	I0116 04:06:25.100834  515760 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/calico-087557/client.crt with IP's: []
	I0116 04:06:25.173244  515760 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/calico-087557/client.crt ...
	I0116 04:06:25.173284  515760 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/calico-087557/client.crt: {Name:mk413f4ad0ac487e458f0b22900774c7067a293f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 04:06:25.173518  515760 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/calico-087557/client.key ...
	I0116 04:06:25.173539  515760 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/calico-087557/client.key: {Name:mk5571510187fd931f1427cb03dbc45d117409fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 04:06:25.173649  515760 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/calico-087557/apiserver.key.3f823191
	I0116 04:06:25.173668  515760 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/calico-087557/apiserver.crt.3f823191 with IP's: [192.168.61.99 10.96.0.1 127.0.0.1 10.0.0.1]
	I0116 04:06:25.289464  515760 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/calico-087557/apiserver.crt.3f823191 ...
	I0116 04:06:25.289504  515760 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/calico-087557/apiserver.crt.3f823191: {Name:mkb9ddf9f13eb9a0f983bf9cd98c9868c6b12858 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 04:06:25.289700  515760 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/calico-087557/apiserver.key.3f823191 ...
	I0116 04:06:25.289724  515760 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/calico-087557/apiserver.key.3f823191: {Name:mkf6a2173f67ba8ccab1799be900419247a402b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 04:06:25.289830  515760 certs.go:337] copying /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/calico-087557/apiserver.crt.3f823191 -> /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/calico-087557/apiserver.crt
	I0116 04:06:25.289918  515760 certs.go:341] copying /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/calico-087557/apiserver.key.3f823191 -> /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/calico-087557/apiserver.key
	I0116 04:06:25.290006  515760 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/calico-087557/proxy-client.key
	I0116 04:06:25.290027  515760 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/calico-087557/proxy-client.crt with IP's: []
	I0116 04:06:25.355344  515760 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/calico-087557/proxy-client.crt ...
	I0116 04:06:25.355387  515760 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/calico-087557/proxy-client.crt: {Name:mk424c3c3034cc9d1605cc35e42eb8b555f11325 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 04:06:25.355607  515760 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/calico-087557/proxy-client.key ...
	I0116 04:06:25.355629  515760 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/calico-087557/proxy-client.key: {Name:mka4a884fce9ce91986597334a11b59831cb7264 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 04:06:25.355874  515760 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/475478.pem (1338 bytes)
	W0116 04:06:25.355939  515760 certs.go:433] ignoring /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/475478_empty.pem, impossibly tiny 0 bytes
	I0116 04:06:25.355955  515760 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca-key.pem (1679 bytes)
	I0116 04:06:25.356001  515760 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem (1078 bytes)
	I0116 04:06:25.356072  515760 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/cert.pem (1123 bytes)
	I0116 04:06:25.356112  515760 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/key.pem (1679 bytes)
	I0116 04:06:25.356197  515760 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17965-468241/.minikube/files/etc/ssl/certs/4754782.pem (1708 bytes)
	I0116 04:06:25.356901  515760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/calico-087557/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0116 04:06:25.386541  515760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/calico-087557/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0116 04:06:25.413693  515760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/calico-087557/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0116 04:06:25.441011  515760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/calico-087557/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0116 04:06:25.466678  515760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0116 04:06:25.493505  515760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0116 04:06:25.519838  515760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0116 04:06:25.549514  515760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0116 04:06:25.576871  515760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0116 04:06:25.604344  515760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/certs/475478.pem --> /usr/share/ca-certificates/475478.pem (1338 bytes)
	I0116 04:06:25.634872  515760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/files/etc/ssl/certs/4754782.pem --> /usr/share/ca-certificates/4754782.pem (1708 bytes)
	I0116 04:06:25.662616  515760 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0116 04:06:25.681576  515760 ssh_runner.go:195] Run: openssl version
	I0116 04:06:25.687742  515760 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0116 04:06:25.698468  515760 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0116 04:06:25.704018  515760 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 16 02:35 /usr/share/ca-certificates/minikubeCA.pem
	I0116 04:06:25.704119  515760 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0116 04:06:25.711064  515760 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0116 04:06:25.723342  515760 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/475478.pem && ln -fs /usr/share/ca-certificates/475478.pem /etc/ssl/certs/475478.pem"
	I0116 04:06:25.734805  515760 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/475478.pem
	I0116 04:06:25.740632  515760 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 16 02:44 /usr/share/ca-certificates/475478.pem
	I0116 04:06:25.740720  515760 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/475478.pem
	I0116 04:06:25.747692  515760 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/475478.pem /etc/ssl/certs/51391683.0"
	I0116 04:06:25.759562  515760 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4754782.pem && ln -fs /usr/share/ca-certificates/4754782.pem /etc/ssl/certs/4754782.pem"
	I0116 04:06:25.771419  515760 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4754782.pem
	I0116 04:06:25.777157  515760 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 16 02:44 /usr/share/ca-certificates/4754782.pem
	I0116 04:06:25.777238  515760 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4754782.pem
	I0116 04:06:25.783880  515760 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4754782.pem /etc/ssl/certs/3ec20f2e.0"
	I0116 04:06:25.794638  515760 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0116 04:06:25.799915  515760 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0116 04:06:25.800012  515760 kubeadm.go:404] StartCluster: {Name:calico-087557 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.28.4 ClusterName:calico-087557 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.99 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpt
ions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 04:06:25.800178  515760 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0116 04:06:25.800253  515760 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0116 04:06:25.846600  515760 cri.go:89] found id: ""
	I0116 04:06:25.846709  515760 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0116 04:06:25.856553  515760 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0116 04:06:25.866690  515760 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0116 04:06:25.877043  515760 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0116 04:06:25.877121  515760 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0116 04:06:25.932365  515760 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0116 04:06:25.932447  515760 kubeadm.go:322] [preflight] Running pre-flight checks
	I0116 04:06:26.081620  515760 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0116 04:06:26.081808  515760 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0116 04:06:26.081976  515760 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0116 04:06:26.358449  515760 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0116 04:06:22.012489  516047 main.go:141] libmachine: (custom-flannel-087557) DBG | domain custom-flannel-087557 has defined MAC address 52:54:00:fc:19:0e in network mk-custom-flannel-087557
	I0116 04:06:22.013110  516047 main.go:141] libmachine: (custom-flannel-087557) DBG | unable to find current IP address of domain custom-flannel-087557 in network mk-custom-flannel-087557
	I0116 04:06:22.013147  516047 main.go:141] libmachine: (custom-flannel-087557) DBG | I0116 04:06:22.013025  517501 retry.go:31] will retry after 1.296694435s: waiting for machine to come up
	I0116 04:06:23.311727  516047 main.go:141] libmachine: (custom-flannel-087557) DBG | domain custom-flannel-087557 has defined MAC address 52:54:00:fc:19:0e in network mk-custom-flannel-087557
	I0116 04:06:23.312390  516047 main.go:141] libmachine: (custom-flannel-087557) DBG | unable to find current IP address of domain custom-flannel-087557 in network mk-custom-flannel-087557
	I0116 04:06:23.312425  516047 main.go:141] libmachine: (custom-flannel-087557) DBG | I0116 04:06:23.312346  517501 retry.go:31] will retry after 1.736893256s: waiting for machine to come up
	I0116 04:06:25.050808  516047 main.go:141] libmachine: (custom-flannel-087557) DBG | domain custom-flannel-087557 has defined MAC address 52:54:00:fc:19:0e in network mk-custom-flannel-087557
	I0116 04:06:25.051321  516047 main.go:141] libmachine: (custom-flannel-087557) DBG | unable to find current IP address of domain custom-flannel-087557 in network mk-custom-flannel-087557
	I0116 04:06:25.051356  516047 main.go:141] libmachine: (custom-flannel-087557) DBG | I0116 04:06:25.051270  517501 retry.go:31] will retry after 1.447293124s: waiting for machine to come up
	I0116 04:06:26.501033  516047 main.go:141] libmachine: (custom-flannel-087557) DBG | domain custom-flannel-087557 has defined MAC address 52:54:00:fc:19:0e in network mk-custom-flannel-087557
	I0116 04:06:26.501621  516047 main.go:141] libmachine: (custom-flannel-087557) DBG | unable to find current IP address of domain custom-flannel-087557 in network mk-custom-flannel-087557
	I0116 04:06:26.501649  516047 main.go:141] libmachine: (custom-flannel-087557) DBG | I0116 04:06:26.501569  517501 retry.go:31] will retry after 2.237142111s: waiting for machine to come up
	I0116 04:06:26.477945  515760 out.go:204]   - Generating certificates and keys ...
	I0116 04:06:26.478130  515760 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0116 04:06:26.478242  515760 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0116 04:06:26.482368  515760 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0116 04:06:26.542105  515760 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0116 04:06:26.823575  515760 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0116 04:06:27.161044  515760 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0116 04:06:27.215527  515760 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0116 04:06:27.215705  515760 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [calico-087557 localhost] and IPs [192.168.61.99 127.0.0.1 ::1]
	I0116 04:06:27.365385  515760 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0116 04:06:27.365600  515760 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [calico-087557 localhost] and IPs [192.168.61.99 127.0.0.1 ::1]
	I0116 04:06:27.424476  515760 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0116 04:06:27.644804  515760 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0116 04:06:27.757954  515760 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0116 04:06:27.758295  515760 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0116 04:06:28.136862  515760 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0116 04:06:28.365410  515760 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0116 04:06:28.635584  515760 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0116 04:06:28.822419  515760 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0116 04:06:28.823008  515760 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0116 04:06:28.825590  515760 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0116 04:06:28.828123  515760 out.go:204]   - Booting up control plane ...
	I0116 04:06:28.828279  515760 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0116 04:06:28.828392  515760 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0116 04:06:28.830932  515760 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0116 04:06:28.849398  515760 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0116 04:06:28.849877  515760 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0116 04:06:28.850023  515760 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0116 04:06:29.023587  515760 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	
	
	==> CRI-O <==
	-- Journal begins at Tue 2024-01-16 03:44:00 UTC, ends at Tue 2024-01-16 04:06:30 UTC. --
	Jan 16 04:06:30 default-k8s-diff-port-434445 crio[734]: time="2024-01-16 04:06:30.169501381Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705377990169476557,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=ddedd2f4-0978-4739-991e-a22a77bf5ea8 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 04:06:30 default-k8s-diff-port-434445 crio[734]: time="2024-01-16 04:06:30.170542846Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=374e3c51-844b-4029-9d0d-2f758231b1f7 name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 04:06:30 default-k8s-diff-port-434445 crio[734]: time="2024-01-16 04:06:30.170614050Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=374e3c51-844b-4029-9d0d-2f758231b1f7 name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 04:06:30 default-k8s-diff-port-434445 crio[734]: time="2024-01-16 04:06:30.170880886Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2bc79d2ed159177045ced7d31622ca72da51b64db46b8371b62d9f4fdd3e34a3,PodSandboxId:4561671f5fcb007566d4db43fecf2846c64dc43235451e5f6b0f65b582f95b10,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1705376688774713895,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3f347086-cbef-4c9e-b11c-1a72f9c19ae7,},Annotations:map[string]string{io.kubernetes.container.hash: 5da410ea,io.kubernetes.container.restartCount: 1,io.kubernetes
.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a07ae23e6e9e3c589264d0be7095896549a4b71fa2d0b06805a3b1b3bea16f6a,PodSandboxId:b6d30bb49a20301387ae7d8e9e003dd1b636d0a9dfcda82b07590a91cbcdde66,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1705376687094229686,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-pmx8n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d3c9941-8938-4d4a-b6d8-9b542ca6f1ca,},Annotations:map[string]string{io.kubernetes.container.hash: 77baf89e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33ba3a03d878abc86a194defd715bb61a0066e49063e3823002bd3ec421da138,PodSandboxId:99327b7ab530194d5cd29db66547aa5b2145dc98a9c91851c9b0be0b0559744a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705376680388302973,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: 16fd4585-3d75-40c3-a28d-4134375f4e3d,},Annotations:map[string]string{io.kubernetes.container.hash: 4cba5769,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4b27881ef90cea325e8f306fac88a76052eec15a693c291288664b8f4ebcc2a,PodSandboxId:99327b7ab530194d5cd29db66547aa5b2145dc98a9c91851c9b0be0b0559744a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1705376679282926091,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: 16fd4585-3d75-40c3-a28d-4134375f4e3d,},Annotations:map[string]string{io.kubernetes.container.hash: 4cba5769,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44f71a7069827d97c7326cbd36638ec36ff39cbbe0a1e4e61e7c010a38d2e123,PodSandboxId:cf8dd051894cf58df172502fa9f75fb2d8f730055919321a8de103caf178242e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1705376679281404110,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dcbqg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e
ba1f9bf-6aa7-40cd-b57c-745c2d0cc414,},Annotations:map[string]string{io.kubernetes.container.hash: 1d2c9957,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2758ac4468b10765b35629ffc54228655bc18f79a654cc03e91929ebdc3cf90,PodSandboxId:783287f7b4e9cf031d72eb66efe436eba5ab0a30f24ebb043333f6ff3807d918,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1705376673348681964,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-434445,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 912de32c93aa16eea0b5111acb3790b0,},An
notations:map[string]string{io.kubernetes.container.hash: db6a5abf,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e60387e0e2800d67c99eb3566f31ad675c0db6c22379fc28ee9a447af5f5023c,PodSandboxId:9da14cfaa8df2939b5d42680f6cfbe488680ccdc33024aa69d28f299aee16e81,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1705376672824748403,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-434445,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5781851edbe2deb41d2d85e284e5498,},An
notations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1438a3832328a03b94c3d408a2e332c0fb0adab915a9d4f26994e84c0509fac9,PodSandboxId:046c0722a41e06a9d2a31bed7e3a5ed7d20aa4471027282eb3b81ce385d51607,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1705376672383994818,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-434445,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e
69452d8d25407a36c42c29e7263d7a5,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9861ff0fbab72660648bfcb49b82810bada99f73a55cb0aa359ba114825b8f3,PodSandboxId:83a160f9a9ab53bd3efcf9446a3cb64629883944e6b11993834ed1cba2cd3565,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1705376672261780520,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-434445,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8
ea8b0a4a0eac607795856ec116732b7,},Annotations:map[string]string{io.kubernetes.container.hash: 1c54be68,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=374e3c51-844b-4029-9d0d-2f758231b1f7 name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 04:06:30 default-k8s-diff-port-434445 crio[734]: time="2024-01-16 04:06:30.218312655Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=d9bfa3a0-4c35-483a-8876-6d9ba30d493f name=/runtime.v1.RuntimeService/Version
	Jan 16 04:06:30 default-k8s-diff-port-434445 crio[734]: time="2024-01-16 04:06:30.218395267Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=d9bfa3a0-4c35-483a-8876-6d9ba30d493f name=/runtime.v1.RuntimeService/Version
	Jan 16 04:06:30 default-k8s-diff-port-434445 crio[734]: time="2024-01-16 04:06:30.221885983Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=d8df7c5e-5ee6-4619-b8b7-f58af4625906 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 04:06:30 default-k8s-diff-port-434445 crio[734]: time="2024-01-16 04:06:30.222655530Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705377990222629844,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=d8df7c5e-5ee6-4619-b8b7-f58af4625906 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 04:06:30 default-k8s-diff-port-434445 crio[734]: time="2024-01-16 04:06:30.223511217Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=c0977fad-b029-4066-b4e0-cacd43596702 name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 04:06:30 default-k8s-diff-port-434445 crio[734]: time="2024-01-16 04:06:30.223565621Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=c0977fad-b029-4066-b4e0-cacd43596702 name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 04:06:30 default-k8s-diff-port-434445 crio[734]: time="2024-01-16 04:06:30.223766197Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2bc79d2ed159177045ced7d31622ca72da51b64db46b8371b62d9f4fdd3e34a3,PodSandboxId:4561671f5fcb007566d4db43fecf2846c64dc43235451e5f6b0f65b582f95b10,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1705376688774713895,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3f347086-cbef-4c9e-b11c-1a72f9c19ae7,},Annotations:map[string]string{io.kubernetes.container.hash: 5da410ea,io.kubernetes.container.restartCount: 1,io.kubernetes
.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a07ae23e6e9e3c589264d0be7095896549a4b71fa2d0b06805a3b1b3bea16f6a,PodSandboxId:b6d30bb49a20301387ae7d8e9e003dd1b636d0a9dfcda82b07590a91cbcdde66,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1705376687094229686,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-pmx8n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d3c9941-8938-4d4a-b6d8-9b542ca6f1ca,},Annotations:map[string]string{io.kubernetes.container.hash: 77baf89e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33ba3a03d878abc86a194defd715bb61a0066e49063e3823002bd3ec421da138,PodSandboxId:99327b7ab530194d5cd29db66547aa5b2145dc98a9c91851c9b0be0b0559744a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705376680388302973,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: 16fd4585-3d75-40c3-a28d-4134375f4e3d,},Annotations:map[string]string{io.kubernetes.container.hash: 4cba5769,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4b27881ef90cea325e8f306fac88a76052eec15a693c291288664b8f4ebcc2a,PodSandboxId:99327b7ab530194d5cd29db66547aa5b2145dc98a9c91851c9b0be0b0559744a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1705376679282926091,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: 16fd4585-3d75-40c3-a28d-4134375f4e3d,},Annotations:map[string]string{io.kubernetes.container.hash: 4cba5769,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44f71a7069827d97c7326cbd36638ec36ff39cbbe0a1e4e61e7c010a38d2e123,PodSandboxId:cf8dd051894cf58df172502fa9f75fb2d8f730055919321a8de103caf178242e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1705376679281404110,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dcbqg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e
ba1f9bf-6aa7-40cd-b57c-745c2d0cc414,},Annotations:map[string]string{io.kubernetes.container.hash: 1d2c9957,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2758ac4468b10765b35629ffc54228655bc18f79a654cc03e91929ebdc3cf90,PodSandboxId:783287f7b4e9cf031d72eb66efe436eba5ab0a30f24ebb043333f6ff3807d918,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1705376673348681964,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-434445,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 912de32c93aa16eea0b5111acb3790b0,},An
notations:map[string]string{io.kubernetes.container.hash: db6a5abf,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e60387e0e2800d67c99eb3566f31ad675c0db6c22379fc28ee9a447af5f5023c,PodSandboxId:9da14cfaa8df2939b5d42680f6cfbe488680ccdc33024aa69d28f299aee16e81,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1705376672824748403,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-434445,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5781851edbe2deb41d2d85e284e5498,},An
notations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1438a3832328a03b94c3d408a2e332c0fb0adab915a9d4f26994e84c0509fac9,PodSandboxId:046c0722a41e06a9d2a31bed7e3a5ed7d20aa4471027282eb3b81ce385d51607,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1705376672383994818,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-434445,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e
69452d8d25407a36c42c29e7263d7a5,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9861ff0fbab72660648bfcb49b82810bada99f73a55cb0aa359ba114825b8f3,PodSandboxId:83a160f9a9ab53bd3efcf9446a3cb64629883944e6b11993834ed1cba2cd3565,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1705376672261780520,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-434445,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8
ea8b0a4a0eac607795856ec116732b7,},Annotations:map[string]string{io.kubernetes.container.hash: 1c54be68,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=c0977fad-b029-4066-b4e0-cacd43596702 name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 04:06:30 default-k8s-diff-port-434445 crio[734]: time="2024-01-16 04:06:30.272534573Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=05c9df45-5555-406c-a653-bce87d54c835 name=/runtime.v1.RuntimeService/Version
	Jan 16 04:06:30 default-k8s-diff-port-434445 crio[734]: time="2024-01-16 04:06:30.272595965Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=05c9df45-5555-406c-a653-bce87d54c835 name=/runtime.v1.RuntimeService/Version
	Jan 16 04:06:30 default-k8s-diff-port-434445 crio[734]: time="2024-01-16 04:06:30.273918882Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=5276793f-8901-4f8a-b4e9-42a86cc3d298 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 04:06:30 default-k8s-diff-port-434445 crio[734]: time="2024-01-16 04:06:30.274423319Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705377990274408622,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=5276793f-8901-4f8a-b4e9-42a86cc3d298 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 04:06:30 default-k8s-diff-port-434445 crio[734]: time="2024-01-16 04:06:30.275223591Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=baa2b0bc-86e6-4449-9979-63d9b35880f2 name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 04:06:30 default-k8s-diff-port-434445 crio[734]: time="2024-01-16 04:06:30.275270228Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=baa2b0bc-86e6-4449-9979-63d9b35880f2 name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 04:06:30 default-k8s-diff-port-434445 crio[734]: time="2024-01-16 04:06:30.275470582Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2bc79d2ed159177045ced7d31622ca72da51b64db46b8371b62d9f4fdd3e34a3,PodSandboxId:4561671f5fcb007566d4db43fecf2846c64dc43235451e5f6b0f65b582f95b10,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1705376688774713895,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3f347086-cbef-4c9e-b11c-1a72f9c19ae7,},Annotations:map[string]string{io.kubernetes.container.hash: 5da410ea,io.kubernetes.container.restartCount: 1,io.kubernetes
.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a07ae23e6e9e3c589264d0be7095896549a4b71fa2d0b06805a3b1b3bea16f6a,PodSandboxId:b6d30bb49a20301387ae7d8e9e003dd1b636d0a9dfcda82b07590a91cbcdde66,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1705376687094229686,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-pmx8n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d3c9941-8938-4d4a-b6d8-9b542ca6f1ca,},Annotations:map[string]string{io.kubernetes.container.hash: 77baf89e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33ba3a03d878abc86a194defd715bb61a0066e49063e3823002bd3ec421da138,PodSandboxId:99327b7ab530194d5cd29db66547aa5b2145dc98a9c91851c9b0be0b0559744a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705376680388302973,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: 16fd4585-3d75-40c3-a28d-4134375f4e3d,},Annotations:map[string]string{io.kubernetes.container.hash: 4cba5769,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4b27881ef90cea325e8f306fac88a76052eec15a693c291288664b8f4ebcc2a,PodSandboxId:99327b7ab530194d5cd29db66547aa5b2145dc98a9c91851c9b0be0b0559744a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1705376679282926091,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: 16fd4585-3d75-40c3-a28d-4134375f4e3d,},Annotations:map[string]string{io.kubernetes.container.hash: 4cba5769,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44f71a7069827d97c7326cbd36638ec36ff39cbbe0a1e4e61e7c010a38d2e123,PodSandboxId:cf8dd051894cf58df172502fa9f75fb2d8f730055919321a8de103caf178242e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1705376679281404110,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dcbqg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e
ba1f9bf-6aa7-40cd-b57c-745c2d0cc414,},Annotations:map[string]string{io.kubernetes.container.hash: 1d2c9957,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2758ac4468b10765b35629ffc54228655bc18f79a654cc03e91929ebdc3cf90,PodSandboxId:783287f7b4e9cf031d72eb66efe436eba5ab0a30f24ebb043333f6ff3807d918,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1705376673348681964,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-434445,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 912de32c93aa16eea0b5111acb3790b0,},An
notations:map[string]string{io.kubernetes.container.hash: db6a5abf,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e60387e0e2800d67c99eb3566f31ad675c0db6c22379fc28ee9a447af5f5023c,PodSandboxId:9da14cfaa8df2939b5d42680f6cfbe488680ccdc33024aa69d28f299aee16e81,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1705376672824748403,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-434445,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5781851edbe2deb41d2d85e284e5498,},An
notations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1438a3832328a03b94c3d408a2e332c0fb0adab915a9d4f26994e84c0509fac9,PodSandboxId:046c0722a41e06a9d2a31bed7e3a5ed7d20aa4471027282eb3b81ce385d51607,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1705376672383994818,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-434445,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e
69452d8d25407a36c42c29e7263d7a5,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9861ff0fbab72660648bfcb49b82810bada99f73a55cb0aa359ba114825b8f3,PodSandboxId:83a160f9a9ab53bd3efcf9446a3cb64629883944e6b11993834ed1cba2cd3565,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1705376672261780520,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-434445,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8
ea8b0a4a0eac607795856ec116732b7,},Annotations:map[string]string{io.kubernetes.container.hash: 1c54be68,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=baa2b0bc-86e6-4449-9979-63d9b35880f2 name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 04:06:30 default-k8s-diff-port-434445 crio[734]: time="2024-01-16 04:06:30.328815449Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=31ead492-6df9-4248-a95b-e00dd73ccba8 name=/runtime.v1.RuntimeService/Version
	Jan 16 04:06:30 default-k8s-diff-port-434445 crio[734]: time="2024-01-16 04:06:30.328897550Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=31ead492-6df9-4248-a95b-e00dd73ccba8 name=/runtime.v1.RuntimeService/Version
	Jan 16 04:06:30 default-k8s-diff-port-434445 crio[734]: time="2024-01-16 04:06:30.330507045Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=5d3a4c20-c8f5-415b-ac69-728e674e62e0 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 04:06:30 default-k8s-diff-port-434445 crio[734]: time="2024-01-16 04:06:30.331043626Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705377990331026105,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=5d3a4c20-c8f5-415b-ac69-728e674e62e0 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 04:06:30 default-k8s-diff-port-434445 crio[734]: time="2024-01-16 04:06:30.331896740Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=aedc5f5c-ea69-4501-8670-dd87ccd82a20 name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 04:06:30 default-k8s-diff-port-434445 crio[734]: time="2024-01-16 04:06:30.331968757Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=aedc5f5c-ea69-4501-8670-dd87ccd82a20 name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 04:06:30 default-k8s-diff-port-434445 crio[734]: time="2024-01-16 04:06:30.332775461Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2bc79d2ed159177045ced7d31622ca72da51b64db46b8371b62d9f4fdd3e34a3,PodSandboxId:4561671f5fcb007566d4db43fecf2846c64dc43235451e5f6b0f65b582f95b10,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1705376688774713895,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3f347086-cbef-4c9e-b11c-1a72f9c19ae7,},Annotations:map[string]string{io.kubernetes.container.hash: 5da410ea,io.kubernetes.container.restartCount: 1,io.kubernetes
.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a07ae23e6e9e3c589264d0be7095896549a4b71fa2d0b06805a3b1b3bea16f6a,PodSandboxId:b6d30bb49a20301387ae7d8e9e003dd1b636d0a9dfcda82b07590a91cbcdde66,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1705376687094229686,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-pmx8n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d3c9941-8938-4d4a-b6d8-9b542ca6f1ca,},Annotations:map[string]string{io.kubernetes.container.hash: 77baf89e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33ba3a03d878abc86a194defd715bb61a0066e49063e3823002bd3ec421da138,PodSandboxId:99327b7ab530194d5cd29db66547aa5b2145dc98a9c91851c9b0be0b0559744a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705376680388302973,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: 16fd4585-3d75-40c3-a28d-4134375f4e3d,},Annotations:map[string]string{io.kubernetes.container.hash: 4cba5769,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4b27881ef90cea325e8f306fac88a76052eec15a693c291288664b8f4ebcc2a,PodSandboxId:99327b7ab530194d5cd29db66547aa5b2145dc98a9c91851c9b0be0b0559744a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1705376679282926091,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: 16fd4585-3d75-40c3-a28d-4134375f4e3d,},Annotations:map[string]string{io.kubernetes.container.hash: 4cba5769,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44f71a7069827d97c7326cbd36638ec36ff39cbbe0a1e4e61e7c010a38d2e123,PodSandboxId:cf8dd051894cf58df172502fa9f75fb2d8f730055919321a8de103caf178242e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1705376679281404110,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dcbqg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e
ba1f9bf-6aa7-40cd-b57c-745c2d0cc414,},Annotations:map[string]string{io.kubernetes.container.hash: 1d2c9957,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2758ac4468b10765b35629ffc54228655bc18f79a654cc03e91929ebdc3cf90,PodSandboxId:783287f7b4e9cf031d72eb66efe436eba5ab0a30f24ebb043333f6ff3807d918,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1705376673348681964,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-434445,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 912de32c93aa16eea0b5111acb3790b0,},An
notations:map[string]string{io.kubernetes.container.hash: db6a5abf,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e60387e0e2800d67c99eb3566f31ad675c0db6c22379fc28ee9a447af5f5023c,PodSandboxId:9da14cfaa8df2939b5d42680f6cfbe488680ccdc33024aa69d28f299aee16e81,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1705376672824748403,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-434445,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5781851edbe2deb41d2d85e284e5498,},An
notations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1438a3832328a03b94c3d408a2e332c0fb0adab915a9d4f26994e84c0509fac9,PodSandboxId:046c0722a41e06a9d2a31bed7e3a5ed7d20aa4471027282eb3b81ce385d51607,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1705376672383994818,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-434445,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e
69452d8d25407a36c42c29e7263d7a5,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9861ff0fbab72660648bfcb49b82810bada99f73a55cb0aa359ba114825b8f3,PodSandboxId:83a160f9a9ab53bd3efcf9446a3cb64629883944e6b11993834ed1cba2cd3565,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1705376672261780520,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-434445,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8
ea8b0a4a0eac607795856ec116732b7,},Annotations:map[string]string{io.kubernetes.container.hash: 1c54be68,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=aedc5f5c-ea69-4501-8670-dd87ccd82a20 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	2bc79d2ed1591       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   21 minutes ago      Running             busybox                   1                   4561671f5fcb0       busybox
	a07ae23e6e9e3       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      21 minutes ago      Running             coredns                   1                   b6d30bb49a203       coredns-5dd5756b68-pmx8n
	33ba3a03d878a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      21 minutes ago      Running             storage-provisioner       3                   99327b7ab5301       storage-provisioner
	a4b27881ef90c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      21 minutes ago      Exited              storage-provisioner       2                   99327b7ab5301       storage-provisioner
	44f71a7069827       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      21 minutes ago      Running             kube-proxy                1                   cf8dd051894cf       kube-proxy-dcbqg
	e2758ac4468b1       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      21 minutes ago      Running             etcd                      1                   783287f7b4e9c       etcd-default-k8s-diff-port-434445
	e60387e0e2800       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      21 minutes ago      Running             kube-scheduler            1                   9da14cfaa8df2       kube-scheduler-default-k8s-diff-port-434445
	1438a3832328a       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      21 minutes ago      Running             kube-controller-manager   1                   046c0722a41e0       kube-controller-manager-default-k8s-diff-port-434445
	f9861ff0fbab7       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      21 minutes ago      Running             kube-apiserver            1                   83a160f9a9ab5       kube-apiserver-default-k8s-diff-port-434445
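	
	The listing above reflects the same data that CRI-O returns for the CRI ListContainers RPC logged in debug mode earlier in this section. As an illustrative aid (not part of the captured logs), a minimal Go sketch of issuing that RPC directly could look roughly like this, assuming CRI-O's default socket path (matching the node's cri-socket annotation) and the k8s.io/cri-api client:
	
	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)
	
	func main() {
		// Assumption: CRI-O's default socket, unix:///var/run/crio/crio.sock.
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()
	
		client := runtimeapi.NewRuntimeServiceClient(conn)
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()
	
		// An empty filter returns the full container list, as seen in the debug responses above.
		resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
		if err != nil {
			panic(err)
		}
		for _, c := range resp.Containers {
			// Print a truncated ID, name, state and attempt count, similar to the table above.
			fmt.Printf("%s  %-25s  %s  attempt=%d\n",
				c.Id[:13], c.Metadata.Name, c.State, c.Metadata.Attempt)
		}
	}
	
	Running crictl ps -a on the node produces the same table; the sketch is only meant to show which runtime API the "container status" section is reporting.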
	
	
	==> coredns [a07ae23e6e9e3c589264d0be7095896549a4b71fa2d0b06805a3b1b3bea16f6a] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:57065 - 55204 "HINFO IN 8254892050912566778.576422238651280398. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.008852051s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-434445
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-434445
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6e8fa5f64d0e7272be43ff25ed3826261f0a2578
	                    minikube.k8s.io/name=default-k8s-diff-port-434445
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_16T03_37_14_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Jan 2024 03:37:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-434445
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Jan 2024 04:06:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Jan 2024 04:05:31 +0000   Tue, 16 Jan 2024 03:37:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Jan 2024 04:05:31 +0000   Tue, 16 Jan 2024 03:37:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Jan 2024 04:05:31 +0000   Tue, 16 Jan 2024 03:37:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Jan 2024 04:05:31 +0000   Tue, 16 Jan 2024 03:44:47 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.236
	  Hostname:    default-k8s-diff-port-434445
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 163fbf991c964ac0a0d338e8efd64b6b
	  System UUID:                163fbf99-1c96-4ac0-a0d3-38e8efd64b6b
	  Boot ID:                    8cd7e9b2-7d8c-46ff-a75a-a4d21eb06250
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 coredns-5dd5756b68-pmx8n                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     29m
	  kube-system                 etcd-default-k8s-diff-port-434445                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         29m
	  kube-system                 kube-apiserver-default-k8s-diff-port-434445             250m (12%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-434445    200m (10%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-proxy-dcbqg                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-scheduler-default-k8s-diff-port-434445             100m (5%)     0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 metrics-server-57f55c9bc5-894n2                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         28m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
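	
	As a quick cross-check of the summary above (recomputed here, not taken from the captured output): 850m requested of the node's 2000m allocatable CPU is 42.5%, shown as 42%; 370Mi requested of 2165900Ki (about 2115Mi) allocatable memory is roughly 17.5%, shown as 17%; and the 170Mi memory limit is roughly 8%.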
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 28m                kube-proxy       
	  Normal  Starting                 21m                kube-proxy       
	  Normal  NodeAllocatableEnforced  29m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  29m                kubelet          Node default-k8s-diff-port-434445 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29m                kubelet          Node default-k8s-diff-port-434445 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     29m                kubelet          Node default-k8s-diff-port-434445 status is now: NodeHasSufficientPID
	  Normal  Starting                 29m                kubelet          Starting kubelet.
	  Normal  NodeReady                29m                kubelet          Node default-k8s-diff-port-434445 status is now: NodeReady
	  Normal  RegisteredNode           29m                node-controller  Node default-k8s-diff-port-434445 event: Registered Node default-k8s-diff-port-434445 in Controller
	  Normal  Starting                 21m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  21m (x8 over 21m)  kubelet          Node default-k8s-diff-port-434445 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m (x8 over 21m)  kubelet          Node default-k8s-diff-port-434445 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21m (x7 over 21m)  kubelet          Node default-k8s-diff-port-434445 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           21m                node-controller  Node default-k8s-diff-port-434445 event: Registered Node default-k8s-diff-port-434445 in Controller
	
	
	==> dmesg <==
	[Jan16 03:43] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.087035] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.752421] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.467470] systemd-fstab-generator[113]: Ignoring "noauto" for root device
	[  +0.142678] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[Jan16 03:44] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.901294] systemd-fstab-generator[661]: Ignoring "noauto" for root device
	[  +0.160476] systemd-fstab-generator[672]: Ignoring "noauto" for root device
	[  +0.188007] systemd-fstab-generator[685]: Ignoring "noauto" for root device
	[  +0.111181] systemd-fstab-generator[696]: Ignoring "noauto" for root device
	[  +0.250435] systemd-fstab-generator[720]: Ignoring "noauto" for root device
	[ +18.727963] systemd-fstab-generator[933]: Ignoring "noauto" for root device
	[ +15.002290] kauditd_printk_skb: 19 callbacks suppressed
	[  +9.621656] kauditd_printk_skb: 9 callbacks suppressed
	[Jan16 04:04] hrtimer: interrupt took 4491856 ns
	
	
	==> etcd [e2758ac4468b10765b35629ffc54228655bc18f79a654cc03e91929ebdc3cf90] <==
	{"level":"info","ts":"2024-01-16T03:59:35.514196Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1078,"took":"1.778608ms","hash":951186474}
	{"level":"info","ts":"2024-01-16T03:59:35.514294Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":951186474,"revision":1078,"compact-revision":836}
	{"level":"info","ts":"2024-01-16T04:04:04.237973Z","caller":"traceutil/trace.go:171","msg":"trace[1350288625] transaction","detail":"{read_only:false; response_revision:1539; number_of_response:1; }","duration":"162.448361ms","start":"2024-01-16T04:04:04.075477Z","end":"2024-01-16T04:04:04.237926Z","steps":["trace[1350288625] 'process raft request'  (duration: 162.298895ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-16T04:04:04.500858Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"130.532484ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9695842149227396991 > lease_revoke:<id:068e8d105ef57f2f>","response":"size:27"}
	{"level":"info","ts":"2024-01-16T04:04:35.524952Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1321}
	{"level":"info","ts":"2024-01-16T04:04:35.527419Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1321,"took":"2.001839ms","hash":3371122492}
	{"level":"info","ts":"2024-01-16T04:04:35.527512Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3371122492,"revision":1321,"compact-revision":1078}
	{"level":"info","ts":"2024-01-16T04:04:56.77475Z","caller":"traceutil/trace.go:171","msg":"trace[1480556571] transaction","detail":"{read_only:false; response_revision:1582; number_of_response:1; }","duration":"103.907309ms","start":"2024-01-16T04:04:56.670809Z","end":"2024-01-16T04:04:56.774717Z","steps":["trace[1480556571] 'process raft request'  (duration: 103.677916ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-16T04:04:59.274658Z","caller":"traceutil/trace.go:171","msg":"trace[992213683] transaction","detail":"{read_only:false; response_revision:1584; number_of_response:1; }","duration":"208.217185ms","start":"2024-01-16T04:04:59.066421Z","end":"2024-01-16T04:04:59.274638Z","steps":["trace[992213683] 'process raft request'  (duration: 125.469924ms)","trace[992213683] 'compare'  (duration: 82.609048ms)"],"step_count":2}
	{"level":"warn","ts":"2024-01-16T04:05:19.409745Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"253.434657ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9695842149227397355 username:\"kube-apiserver-etcd-client\" auth_revision:1 > lease_grant:<ttl:15-second id:068e8d105ef580ea>","response":"size:39"}
	{"level":"warn","ts":"2024-01-16T04:05:19.41018Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-16T04:05:19.019478Z","time spent":"390.68931ms","remote":"127.0.0.1:59164","response type":"/etcdserverpb.Lease/LeaseGrant","request count":-1,"request size":-1,"response count":-1,"response size":-1,"request content":""}
	{"level":"info","ts":"2024-01-16T04:05:19.948374Z","caller":"traceutil/trace.go:171","msg":"trace[2081003992] linearizableReadLoop","detail":"{readStateIndex:1888; appliedIndex:1887; }","duration":"431.666458ms","start":"2024-01-16T04:05:19.516657Z","end":"2024-01-16T04:05:19.948323Z","steps":["trace[2081003992] 'read index received'  (duration: 349.022444ms)","trace[2081003992] 'applied index is now lower than readState.Index'  (duration: 82.64294ms)"],"step_count":2}
	{"level":"info","ts":"2024-01-16T04:05:19.948578Z","caller":"traceutil/trace.go:171","msg":"trace[50630462] transaction","detail":"{read_only:false; response_revision:1600; number_of_response:1; }","duration":"537.402465ms","start":"2024-01-16T04:05:19.411157Z","end":"2024-01-16T04:05:19.948559Z","steps":["trace[50630462] 'process raft request'  (duration: 454.608677ms)","trace[50630462] 'compare'  (duration: 82.439996ms)"],"step_count":2}
	{"level":"warn","ts":"2024-01-16T04:05:19.948837Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-16T04:05:19.411139Z","time spent":"537.506216ms","remote":"127.0.0.1:59164","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":120,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/masterleases/192.168.50.236\" mod_revision:1592 > success:<request_put:<key:\"/registry/masterleases/192.168.50.236\" value_size:67 lease:472470112372621546 >> failure:<request_range:<key:\"/registry/masterleases/192.168.50.236\" > >"}
	{"level":"warn","ts":"2024-01-16T04:05:19.948958Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"177.583899ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2024-01-16T04:05:19.948845Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"432.202534ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-01-16T04:05:19.949198Z","caller":"traceutil/trace.go:171","msg":"trace[2146007021] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:1600; }","duration":"432.552275ms","start":"2024-01-16T04:05:19.516632Z","end":"2024-01-16T04:05:19.949184Z","steps":["trace[2146007021] 'agreement among raft nodes before linearized reading'  (duration: 431.953076ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-16T04:05:19.949022Z","caller":"traceutil/trace.go:171","msg":"trace[1504165692] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1600; }","duration":"177.700321ms","start":"2024-01-16T04:05:19.771299Z","end":"2024-01-16T04:05:19.949Z","steps":["trace[1504165692] 'agreement among raft nodes before linearized reading'  (duration: 177.570847ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-16T04:05:19.949245Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-16T04:05:19.516589Z","time spent":"432.640233ms","remote":"127.0.0.1:59200","response type":"/etcdserverpb.KV/Range","request count":0,"request size":76,"response count":0,"response size":27,"request content":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" "}
	{"level":"info","ts":"2024-01-16T04:05:25.173047Z","caller":"traceutil/trace.go:171","msg":"trace[1367120791] transaction","detail":"{read_only:false; response_revision:1605; number_of_response:1; }","duration":"147.972922ms","start":"2024-01-16T04:05:25.025051Z","end":"2024-01-16T04:05:25.173024Z","steps":["trace[1367120791] 'process raft request'  (duration: 147.830074ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-16T04:06:22.253879Z","caller":"traceutil/trace.go:171","msg":"trace[619385298] transaction","detail":"{read_only:false; response_revision:1653; number_of_response:1; }","duration":"179.39308ms","start":"2024-01-16T04:06:22.074441Z","end":"2024-01-16T04:06:22.253834Z","steps":["trace[619385298] 'process raft request'  (duration: 179.238605ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-16T04:06:25.890762Z","caller":"traceutil/trace.go:171","msg":"trace[116356750] linearizableReadLoop","detail":"{readStateIndex:1957; appliedIndex:1956; }","duration":"119.324721ms","start":"2024-01-16T04:06:25.771413Z","end":"2024-01-16T04:06:25.890738Z","steps":["trace[116356750] 'read index received'  (duration: 119.129359ms)","trace[116356750] 'applied index is now lower than readState.Index'  (duration: 194.847µs)"],"step_count":2}
	{"level":"warn","ts":"2024-01-16T04:06:25.890982Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"119.58364ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-01-16T04:06:25.891016Z","caller":"traceutil/trace.go:171","msg":"trace[1316935306] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1656; }","duration":"119.654706ms","start":"2024-01-16T04:06:25.771351Z","end":"2024-01-16T04:06:25.891006Z","steps":["trace[1316935306] 'agreement among raft nodes before linearized reading'  (duration: 119.498049ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-16T04:06:25.891427Z","caller":"traceutil/trace.go:171","msg":"trace[1609043098] transaction","detail":"{read_only:false; response_revision:1656; number_of_response:1; }","duration":"288.87465ms","start":"2024-01-16T04:06:25.602532Z","end":"2024-01-16T04:06:25.891406Z","steps":["trace[1609043098] 'process raft request'  (duration: 288.085675ms)"],"step_count":1}
	
	
	==> kernel <==
	 04:06:30 up 22 min,  0 users,  load average: 0.08, 0.25, 0.24
	Linux default-k8s-diff-port-434445 5.10.57 #1 SMP Thu Dec 28 22:04:21 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [f9861ff0fbab72660648bfcb49b82810bada99f73a55cb0aa359ba114825b8f3] <==
	I0116 04:03:37.485040       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0116 04:04:37.485663       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0116 04:04:37.697173       1 handler_proxy.go:93] no RequestInfo found in the context
	E0116 04:04:37.697309       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0116 04:04:37.697828       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0116 04:04:38.697883       1 handler_proxy.go:93] no RequestInfo found in the context
	E0116 04:04:38.697994       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0116 04:04:38.698011       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0116 04:04:38.698264       1 handler_proxy.go:93] no RequestInfo found in the context
	E0116 04:04:38.698388       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0116 04:04:38.699649       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0116 04:05:19.949768       1 trace.go:236] Trace[1573678266]: "GuaranteedUpdate etcd3" audit-id:,key:/masterleases/192.168.50.236,type:*v1.Endpoints,resource:apiServerIPInfo (16-Jan-2024 04:05:19.018) (total time: 931ms):
	Trace[1573678266]: ---"Transaction prepared" 391ms (04:05:19.410)
	Trace[1573678266]: ---"Txn call completed" 538ms (04:05:19.949)
	Trace[1573678266]: [931.503193ms] [931.503193ms] END
	I0116 04:05:37.484815       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0116 04:05:38.699277       1 handler_proxy.go:93] no RequestInfo found in the context
	E0116 04:05:38.699514       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0116 04:05:38.699610       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0116 04:05:38.699807       1 handler_proxy.go:93] no RequestInfo found in the context
	E0116 04:05:38.699873       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0116 04:05:38.701412       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [1438a3832328a03b94c3d408a2e332c0fb0adab915a9d4f26994e84c0509fac9] <==
	I0116 04:01:04.258318       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="94.299µs"
	E0116 04:01:20.533860       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0116 04:01:21.030361       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0116 04:01:50.541418       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0116 04:01:51.039869       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0116 04:02:20.547974       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0116 04:02:21.049019       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0116 04:02:50.555409       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0116 04:02:51.057220       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0116 04:03:20.561002       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0116 04:03:21.066671       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0116 04:03:50.567611       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0116 04:03:51.079273       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0116 04:04:20.579814       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0116 04:04:21.088250       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0116 04:04:50.587738       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0116 04:04:51.107986       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0116 04:05:20.594023       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0116 04:05:21.119910       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0116 04:05:50.601548       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0116 04:05:51.130905       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0116 04:06:01.266679       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="1.764959ms"
	I0116 04:06:16.257331       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="268.782µs"
	E0116 04:06:20.608892       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0116 04:06:21.142167       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [44f71a7069827d97c7326cbd36638ec36ff39cbbe0a1e4e61e7c010a38d2e123] <==
	I0116 03:44:39.911464       1 server_others.go:69] "Using iptables proxy"
	I0116 03:44:39.930615       1 node.go:141] Successfully retrieved node IP: 192.168.50.236
	I0116 03:44:39.998054       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0116 03:44:39.998176       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0116 03:44:40.001390       1 server_others.go:152] "Using iptables Proxier"
	I0116 03:44:40.001470       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0116 03:44:40.001738       1 server.go:846] "Version info" version="v1.28.4"
	I0116 03:44:40.001801       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0116 03:44:40.003432       1 config.go:188] "Starting service config controller"
	I0116 03:44:40.003496       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0116 03:44:40.003525       1 config.go:97] "Starting endpoint slice config controller"
	I0116 03:44:40.003531       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0116 03:44:40.004460       1 config.go:315] "Starting node config controller"
	I0116 03:44:40.004594       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0116 03:44:40.104217       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0116 03:44:40.104331       1 shared_informer.go:318] Caches are synced for service config
	I0116 03:44:40.104720       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [e60387e0e2800d67c99eb3566f31ad675c0db6c22379fc28ee9a447af5f5023c] <==
	I0116 03:44:35.275580       1 serving.go:348] Generated self-signed cert in-memory
	W0116 03:44:37.553754       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0116 03:44:37.553881       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0116 03:44:37.554028       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0116 03:44:37.554059       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0116 03:44:37.677921       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I0116 03:44:37.678214       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0116 03:44:37.682808       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0116 03:44:37.682914       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0116 03:44:37.682948       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0116 03:44:37.682979       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0116 03:44:37.783825       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Tue 2024-01-16 03:44:00 UTC, ends at Tue 2024-01-16 04:06:31 UTC. --
	Jan 16 04:03:55 default-k8s-diff-port-434445 kubelet[939]: E0116 04:03:55.239509     939 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-894n2" podUID="46e4892a-d026-4a9d-88bc-128e92848808"
	Jan 16 04:04:06 default-k8s-diff-port-434445 kubelet[939]: E0116 04:04:06.240339     939 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-894n2" podUID="46e4892a-d026-4a9d-88bc-128e92848808"
	Jan 16 04:04:21 default-k8s-diff-port-434445 kubelet[939]: E0116 04:04:21.241205     939 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-894n2" podUID="46e4892a-d026-4a9d-88bc-128e92848808"
	Jan 16 04:04:31 default-k8s-diff-port-434445 kubelet[939]: E0116 04:04:31.279313     939 container_manager_linux.go:514] "Failed to find cgroups of kubelet" err="cpu and memory cgroup hierarchy not unified.  cpu: /, memory: /system.slice/kubelet.service"
	Jan 16 04:04:31 default-k8s-diff-port-434445 kubelet[939]: E0116 04:04:31.283609     939 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 16 04:04:31 default-k8s-diff-port-434445 kubelet[939]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 16 04:04:31 default-k8s-diff-port-434445 kubelet[939]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 16 04:04:31 default-k8s-diff-port-434445 kubelet[939]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 16 04:04:35 default-k8s-diff-port-434445 kubelet[939]: E0116 04:04:35.240339     939 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-894n2" podUID="46e4892a-d026-4a9d-88bc-128e92848808"
	Jan 16 04:04:47 default-k8s-diff-port-434445 kubelet[939]: E0116 04:04:47.240873     939 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-894n2" podUID="46e4892a-d026-4a9d-88bc-128e92848808"
	Jan 16 04:05:00 default-k8s-diff-port-434445 kubelet[939]: E0116 04:05:00.240553     939 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-894n2" podUID="46e4892a-d026-4a9d-88bc-128e92848808"
	Jan 16 04:05:12 default-k8s-diff-port-434445 kubelet[939]: E0116 04:05:12.239000     939 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-894n2" podUID="46e4892a-d026-4a9d-88bc-128e92848808"
	Jan 16 04:05:27 default-k8s-diff-port-434445 kubelet[939]: E0116 04:05:27.239451     939 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-894n2" podUID="46e4892a-d026-4a9d-88bc-128e92848808"
	Jan 16 04:05:31 default-k8s-diff-port-434445 kubelet[939]: E0116 04:05:31.281523     939 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 16 04:05:31 default-k8s-diff-port-434445 kubelet[939]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 16 04:05:31 default-k8s-diff-port-434445 kubelet[939]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 16 04:05:31 default-k8s-diff-port-434445 kubelet[939]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 16 04:05:38 default-k8s-diff-port-434445 kubelet[939]: E0116 04:05:38.239744     939 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-894n2" podUID="46e4892a-d026-4a9d-88bc-128e92848808"
	Jan 16 04:05:50 default-k8s-diff-port-434445 kubelet[939]: E0116 04:05:50.253374     939 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jan 16 04:05:50 default-k8s-diff-port-434445 kubelet[939]: E0116 04:05:50.253428     939 kuberuntime_image.go:53] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jan 16 04:05:50 default-k8s-diff-port-434445 kubelet[939]: E0116 04:05:50.253661     939 kuberuntime_manager.go:1261] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-qtgxf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:
&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessag
ePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-57f55c9bc5-894n2_kube-system(46e4892a-d026-4a9d-88bc-128e92848808): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jan 16 04:05:50 default-k8s-diff-port-434445 kubelet[939]: E0116 04:05:50.253770     939 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-57f55c9bc5-894n2" podUID="46e4892a-d026-4a9d-88bc-128e92848808"
	Jan 16 04:06:01 default-k8s-diff-port-434445 kubelet[939]: E0116 04:06:01.239599     939 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-894n2" podUID="46e4892a-d026-4a9d-88bc-128e92848808"
	Jan 16 04:06:16 default-k8s-diff-port-434445 kubelet[939]: E0116 04:06:16.239128     939 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-894n2" podUID="46e4892a-d026-4a9d-88bc-128e92848808"
	Jan 16 04:06:29 default-k8s-diff-port-434445 kubelet[939]: E0116 04:06:29.240647     939 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-894n2" podUID="46e4892a-d026-4a9d-88bc-128e92848808"
	
	
	==> storage-provisioner [33ba3a03d878abc86a194defd715bb61a0066e49063e3823002bd3ec421da138] <==
	I0116 03:44:41.298709       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0116 03:44:41.311840       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0116 03:44:41.312012       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0116 03:44:58.807224       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0116 03:44:58.807595       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1bb524f8-0322-4186-a5b5-937d8bcb583c", APIVersion:"v1", ResourceVersion:"590", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-434445_c7c2d6f2-5b1f-4148-aca5-112744344eb7 became leader
	I0116 03:44:58.808818       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-434445_c7c2d6f2-5b1f-4148-aca5-112744344eb7!
	I0116 03:44:58.910191       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-434445_c7c2d6f2-5b1f-4148-aca5-112744344eb7!
	
	
	==> storage-provisioner [a4b27881ef90cea325e8f306fac88a76052eec15a693c291288664b8f4ebcc2a] <==
	I0116 03:44:39.707649       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0116 03:44:39.754443       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-434445 -n default-k8s-diff-port-434445
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-434445 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-894n2
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-434445 describe pod metrics-server-57f55c9bc5-894n2
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-434445 describe pod metrics-server-57f55c9bc5-894n2: exit status 1 (98.722698ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-894n2" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-434445 describe pod metrics-server-57f55c9bc5-894n2: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (503.35s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (308.45s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0116 03:59:18.183385  475478 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/ingress-addon-legacy-873808/client.crt: no such file or directory
start_stop_delete_test.go:287: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-615980 -n embed-certs-615980
start_stop_delete_test.go:287: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-01-16 04:04:08.88454028 +0000 UTC m=+5392.822762902
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-615980 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-615980 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.266µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-615980 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-615980 -n embed-certs-615980
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-615980 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-615980 logs -n 25: (1.590739345s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p embed-certs-615980                                  | embed-certs-615980           | jenkins | v1.32.0 | 16 Jan 24 03:34 UTC | 16 Jan 24 03:35 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-690771                              | cert-expiration-690771       | jenkins | v1.32.0 | 16 Jan 24 03:35 UTC | 16 Jan 24 03:36 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-615980            | embed-certs-615980           | jenkins | v1.32.0 | 16 Jan 24 03:35 UTC | 16 Jan 24 03:35 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-615980                                  | embed-certs-615980           | jenkins | v1.32.0 | 16 Jan 24 03:35 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-666547             | no-preload-666547            | jenkins | v1.32.0 | 16 Jan 24 03:35 UTC | 16 Jan 24 03:35 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-666547                                   | no-preload-666547            | jenkins | v1.32.0 | 16 Jan 24 03:35 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-696770        | old-k8s-version-696770       | jenkins | v1.32.0 | 16 Jan 24 03:36 UTC | 16 Jan 24 03:36 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-696770                              | old-k8s-version-696770       | jenkins | v1.32.0 | 16 Jan 24 03:36 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-690771                              | cert-expiration-690771       | jenkins | v1.32.0 | 16 Jan 24 03:36 UTC | 16 Jan 24 03:36 UTC |
	| delete  | -p                                                     | disable-driver-mounts-673948 | jenkins | v1.32.0 | 16 Jan 24 03:36 UTC | 16 Jan 24 03:36 UTC |
	|         | disable-driver-mounts-673948                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-434445 | jenkins | v1.32.0 | 16 Jan 24 03:36 UTC | 16 Jan 24 03:37 UTC |
	|         | default-k8s-diff-port-434445                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-434445  | default-k8s-diff-port-434445 | jenkins | v1.32.0 | 16 Jan 24 03:37 UTC | 16 Jan 24 03:37 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-434445 | jenkins | v1.32.0 | 16 Jan 24 03:37 UTC |                     |
	|         | default-k8s-diff-port-434445                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-615980                 | embed-certs-615980           | jenkins | v1.32.0 | 16 Jan 24 03:38 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-666547                  | no-preload-666547            | jenkins | v1.32.0 | 16 Jan 24 03:38 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-615980                                  | embed-certs-615980           | jenkins | v1.32.0 | 16 Jan 24 03:38 UTC | 16 Jan 24 03:49 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| start   | -p no-preload-666547                                   | no-preload-666547            | jenkins | v1.32.0 | 16 Jan 24 03:38 UTC | 16 Jan 24 03:48 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-696770             | old-k8s-version-696770       | jenkins | v1.32.0 | 16 Jan 24 03:38 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-696770                              | old-k8s-version-696770       | jenkins | v1.32.0 | 16 Jan 24 03:38 UTC | 16 Jan 24 03:51 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-434445       | default-k8s-diff-port-434445 | jenkins | v1.32.0 | 16 Jan 24 03:40 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-434445 | jenkins | v1.32.0 | 16 Jan 24 03:40 UTC | 16 Jan 24 03:49 UTC |
	|         | default-k8s-diff-port-434445                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-696770                              | old-k8s-version-696770       | jenkins | v1.32.0 | 16 Jan 24 04:03 UTC | 16 Jan 24 04:03 UTC |
	| start   | -p newest-cni-889166 --memory=2200 --alsologtostderr   | newest-cni-889166            | jenkins | v1.32.0 | 16 Jan 24 04:03 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| delete  | -p no-preload-666547                                   | no-preload-666547            | jenkins | v1.32.0 | 16 Jan 24 04:03 UTC | 16 Jan 24 04:03 UTC |
	| start   | -p auto-087557 --memory=3072                           | auto-087557                  | jenkins | v1.32.0 | 16 Jan 24 04:03 UTC |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/16 04:03:50
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0116 04:03:50.186521  512926 out.go:296] Setting OutFile to fd 1 ...
	I0116 04:03:50.186808  512926 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 04:03:50.186818  512926 out.go:309] Setting ErrFile to fd 2...
	I0116 04:03:50.186823  512926 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 04:03:50.187071  512926 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17965-468241/.minikube/bin
	I0116 04:03:50.187732  512926 out.go:303] Setting JSON to false
	I0116 04:03:50.188867  512926 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":17182,"bootTime":1705360648,"procs":211,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1048-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0116 04:03:50.188939  512926 start.go:138] virtualization: kvm guest
	I0116 04:03:50.191457  512926 out.go:177] * [auto-087557] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0116 04:03:50.192885  512926 out.go:177]   - MINIKUBE_LOCATION=17965
	I0116 04:03:50.192901  512926 notify.go:220] Checking for updates...
	I0116 04:03:50.194298  512926 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0116 04:03:50.196078  512926 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17965-468241/kubeconfig
	I0116 04:03:50.197827  512926 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17965-468241/.minikube
	I0116 04:03:50.199405  512926 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0116 04:03:50.201068  512926 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0116 04:03:50.203372  512926 config.go:182] Loaded profile config "default-k8s-diff-port-434445": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 04:03:50.203497  512926 config.go:182] Loaded profile config "embed-certs-615980": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 04:03:50.203592  512926 config.go:182] Loaded profile config "newest-cni-889166": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0116 04:03:50.203675  512926 driver.go:392] Setting default libvirt URI to qemu:///system
	I0116 04:03:50.244243  512926 out.go:177] * Using the kvm2 driver based on user configuration
	I0116 04:03:50.245609  512926 start.go:298] selected driver: kvm2
	I0116 04:03:50.245639  512926 start.go:902] validating driver "kvm2" against <nil>
	I0116 04:03:50.245651  512926 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0116 04:03:50.246693  512926 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0116 04:03:50.246806  512926 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17965-468241/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0116 04:03:50.263454  512926 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0116 04:03:50.263530  512926 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0116 04:03:50.263809  512926 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0116 04:03:50.263859  512926 cni.go:84] Creating CNI manager for ""
	I0116 04:03:50.263874  512926 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 04:03:50.263888  512926 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0116 04:03:50.263897  512926 start_flags.go:321] config:
	{Name:auto-087557 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:auto-087557 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket
: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 04:03:50.264052  512926 iso.go:125] acquiring lock: {Name:mk2f3231f0eeeb23816ac363851489181398d1c6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0116 04:03:50.266449  512926 out.go:177] * Starting control plane node auto-087557 in cluster auto-087557
	I0116 04:03:51.977269  512568 main.go:141] libmachine: (newest-cni-889166) DBG | domain newest-cni-889166 has defined MAC address 52:54:00:0d:88:aa in network mk-newest-cni-889166
	I0116 04:03:51.977861  512568 main.go:141] libmachine: (newest-cni-889166) DBG | unable to find current IP address of domain newest-cni-889166 in network mk-newest-cni-889166
	I0116 04:03:51.977885  512568 main.go:141] libmachine: (newest-cni-889166) DBG | I0116 04:03:51.977829  512590 retry.go:31] will retry after 3.627211173s: waiting for machine to come up
	I0116 04:03:50.268193  512926 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0116 04:03:50.268254  512926 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17965-468241/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0116 04:03:50.268268  512926 cache.go:56] Caching tarball of preloaded images
	I0116 04:03:50.268365  512926 preload.go:174] Found /home/jenkins/minikube-integration/17965-468241/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0116 04:03:50.268381  512926 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0116 04:03:50.268512  512926 profile.go:148] Saving config to /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/auto-087557/config.json ...
	I0116 04:03:50.268538  512926 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/auto-087557/config.json: {Name:mke993804f0368c61b81876690163feec9a50721 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 04:03:50.268684  512926 start.go:365] acquiring machines lock for auto-087557: {Name:mk901e5fae8c1a578d989b520053c709bdbdcf06 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0116 04:03:57.258023  512926 start.go:369] acquired machines lock for "auto-087557" in 6.989293787s
	I0116 04:03:57.258093  512926 start.go:93] Provisioning new machine with config: &{Name:auto-087557 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:auto-087557 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0116 04:03:57.258235  512926 start.go:125] createHost starting for "" (driver="kvm2")
	I0116 04:03:55.606807  512568 main.go:141] libmachine: (newest-cni-889166) DBG | domain newest-cni-889166 has defined MAC address 52:54:00:0d:88:aa in network mk-newest-cni-889166
	I0116 04:03:55.607420  512568 main.go:141] libmachine: (newest-cni-889166) Found IP for machine: 192.168.61.174
	I0116 04:03:55.607466  512568 main.go:141] libmachine: (newest-cni-889166) DBG | domain newest-cni-889166 has current primary IP address 192.168.61.174 and MAC address 52:54:00:0d:88:aa in network mk-newest-cni-889166
	I0116 04:03:55.607480  512568 main.go:141] libmachine: (newest-cni-889166) Reserving static IP address...
	I0116 04:03:55.607855  512568 main.go:141] libmachine: (newest-cni-889166) DBG | unable to find host DHCP lease matching {name: "newest-cni-889166", mac: "52:54:00:0d:88:aa", ip: "192.168.61.174"} in network mk-newest-cni-889166
	I0116 04:03:55.691461  512568 main.go:141] libmachine: (newest-cni-889166) DBG | Getting to WaitForSSH function...
	I0116 04:03:55.691505  512568 main.go:141] libmachine: (newest-cni-889166) Reserved static IP address: 192.168.61.174
	I0116 04:03:55.691521  512568 main.go:141] libmachine: (newest-cni-889166) Waiting for SSH to be available...
	I0116 04:03:55.694758  512568 main.go:141] libmachine: (newest-cni-889166) DBG | domain newest-cni-889166 has defined MAC address 52:54:00:0d:88:aa in network mk-newest-cni-889166
	I0116 04:03:55.695207  512568 main.go:141] libmachine: (newest-cni-889166) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:88:aa", ip: ""} in network mk-newest-cni-889166: {Iface:virbr3 ExpiryTime:2024-01-16 05:03:49 +0000 UTC Type:0 Mac:52:54:00:0d:88:aa Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:minikube Clientid:01:52:54:00:0d:88:aa}
	I0116 04:03:55.695243  512568 main.go:141] libmachine: (newest-cni-889166) DBG | domain newest-cni-889166 has defined IP address 192.168.61.174 and MAC address 52:54:00:0d:88:aa in network mk-newest-cni-889166
	I0116 04:03:55.695412  512568 main.go:141] libmachine: (newest-cni-889166) DBG | Using SSH client type: external
	I0116 04:03:55.695442  512568 main.go:141] libmachine: (newest-cni-889166) DBG | Using SSH private key: /home/jenkins/minikube-integration/17965-468241/.minikube/machines/newest-cni-889166/id_rsa (-rw-------)
	I0116 04:03:55.695483  512568 main.go:141] libmachine: (newest-cni-889166) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.174 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17965-468241/.minikube/machines/newest-cni-889166/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0116 04:03:55.695499  512568 main.go:141] libmachine: (newest-cni-889166) DBG | About to run SSH command:
	I0116 04:03:55.695528  512568 main.go:141] libmachine: (newest-cni-889166) DBG | exit 0
	I0116 04:03:55.787961  512568 main.go:141] libmachine: (newest-cni-889166) DBG | SSH cmd err, output: <nil>: 
	I0116 04:03:55.788279  512568 main.go:141] libmachine: (newest-cni-889166) KVM machine creation complete!
	I0116 04:03:55.788634  512568 main.go:141] libmachine: (newest-cni-889166) Calling .GetConfigRaw
	I0116 04:03:55.789199  512568 main.go:141] libmachine: (newest-cni-889166) Calling .DriverName
	I0116 04:03:55.789442  512568 main.go:141] libmachine: (newest-cni-889166) Calling .DriverName
	I0116 04:03:55.789617  512568 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0116 04:03:55.789634  512568 main.go:141] libmachine: (newest-cni-889166) Calling .GetState
	I0116 04:03:55.791056  512568 main.go:141] libmachine: Detecting operating system of created instance...
	I0116 04:03:55.791071  512568 main.go:141] libmachine: Waiting for SSH to be available...
	I0116 04:03:55.791078  512568 main.go:141] libmachine: Getting to WaitForSSH function...
	I0116 04:03:55.791085  512568 main.go:141] libmachine: (newest-cni-889166) Calling .GetSSHHostname
	I0116 04:03:55.793318  512568 main.go:141] libmachine: (newest-cni-889166) DBG | domain newest-cni-889166 has defined MAC address 52:54:00:0d:88:aa in network mk-newest-cni-889166
	I0116 04:03:55.793685  512568 main.go:141] libmachine: (newest-cni-889166) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:88:aa", ip: ""} in network mk-newest-cni-889166: {Iface:virbr3 ExpiryTime:2024-01-16 05:03:49 +0000 UTC Type:0 Mac:52:54:00:0d:88:aa Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:newest-cni-889166 Clientid:01:52:54:00:0d:88:aa}
	I0116 04:03:55.793719  512568 main.go:141] libmachine: (newest-cni-889166) DBG | domain newest-cni-889166 has defined IP address 192.168.61.174 and MAC address 52:54:00:0d:88:aa in network mk-newest-cni-889166
	I0116 04:03:55.793836  512568 main.go:141] libmachine: (newest-cni-889166) Calling .GetSSHPort
	I0116 04:03:55.794059  512568 main.go:141] libmachine: (newest-cni-889166) Calling .GetSSHKeyPath
	I0116 04:03:55.794232  512568 main.go:141] libmachine: (newest-cni-889166) Calling .GetSSHKeyPath
	I0116 04:03:55.794390  512568 main.go:141] libmachine: (newest-cni-889166) Calling .GetSSHUsername
	I0116 04:03:55.794572  512568 main.go:141] libmachine: Using SSH client type: native
	I0116 04:03:55.795038  512568 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.61.174 22 <nil> <nil>}
	I0116 04:03:55.795054  512568 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0116 04:03:55.911350  512568 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0116 04:03:55.911409  512568 main.go:141] libmachine: Detecting the provisioner...
	I0116 04:03:55.911425  512568 main.go:141] libmachine: (newest-cni-889166) Calling .GetSSHHostname
	I0116 04:03:55.914655  512568 main.go:141] libmachine: (newest-cni-889166) DBG | domain newest-cni-889166 has defined MAC address 52:54:00:0d:88:aa in network mk-newest-cni-889166
	I0116 04:03:55.915052  512568 main.go:141] libmachine: (newest-cni-889166) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:88:aa", ip: ""} in network mk-newest-cni-889166: {Iface:virbr3 ExpiryTime:2024-01-16 05:03:49 +0000 UTC Type:0 Mac:52:54:00:0d:88:aa Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:newest-cni-889166 Clientid:01:52:54:00:0d:88:aa}
	I0116 04:03:55.915086  512568 main.go:141] libmachine: (newest-cni-889166) DBG | domain newest-cni-889166 has defined IP address 192.168.61.174 and MAC address 52:54:00:0d:88:aa in network mk-newest-cni-889166
	I0116 04:03:55.915228  512568 main.go:141] libmachine: (newest-cni-889166) Calling .GetSSHPort
	I0116 04:03:55.915457  512568 main.go:141] libmachine: (newest-cni-889166) Calling .GetSSHKeyPath
	I0116 04:03:55.915644  512568 main.go:141] libmachine: (newest-cni-889166) Calling .GetSSHKeyPath
	I0116 04:03:55.915858  512568 main.go:141] libmachine: (newest-cni-889166) Calling .GetSSHUsername
	I0116 04:03:55.916023  512568 main.go:141] libmachine: Using SSH client type: native
	I0116 04:03:55.916395  512568 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.61.174 22 <nil> <nil>}
	I0116 04:03:55.916415  512568 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0116 04:03:56.033401  512568 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g19d536a-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I0116 04:03:56.033488  512568 main.go:141] libmachine: found compatible host: buildroot
	I0116 04:03:56.033503  512568 main.go:141] libmachine: Provisioning with buildroot...
	I0116 04:03:56.033515  512568 main.go:141] libmachine: (newest-cni-889166) Calling .GetMachineName
	I0116 04:03:56.033845  512568 buildroot.go:166] provisioning hostname "newest-cni-889166"
	I0116 04:03:56.033881  512568 main.go:141] libmachine: (newest-cni-889166) Calling .GetMachineName
	I0116 04:03:56.034096  512568 main.go:141] libmachine: (newest-cni-889166) Calling .GetSSHHostname
	I0116 04:03:56.037158  512568 main.go:141] libmachine: (newest-cni-889166) DBG | domain newest-cni-889166 has defined MAC address 52:54:00:0d:88:aa in network mk-newest-cni-889166
	I0116 04:03:56.037581  512568 main.go:141] libmachine: (newest-cni-889166) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:88:aa", ip: ""} in network mk-newest-cni-889166: {Iface:virbr3 ExpiryTime:2024-01-16 05:03:49 +0000 UTC Type:0 Mac:52:54:00:0d:88:aa Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:newest-cni-889166 Clientid:01:52:54:00:0d:88:aa}
	I0116 04:03:56.037625  512568 main.go:141] libmachine: (newest-cni-889166) DBG | domain newest-cni-889166 has defined IP address 192.168.61.174 and MAC address 52:54:00:0d:88:aa in network mk-newest-cni-889166
	I0116 04:03:56.037775  512568 main.go:141] libmachine: (newest-cni-889166) Calling .GetSSHPort
	I0116 04:03:56.037999  512568 main.go:141] libmachine: (newest-cni-889166) Calling .GetSSHKeyPath
	I0116 04:03:56.038174  512568 main.go:141] libmachine: (newest-cni-889166) Calling .GetSSHKeyPath
	I0116 04:03:56.038318  512568 main.go:141] libmachine: (newest-cni-889166) Calling .GetSSHUsername
	I0116 04:03:56.038470  512568 main.go:141] libmachine: Using SSH client type: native
	I0116 04:03:56.038831  512568 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.61.174 22 <nil> <nil>}
	I0116 04:03:56.038847  512568 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-889166 && echo "newest-cni-889166" | sudo tee /etc/hostname
	I0116 04:03:56.169604  512568 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-889166
	
	I0116 04:03:56.169632  512568 main.go:141] libmachine: (newest-cni-889166) Calling .GetSSHHostname
	I0116 04:03:56.172664  512568 main.go:141] libmachine: (newest-cni-889166) DBG | domain newest-cni-889166 has defined MAC address 52:54:00:0d:88:aa in network mk-newest-cni-889166
	I0116 04:03:56.173131  512568 main.go:141] libmachine: (newest-cni-889166) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:88:aa", ip: ""} in network mk-newest-cni-889166: {Iface:virbr3 ExpiryTime:2024-01-16 05:03:49 +0000 UTC Type:0 Mac:52:54:00:0d:88:aa Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:newest-cni-889166 Clientid:01:52:54:00:0d:88:aa}
	I0116 04:03:56.173180  512568 main.go:141] libmachine: (newest-cni-889166) DBG | domain newest-cni-889166 has defined IP address 192.168.61.174 and MAC address 52:54:00:0d:88:aa in network mk-newest-cni-889166
	I0116 04:03:56.173326  512568 main.go:141] libmachine: (newest-cni-889166) Calling .GetSSHPort
	I0116 04:03:56.173567  512568 main.go:141] libmachine: (newest-cni-889166) Calling .GetSSHKeyPath
	I0116 04:03:56.173768  512568 main.go:141] libmachine: (newest-cni-889166) Calling .GetSSHKeyPath
	I0116 04:03:56.173929  512568 main.go:141] libmachine: (newest-cni-889166) Calling .GetSSHUsername
	I0116 04:03:56.174161  512568 main.go:141] libmachine: Using SSH client type: native
	I0116 04:03:56.174497  512568 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.61.174 22 <nil> <nil>}
	I0116 04:03:56.174531  512568 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-889166' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-889166/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-889166' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0116 04:03:56.301631  512568 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0116 04:03:56.301670  512568 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17965-468241/.minikube CaCertPath:/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17965-468241/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17965-468241/.minikube}
	I0116 04:03:56.301703  512568 buildroot.go:174] setting up certificates
	I0116 04:03:56.301714  512568 provision.go:83] configureAuth start
	I0116 04:03:56.301734  512568 main.go:141] libmachine: (newest-cni-889166) Calling .GetMachineName
	I0116 04:03:56.302126  512568 main.go:141] libmachine: (newest-cni-889166) Calling .GetIP
	I0116 04:03:56.305104  512568 main.go:141] libmachine: (newest-cni-889166) DBG | domain newest-cni-889166 has defined MAC address 52:54:00:0d:88:aa in network mk-newest-cni-889166
	I0116 04:03:56.305573  512568 main.go:141] libmachine: (newest-cni-889166) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:88:aa", ip: ""} in network mk-newest-cni-889166: {Iface:virbr3 ExpiryTime:2024-01-16 05:03:49 +0000 UTC Type:0 Mac:52:54:00:0d:88:aa Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:newest-cni-889166 Clientid:01:52:54:00:0d:88:aa}
	I0116 04:03:56.305615  512568 main.go:141] libmachine: (newest-cni-889166) DBG | domain newest-cni-889166 has defined IP address 192.168.61.174 and MAC address 52:54:00:0d:88:aa in network mk-newest-cni-889166
	I0116 04:03:56.305715  512568 main.go:141] libmachine: (newest-cni-889166) Calling .GetSSHHostname
	I0116 04:03:56.308076  512568 main.go:141] libmachine: (newest-cni-889166) DBG | domain newest-cni-889166 has defined MAC address 52:54:00:0d:88:aa in network mk-newest-cni-889166
	I0116 04:03:56.308469  512568 main.go:141] libmachine: (newest-cni-889166) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:88:aa", ip: ""} in network mk-newest-cni-889166: {Iface:virbr3 ExpiryTime:2024-01-16 05:03:49 +0000 UTC Type:0 Mac:52:54:00:0d:88:aa Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:newest-cni-889166 Clientid:01:52:54:00:0d:88:aa}
	I0116 04:03:56.308501  512568 main.go:141] libmachine: (newest-cni-889166) DBG | domain newest-cni-889166 has defined IP address 192.168.61.174 and MAC address 52:54:00:0d:88:aa in network mk-newest-cni-889166
	I0116 04:03:56.308667  512568 provision.go:138] copyHostCerts
	I0116 04:03:56.308736  512568 exec_runner.go:144] found /home/jenkins/minikube-integration/17965-468241/.minikube/ca.pem, removing ...
	I0116 04:03:56.308744  512568 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17965-468241/.minikube/ca.pem
	I0116 04:03:56.308827  512568 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17965-468241/.minikube/ca.pem (1078 bytes)
	I0116 04:03:56.308991  512568 exec_runner.go:144] found /home/jenkins/minikube-integration/17965-468241/.minikube/cert.pem, removing ...
	I0116 04:03:56.309004  512568 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17965-468241/.minikube/cert.pem
	I0116 04:03:56.309036  512568 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17965-468241/.minikube/cert.pem (1123 bytes)
	I0116 04:03:56.309126  512568 exec_runner.go:144] found /home/jenkins/minikube-integration/17965-468241/.minikube/key.pem, removing ...
	I0116 04:03:56.309140  512568 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17965-468241/.minikube/key.pem
	I0116 04:03:56.309187  512568 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17965-468241/.minikube/key.pem (1679 bytes)
	I0116 04:03:56.309276  512568 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17965-468241/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca-key.pem org=jenkins.newest-cni-889166 san=[192.168.61.174 192.168.61.174 localhost 127.0.0.1 minikube newest-cni-889166]
	I0116 04:03:56.462412  512568 provision.go:172] copyRemoteCerts
	I0116 04:03:56.462509  512568 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0116 04:03:56.462548  512568 main.go:141] libmachine: (newest-cni-889166) Calling .GetSSHHostname
	I0116 04:03:56.465649  512568 main.go:141] libmachine: (newest-cni-889166) DBG | domain newest-cni-889166 has defined MAC address 52:54:00:0d:88:aa in network mk-newest-cni-889166
	I0116 04:03:56.465975  512568 main.go:141] libmachine: (newest-cni-889166) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:88:aa", ip: ""} in network mk-newest-cni-889166: {Iface:virbr3 ExpiryTime:2024-01-16 05:03:49 +0000 UTC Type:0 Mac:52:54:00:0d:88:aa Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:newest-cni-889166 Clientid:01:52:54:00:0d:88:aa}
	I0116 04:03:56.466024  512568 main.go:141] libmachine: (newest-cni-889166) DBG | domain newest-cni-889166 has defined IP address 192.168.61.174 and MAC address 52:54:00:0d:88:aa in network mk-newest-cni-889166
	I0116 04:03:56.466241  512568 main.go:141] libmachine: (newest-cni-889166) Calling .GetSSHPort
	I0116 04:03:56.466468  512568 main.go:141] libmachine: (newest-cni-889166) Calling .GetSSHKeyPath
	I0116 04:03:56.466694  512568 main.go:141] libmachine: (newest-cni-889166) Calling .GetSSHUsername
	I0116 04:03:56.466858  512568 sshutil.go:53] new ssh client: &{IP:192.168.61.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/newest-cni-889166/id_rsa Username:docker}
	I0116 04:03:56.554961  512568 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0116 04:03:56.580672  512568 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0116 04:03:56.606087  512568 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0116 04:03:56.635238  512568 provision.go:86] duration metric: configureAuth took 333.503636ms
	I0116 04:03:56.635276  512568 buildroot.go:189] setting minikube options for container-runtime
	I0116 04:03:56.635495  512568 config.go:182] Loaded profile config "newest-cni-889166": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0116 04:03:56.635643  512568 main.go:141] libmachine: (newest-cni-889166) Calling .GetSSHHostname
	I0116 04:03:56.638682  512568 main.go:141] libmachine: (newest-cni-889166) DBG | domain newest-cni-889166 has defined MAC address 52:54:00:0d:88:aa in network mk-newest-cni-889166
	I0116 04:03:56.638979  512568 main.go:141] libmachine: (newest-cni-889166) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:88:aa", ip: ""} in network mk-newest-cni-889166: {Iface:virbr3 ExpiryTime:2024-01-16 05:03:49 +0000 UTC Type:0 Mac:52:54:00:0d:88:aa Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:newest-cni-889166 Clientid:01:52:54:00:0d:88:aa}
	I0116 04:03:56.639009  512568 main.go:141] libmachine: (newest-cni-889166) DBG | domain newest-cni-889166 has defined IP address 192.168.61.174 and MAC address 52:54:00:0d:88:aa in network mk-newest-cni-889166
	I0116 04:03:56.639296  512568 main.go:141] libmachine: (newest-cni-889166) Calling .GetSSHPort
	I0116 04:03:56.639492  512568 main.go:141] libmachine: (newest-cni-889166) Calling .GetSSHKeyPath
	I0116 04:03:56.639622  512568 main.go:141] libmachine: (newest-cni-889166) Calling .GetSSHKeyPath
	I0116 04:03:56.639759  512568 main.go:141] libmachine: (newest-cni-889166) Calling .GetSSHUsername
	I0116 04:03:56.639903  512568 main.go:141] libmachine: Using SSH client type: native
	I0116 04:03:56.640348  512568 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.61.174 22 <nil> <nil>}
	I0116 04:03:56.640370  512568 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0116 04:03:56.986166  512568 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0116 04:03:56.986235  512568 main.go:141] libmachine: Checking connection to Docker...
	I0116 04:03:56.986246  512568 main.go:141] libmachine: (newest-cni-889166) Calling .GetURL
	I0116 04:03:56.987977  512568 main.go:141] libmachine: (newest-cni-889166) DBG | Using libvirt version 6000000
	I0116 04:03:56.990864  512568 main.go:141] libmachine: (newest-cni-889166) DBG | domain newest-cni-889166 has defined MAC address 52:54:00:0d:88:aa in network mk-newest-cni-889166
	I0116 04:03:56.991257  512568 main.go:141] libmachine: (newest-cni-889166) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:88:aa", ip: ""} in network mk-newest-cni-889166: {Iface:virbr3 ExpiryTime:2024-01-16 05:03:49 +0000 UTC Type:0 Mac:52:54:00:0d:88:aa Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:newest-cni-889166 Clientid:01:52:54:00:0d:88:aa}
	I0116 04:03:56.991286  512568 main.go:141] libmachine: (newest-cni-889166) DBG | domain newest-cni-889166 has defined IP address 192.168.61.174 and MAC address 52:54:00:0d:88:aa in network mk-newest-cni-889166
	I0116 04:03:56.991493  512568 main.go:141] libmachine: Docker is up and running!
	I0116 04:03:56.991528  512568 main.go:141] libmachine: Reticulating splines...
	I0116 04:03:56.991537  512568 client.go:171] LocalClient.Create took 22.844960333s
	I0116 04:03:56.991562  512568 start.go:167] duration metric: libmachine.API.Create for "newest-cni-889166" took 22.845077852s
	I0116 04:03:56.991577  512568 start.go:300] post-start starting for "newest-cni-889166" (driver="kvm2")
	I0116 04:03:56.991594  512568 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0116 04:03:56.991620  512568 main.go:141] libmachine: (newest-cni-889166) Calling .DriverName
	I0116 04:03:56.991891  512568 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0116 04:03:56.991917  512568 main.go:141] libmachine: (newest-cni-889166) Calling .GetSSHHostname
	I0116 04:03:56.994294  512568 main.go:141] libmachine: (newest-cni-889166) DBG | domain newest-cni-889166 has defined MAC address 52:54:00:0d:88:aa in network mk-newest-cni-889166
	I0116 04:03:56.994715  512568 main.go:141] libmachine: (newest-cni-889166) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:88:aa", ip: ""} in network mk-newest-cni-889166: {Iface:virbr3 ExpiryTime:2024-01-16 05:03:49 +0000 UTC Type:0 Mac:52:54:00:0d:88:aa Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:newest-cni-889166 Clientid:01:52:54:00:0d:88:aa}
	I0116 04:03:56.994745  512568 main.go:141] libmachine: (newest-cni-889166) DBG | domain newest-cni-889166 has defined IP address 192.168.61.174 and MAC address 52:54:00:0d:88:aa in network mk-newest-cni-889166
	I0116 04:03:56.994853  512568 main.go:141] libmachine: (newest-cni-889166) Calling .GetSSHPort
	I0116 04:03:56.995108  512568 main.go:141] libmachine: (newest-cni-889166) Calling .GetSSHKeyPath
	I0116 04:03:56.995305  512568 main.go:141] libmachine: (newest-cni-889166) Calling .GetSSHUsername
	I0116 04:03:56.995457  512568 sshutil.go:53] new ssh client: &{IP:192.168.61.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/newest-cni-889166/id_rsa Username:docker}
	I0116 04:03:57.087451  512568 ssh_runner.go:195] Run: cat /etc/os-release
	I0116 04:03:57.092559  512568 info.go:137] Remote host: Buildroot 2021.02.12
	I0116 04:03:57.092598  512568 filesync.go:126] Scanning /home/jenkins/minikube-integration/17965-468241/.minikube/addons for local assets ...
	I0116 04:03:57.092685  512568 filesync.go:126] Scanning /home/jenkins/minikube-integration/17965-468241/.minikube/files for local assets ...
	I0116 04:03:57.092796  512568 filesync.go:149] local asset: /home/jenkins/minikube-integration/17965-468241/.minikube/files/etc/ssl/certs/4754782.pem -> 4754782.pem in /etc/ssl/certs
	I0116 04:03:57.092923  512568 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0116 04:03:57.103152  512568 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/files/etc/ssl/certs/4754782.pem --> /etc/ssl/certs/4754782.pem (1708 bytes)
	I0116 04:03:57.127945  512568 start.go:303] post-start completed in 136.348364ms
	I0116 04:03:57.128007  512568 main.go:141] libmachine: (newest-cni-889166) Calling .GetConfigRaw
	I0116 04:03:57.128690  512568 main.go:141] libmachine: (newest-cni-889166) Calling .GetIP
	I0116 04:03:57.131675  512568 main.go:141] libmachine: (newest-cni-889166) DBG | domain newest-cni-889166 has defined MAC address 52:54:00:0d:88:aa in network mk-newest-cni-889166
	I0116 04:03:57.132105  512568 main.go:141] libmachine: (newest-cni-889166) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:88:aa", ip: ""} in network mk-newest-cni-889166: {Iface:virbr3 ExpiryTime:2024-01-16 05:03:49 +0000 UTC Type:0 Mac:52:54:00:0d:88:aa Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:newest-cni-889166 Clientid:01:52:54:00:0d:88:aa}
	I0116 04:03:57.132139  512568 main.go:141] libmachine: (newest-cni-889166) DBG | domain newest-cni-889166 has defined IP address 192.168.61.174 and MAC address 52:54:00:0d:88:aa in network mk-newest-cni-889166
	I0116 04:03:57.132462  512568 profile.go:148] Saving config to /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/newest-cni-889166/config.json ...
	I0116 04:03:57.132684  512568 start.go:128] duration metric: createHost completed in 23.009570888s
	I0116 04:03:57.132711  512568 main.go:141] libmachine: (newest-cni-889166) Calling .GetSSHHostname
	I0116 04:03:57.135300  512568 main.go:141] libmachine: (newest-cni-889166) DBG | domain newest-cni-889166 has defined MAC address 52:54:00:0d:88:aa in network mk-newest-cni-889166
	I0116 04:03:57.135710  512568 main.go:141] libmachine: (newest-cni-889166) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:88:aa", ip: ""} in network mk-newest-cni-889166: {Iface:virbr3 ExpiryTime:2024-01-16 05:03:49 +0000 UTC Type:0 Mac:52:54:00:0d:88:aa Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:newest-cni-889166 Clientid:01:52:54:00:0d:88:aa}
	I0116 04:03:57.135742  512568 main.go:141] libmachine: (newest-cni-889166) DBG | domain newest-cni-889166 has defined IP address 192.168.61.174 and MAC address 52:54:00:0d:88:aa in network mk-newest-cni-889166
	I0116 04:03:57.135930  512568 main.go:141] libmachine: (newest-cni-889166) Calling .GetSSHPort
	I0116 04:03:57.136193  512568 main.go:141] libmachine: (newest-cni-889166) Calling .GetSSHKeyPath
	I0116 04:03:57.136349  512568 main.go:141] libmachine: (newest-cni-889166) Calling .GetSSHKeyPath
	I0116 04:03:57.136472  512568 main.go:141] libmachine: (newest-cni-889166) Calling .GetSSHUsername
	I0116 04:03:57.136637  512568 main.go:141] libmachine: Using SSH client type: native
	I0116 04:03:57.137053  512568 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.61.174 22 <nil> <nil>}
	I0116 04:03:57.137069  512568 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0116 04:03:57.257834  512568 main.go:141] libmachine: SSH cmd err, output: <nil>: 1705377837.235553953
	
	I0116 04:03:57.257868  512568 fix.go:206] guest clock: 1705377837.235553953
	I0116 04:03:57.257884  512568 fix.go:219] Guest: 2024-01-16 04:03:57.235553953 +0000 UTC Remote: 2024-01-16 04:03:57.132697051 +0000 UTC m=+23.155850574 (delta=102.856902ms)
	I0116 04:03:57.257909  512568 fix.go:190] guest clock delta is within tolerance: 102.856902ms
	I0116 04:03:57.257915  512568 start.go:83] releasing machines lock for "newest-cni-889166", held for 23.134909651s
	I0116 04:03:57.257938  512568 main.go:141] libmachine: (newest-cni-889166) Calling .DriverName
	I0116 04:03:57.258252  512568 main.go:141] libmachine: (newest-cni-889166) Calling .GetIP
	I0116 04:03:57.261551  512568 main.go:141] libmachine: (newest-cni-889166) DBG | domain newest-cni-889166 has defined MAC address 52:54:00:0d:88:aa in network mk-newest-cni-889166
	I0116 04:03:57.261932  512568 main.go:141] libmachine: (newest-cni-889166) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:88:aa", ip: ""} in network mk-newest-cni-889166: {Iface:virbr3 ExpiryTime:2024-01-16 05:03:49 +0000 UTC Type:0 Mac:52:54:00:0d:88:aa Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:newest-cni-889166 Clientid:01:52:54:00:0d:88:aa}
	I0116 04:03:57.261966  512568 main.go:141] libmachine: (newest-cni-889166) DBG | domain newest-cni-889166 has defined IP address 192.168.61.174 and MAC address 52:54:00:0d:88:aa in network mk-newest-cni-889166
	I0116 04:03:57.262249  512568 main.go:141] libmachine: (newest-cni-889166) Calling .DriverName
	I0116 04:03:57.262973  512568 main.go:141] libmachine: (newest-cni-889166) Calling .DriverName
	I0116 04:03:57.263225  512568 main.go:141] libmachine: (newest-cni-889166) Calling .DriverName
	I0116 04:03:57.263363  512568 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0116 04:03:57.263416  512568 main.go:141] libmachine: (newest-cni-889166) Calling .GetSSHHostname
	I0116 04:03:57.263471  512568 ssh_runner.go:195] Run: cat /version.json
	I0116 04:03:57.263493  512568 main.go:141] libmachine: (newest-cni-889166) Calling .GetSSHHostname
	I0116 04:03:57.266422  512568 main.go:141] libmachine: (newest-cni-889166) DBG | domain newest-cni-889166 has defined MAC address 52:54:00:0d:88:aa in network mk-newest-cni-889166
	I0116 04:03:57.266647  512568 main.go:141] libmachine: (newest-cni-889166) DBG | domain newest-cni-889166 has defined MAC address 52:54:00:0d:88:aa in network mk-newest-cni-889166
	I0116 04:03:57.266863  512568 main.go:141] libmachine: (newest-cni-889166) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:88:aa", ip: ""} in network mk-newest-cni-889166: {Iface:virbr3 ExpiryTime:2024-01-16 05:03:49 +0000 UTC Type:0 Mac:52:54:00:0d:88:aa Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:newest-cni-889166 Clientid:01:52:54:00:0d:88:aa}
	I0116 04:03:57.266893  512568 main.go:141] libmachine: (newest-cni-889166) DBG | domain newest-cni-889166 has defined IP address 192.168.61.174 and MAC address 52:54:00:0d:88:aa in network mk-newest-cni-889166
	I0116 04:03:57.267040  512568 main.go:141] libmachine: (newest-cni-889166) Calling .GetSSHPort
	I0116 04:03:57.267159  512568 main.go:141] libmachine: (newest-cni-889166) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:88:aa", ip: ""} in network mk-newest-cni-889166: {Iface:virbr3 ExpiryTime:2024-01-16 05:03:49 +0000 UTC Type:0 Mac:52:54:00:0d:88:aa Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:newest-cni-889166 Clientid:01:52:54:00:0d:88:aa}
	I0116 04:03:57.267191  512568 main.go:141] libmachine: (newest-cni-889166) DBG | domain newest-cni-889166 has defined IP address 192.168.61.174 and MAC address 52:54:00:0d:88:aa in network mk-newest-cni-889166
	I0116 04:03:57.267251  512568 main.go:141] libmachine: (newest-cni-889166) Calling .GetSSHKeyPath
	I0116 04:03:57.267317  512568 main.go:141] libmachine: (newest-cni-889166) Calling .GetSSHPort
	I0116 04:03:57.267441  512568 main.go:141] libmachine: (newest-cni-889166) Calling .GetSSHUsername
	I0116 04:03:57.267529  512568 main.go:141] libmachine: (newest-cni-889166) Calling .GetSSHKeyPath
	I0116 04:03:57.267581  512568 sshutil.go:53] new ssh client: &{IP:192.168.61.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/newest-cni-889166/id_rsa Username:docker}
	I0116 04:03:57.267670  512568 main.go:141] libmachine: (newest-cni-889166) Calling .GetSSHUsername
	I0116 04:03:57.267817  512568 sshutil.go:53] new ssh client: &{IP:192.168.61.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/newest-cni-889166/id_rsa Username:docker}
	I0116 04:03:57.388900  512568 ssh_runner.go:195] Run: systemctl --version
	I0116 04:03:57.397612  512568 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0116 04:03:57.572687  512568 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0116 04:03:57.579648  512568 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0116 04:03:57.579725  512568 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0116 04:03:57.603948  512568 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0116 04:03:57.603983  512568 start.go:475] detecting cgroup driver to use...
	I0116 04:03:57.604088  512568 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0116 04:03:57.625461  512568 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0116 04:03:57.642099  512568 docker.go:217] disabling cri-docker service (if available) ...
	I0116 04:03:57.642190  512568 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0116 04:03:57.660217  512568 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0116 04:03:57.675465  512568 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0116 04:03:57.789973  512568 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0116 04:03:57.947145  512568 docker.go:233] disabling docker service ...
	I0116 04:03:57.947235  512568 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0116 04:03:57.964891  512568 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0116 04:03:57.978285  512568 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0116 04:03:58.121774  512568 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0116 04:03:58.282373  512568 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0116 04:03:58.297999  512568 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0116 04:03:58.316780  512568 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0116 04:03:58.316856  512568 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 04:03:58.326700  512568 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0116 04:03:58.326803  512568 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 04:03:58.337998  512568 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 04:03:58.348529  512568 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 04:03:58.359420  512568 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0116 04:03:58.372942  512568 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0116 04:03:58.384219  512568 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0116 04:03:58.384297  512568 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0116 04:03:58.399463  512568 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0116 04:03:58.410201  512568 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0116 04:03:58.544813  512568 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0116 04:03:58.740228  512568 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0116 04:03:58.740389  512568 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0116 04:03:58.746668  512568 start.go:543] Will wait 60s for crictl version
	I0116 04:03:58.746752  512568 ssh_runner.go:195] Run: which crictl
	I0116 04:03:58.751233  512568 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0116 04:03:58.805355  512568 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0116 04:03:58.805460  512568 ssh_runner.go:195] Run: crio --version
	I0116 04:03:58.856509  512568 ssh_runner.go:195] Run: crio --version
	I0116 04:03:58.916025  512568 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.24.1 ...
	I0116 04:03:58.917397  512568 main.go:141] libmachine: (newest-cni-889166) Calling .GetIP
	I0116 04:03:58.920236  512568 main.go:141] libmachine: (newest-cni-889166) DBG | domain newest-cni-889166 has defined MAC address 52:54:00:0d:88:aa in network mk-newest-cni-889166
	I0116 04:03:58.920790  512568 main.go:141] libmachine: (newest-cni-889166) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:88:aa", ip: ""} in network mk-newest-cni-889166: {Iface:virbr3 ExpiryTime:2024-01-16 05:03:49 +0000 UTC Type:0 Mac:52:54:00:0d:88:aa Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:newest-cni-889166 Clientid:01:52:54:00:0d:88:aa}
	I0116 04:03:58.920822  512568 main.go:141] libmachine: (newest-cni-889166) DBG | domain newest-cni-889166 has defined IP address 192.168.61.174 and MAC address 52:54:00:0d:88:aa in network mk-newest-cni-889166
	I0116 04:03:58.921150  512568 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0116 04:03:58.926221  512568 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 04:03:58.943933  512568 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0116 04:03:58.945618  512568 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0116 04:03:58.945696  512568 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 04:03:58.988154  512568 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I0116 04:03:58.988247  512568 ssh_runner.go:195] Run: which lz4
	I0116 04:03:58.992608  512568 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0116 04:03:58.997378  512568 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0116 04:03:58.997421  512568 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (401853962 bytes)
	I0116 04:03:57.260602  512926 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0116 04:03:57.260837  512926 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 04:03:57.260902  512926 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 04:03:57.280480  512926 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33741
	I0116 04:03:57.280969  512926 main.go:141] libmachine: () Calling .GetVersion
	I0116 04:03:57.281523  512926 main.go:141] libmachine: Using API Version  1
	I0116 04:03:57.281560  512926 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 04:03:57.282044  512926 main.go:141] libmachine: () Calling .GetMachineName
	I0116 04:03:57.282299  512926 main.go:141] libmachine: (auto-087557) Calling .GetMachineName
	I0116 04:03:57.282532  512926 main.go:141] libmachine: (auto-087557) Calling .DriverName
	I0116 04:03:57.282738  512926 start.go:159] libmachine.API.Create for "auto-087557" (driver="kvm2")
	I0116 04:03:57.282779  512926 client.go:168] LocalClient.Create starting
	I0116 04:03:57.282846  512926 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem
	I0116 04:03:57.282904  512926 main.go:141] libmachine: Decoding PEM data...
	I0116 04:03:57.282931  512926 main.go:141] libmachine: Parsing certificate...
	I0116 04:03:57.283369  512926 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17965-468241/.minikube/certs/cert.pem
	I0116 04:03:57.283449  512926 main.go:141] libmachine: Decoding PEM data...
	I0116 04:03:57.283472  512926 main.go:141] libmachine: Parsing certificate...
	I0116 04:03:57.283506  512926 main.go:141] libmachine: Running pre-create checks...
	I0116 04:03:57.283527  512926 main.go:141] libmachine: (auto-087557) Calling .PreCreateCheck
	I0116 04:03:57.285001  512926 main.go:141] libmachine: (auto-087557) Calling .GetConfigRaw
	I0116 04:03:57.285589  512926 main.go:141] libmachine: Creating machine...
	I0116 04:03:57.285611  512926 main.go:141] libmachine: (auto-087557) Calling .Create
	I0116 04:03:57.285785  512926 main.go:141] libmachine: (auto-087557) Creating KVM machine...
	I0116 04:03:57.287254  512926 main.go:141] libmachine: (auto-087557) DBG | found existing default KVM network
	I0116 04:03:57.289015  512926 main.go:141] libmachine: (auto-087557) DBG | I0116 04:03:57.288850  512986 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ad0}
	I0116 04:03:57.295404  512926 main.go:141] libmachine: (auto-087557) DBG | trying to create private KVM network mk-auto-087557 192.168.39.0/24...
	I0116 04:03:57.386319  512926 main.go:141] libmachine: (auto-087557) DBG | private KVM network mk-auto-087557 192.168.39.0/24 created
	I0116 04:03:57.386368  512926 main.go:141] libmachine: (auto-087557) Setting up store path in /home/jenkins/minikube-integration/17965-468241/.minikube/machines/auto-087557 ...
	I0116 04:03:57.386389  512926 main.go:141] libmachine: (auto-087557) DBG | I0116 04:03:57.386119  512986 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17965-468241/.minikube
	I0116 04:03:57.386451  512926 main.go:141] libmachine: (auto-087557) Building disk image from file:///home/jenkins/minikube-integration/17965-468241/.minikube/cache/iso/amd64/minikube-v1.32.1-1703784139-17866-amd64.iso
	I0116 04:03:57.386486  512926 main.go:141] libmachine: (auto-087557) Downloading /home/jenkins/minikube-integration/17965-468241/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17965-468241/.minikube/cache/iso/amd64/minikube-v1.32.1-1703784139-17866-amd64.iso...
	I0116 04:03:57.652362  512926 main.go:141] libmachine: (auto-087557) DBG | I0116 04:03:57.652162  512986 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17965-468241/.minikube/machines/auto-087557/id_rsa...
	I0116 04:03:57.813002  512926 main.go:141] libmachine: (auto-087557) DBG | I0116 04:03:57.812836  512986 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17965-468241/.minikube/machines/auto-087557/auto-087557.rawdisk...
	I0116 04:03:57.813052  512926 main.go:141] libmachine: (auto-087557) DBG | Writing magic tar header
	I0116 04:03:57.813106  512926 main.go:141] libmachine: (auto-087557) Setting executable bit set on /home/jenkins/minikube-integration/17965-468241/.minikube/machines/auto-087557 (perms=drwx------)
	I0116 04:03:57.813132  512926 main.go:141] libmachine: (auto-087557) Setting executable bit set on /home/jenkins/minikube-integration/17965-468241/.minikube/machines (perms=drwxr-xr-x)
	I0116 04:03:57.813144  512926 main.go:141] libmachine: (auto-087557) DBG | Writing SSH key tar header
	I0116 04:03:57.813179  512926 main.go:141] libmachine: (auto-087557) DBG | I0116 04:03:57.812953  512986 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17965-468241/.minikube/machines/auto-087557 ...
	I0116 04:03:57.813200  512926 main.go:141] libmachine: (auto-087557) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17965-468241/.minikube/machines/auto-087557
	I0116 04:03:57.813215  512926 main.go:141] libmachine: (auto-087557) Setting executable bit set on /home/jenkins/minikube-integration/17965-468241/.minikube (perms=drwxr-xr-x)
	I0116 04:03:57.813234  512926 main.go:141] libmachine: (auto-087557) Setting executable bit set on /home/jenkins/minikube-integration/17965-468241 (perms=drwxrwxr-x)
	I0116 04:03:57.813244  512926 main.go:141] libmachine: (auto-087557) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0116 04:03:57.813254  512926 main.go:141] libmachine: (auto-087557) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0116 04:03:57.813263  512926 main.go:141] libmachine: (auto-087557) Creating domain...
	I0116 04:03:57.813280  512926 main.go:141] libmachine: (auto-087557) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17965-468241/.minikube/machines
	I0116 04:03:57.813293  512926 main.go:141] libmachine: (auto-087557) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17965-468241/.minikube
	I0116 04:03:57.813336  512926 main.go:141] libmachine: (auto-087557) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17965-468241
	I0116 04:03:57.813365  512926 main.go:141] libmachine: (auto-087557) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0116 04:03:57.813378  512926 main.go:141] libmachine: (auto-087557) DBG | Checking permissions on dir: /home/jenkins
	I0116 04:03:57.813391  512926 main.go:141] libmachine: (auto-087557) DBG | Checking permissions on dir: /home
	I0116 04:03:57.813427  512926 main.go:141] libmachine: (auto-087557) DBG | Skipping /home - not owner
	I0116 04:03:57.814885  512926 main.go:141] libmachine: (auto-087557) define libvirt domain using xml: 
	I0116 04:03:57.814911  512926 main.go:141] libmachine: (auto-087557) <domain type='kvm'>
	I0116 04:03:57.816120  512926 main.go:141] libmachine: (auto-087557)   <name>auto-087557</name>
	I0116 04:03:57.816154  512926 main.go:141] libmachine: (auto-087557)   <memory unit='MiB'>3072</memory>
	I0116 04:03:57.816184  512926 main.go:141] libmachine: (auto-087557)   <vcpu>2</vcpu>
	I0116 04:03:57.816207  512926 main.go:141] libmachine: (auto-087557)   <features>
	I0116 04:03:57.816217  512926 main.go:141] libmachine: (auto-087557)     <acpi/>
	I0116 04:03:57.816226  512926 main.go:141] libmachine: (auto-087557)     <apic/>
	I0116 04:03:57.816240  512926 main.go:141] libmachine: (auto-087557)     <pae/>
	I0116 04:03:57.816255  512926 main.go:141] libmachine: (auto-087557)     
	I0116 04:03:57.816268  512926 main.go:141] libmachine: (auto-087557)   </features>
	I0116 04:03:57.816282  512926 main.go:141] libmachine: (auto-087557)   <cpu mode='host-passthrough'>
	I0116 04:03:57.816294  512926 main.go:141] libmachine: (auto-087557)   
	I0116 04:03:57.816307  512926 main.go:141] libmachine: (auto-087557)   </cpu>
	I0116 04:03:57.816319  512926 main.go:141] libmachine: (auto-087557)   <os>
	I0116 04:03:57.816337  512926 main.go:141] libmachine: (auto-087557)     <type>hvm</type>
	I0116 04:03:57.816349  512926 main.go:141] libmachine: (auto-087557)     <boot dev='cdrom'/>
	I0116 04:03:57.816361  512926 main.go:141] libmachine: (auto-087557)     <boot dev='hd'/>
	I0116 04:03:57.816377  512926 main.go:141] libmachine: (auto-087557)     <bootmenu enable='no'/>
	I0116 04:03:57.816390  512926 main.go:141] libmachine: (auto-087557)   </os>
	I0116 04:03:57.816402  512926 main.go:141] libmachine: (auto-087557)   <devices>
	I0116 04:03:57.816415  512926 main.go:141] libmachine: (auto-087557)     <disk type='file' device='cdrom'>
	I0116 04:03:57.816446  512926 main.go:141] libmachine: (auto-087557)       <source file='/home/jenkins/minikube-integration/17965-468241/.minikube/machines/auto-087557/boot2docker.iso'/>
	I0116 04:03:57.816467  512926 main.go:141] libmachine: (auto-087557)       <target dev='hdc' bus='scsi'/>
	I0116 04:03:57.816481  512926 main.go:141] libmachine: (auto-087557)       <readonly/>
	I0116 04:03:57.816493  512926 main.go:141] libmachine: (auto-087557)     </disk>
	I0116 04:03:57.816508  512926 main.go:141] libmachine: (auto-087557)     <disk type='file' device='disk'>
	I0116 04:03:57.816523  512926 main.go:141] libmachine: (auto-087557)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0116 04:03:57.816544  512926 main.go:141] libmachine: (auto-087557)       <source file='/home/jenkins/minikube-integration/17965-468241/.minikube/machines/auto-087557/auto-087557.rawdisk'/>
	I0116 04:03:57.816563  512926 main.go:141] libmachine: (auto-087557)       <target dev='hda' bus='virtio'/>
	I0116 04:03:57.816572  512926 main.go:141] libmachine: (auto-087557)     </disk>
	I0116 04:03:57.816582  512926 main.go:141] libmachine: (auto-087557)     <interface type='network'>
	I0116 04:03:57.816593  512926 main.go:141] libmachine: (auto-087557)       <source network='mk-auto-087557'/>
	I0116 04:03:57.816606  512926 main.go:141] libmachine: (auto-087557)       <model type='virtio'/>
	I0116 04:03:57.816619  512926 main.go:141] libmachine: (auto-087557)     </interface>
	I0116 04:03:57.816631  512926 main.go:141] libmachine: (auto-087557)     <interface type='network'>
	I0116 04:03:57.816645  512926 main.go:141] libmachine: (auto-087557)       <source network='default'/>
	I0116 04:03:57.816664  512926 main.go:141] libmachine: (auto-087557)       <model type='virtio'/>
	I0116 04:03:57.816677  512926 main.go:141] libmachine: (auto-087557)     </interface>
	I0116 04:03:57.816686  512926 main.go:141] libmachine: (auto-087557)     <serial type='pty'>
	I0116 04:03:57.816701  512926 main.go:141] libmachine: (auto-087557)       <target port='0'/>
	I0116 04:03:57.816713  512926 main.go:141] libmachine: (auto-087557)     </serial>
	I0116 04:03:57.816723  512926 main.go:141] libmachine: (auto-087557)     <console type='pty'>
	I0116 04:03:57.816736  512926 main.go:141] libmachine: (auto-087557)       <target type='serial' port='0'/>
	I0116 04:03:57.816749  512926 main.go:141] libmachine: (auto-087557)     </console>
	I0116 04:03:57.816761  512926 main.go:141] libmachine: (auto-087557)     <rng model='virtio'>
	I0116 04:03:57.816774  512926 main.go:141] libmachine: (auto-087557)       <backend model='random'>/dev/random</backend>
	I0116 04:03:57.816786  512926 main.go:141] libmachine: (auto-087557)     </rng>
	I0116 04:03:57.816795  512926 main.go:141] libmachine: (auto-087557)     
	I0116 04:03:57.816802  512926 main.go:141] libmachine: (auto-087557)     
	I0116 04:03:57.816811  512926 main.go:141] libmachine: (auto-087557)   </devices>
	I0116 04:03:57.816822  512926 main.go:141] libmachine: (auto-087557) </domain>
	I0116 04:03:57.816835  512926 main.go:141] libmachine: (auto-087557) 
	I0116 04:03:57.821617  512926 main.go:141] libmachine: (auto-087557) DBG | domain auto-087557 has defined MAC address 52:54:00:33:aa:d9 in network default
	I0116 04:03:57.822427  512926 main.go:141] libmachine: (auto-087557) Ensuring networks are active...
	I0116 04:03:57.822474  512926 main.go:141] libmachine: (auto-087557) DBG | domain auto-087557 has defined MAC address 52:54:00:28:eb:8c in network mk-auto-087557
	I0116 04:03:57.823409  512926 main.go:141] libmachine: (auto-087557) Ensuring network default is active
	I0116 04:03:57.823840  512926 main.go:141] libmachine: (auto-087557) Ensuring network mk-auto-087557 is active
	I0116 04:03:57.824687  512926 main.go:141] libmachine: (auto-087557) Getting domain xml...
	I0116 04:03:57.825707  512926 main.go:141] libmachine: (auto-087557) Creating domain...
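	(For context on the "Getting domain xml... / Creating domain..." step above: the driver defines a libvirt domain from the XML it just built and then starts it. A minimal sketch of that flow, shelling out to the virsh CLI rather than using minikube's actual libvirt bindings; the function name and argument names are illustrative only.)

    package sketch

    import (
        "fmt"
        "os/exec"
    )

    // defineAndStartDomain sketches the define-then-start step logged above:
    // register a domain from its XML description, then boot it.
    func defineAndStartDomain(xmlPath, domainName string) error {
        if out, err := exec.Command("virsh", "define", xmlPath).CombinedOutput(); err != nil {
            return fmt.Errorf("virsh define: %v: %s", err, out)
        }
        if out, err := exec.Command("virsh", "start", domainName).CombinedOutput(); err != nil {
            return fmt.Errorf("virsh start: %v: %s", err, out)
        }
        return nil
    }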
	I0116 04:03:58.211074  512926 main.go:141] libmachine: (auto-087557) Waiting to get IP...
	I0116 04:03:58.211881  512926 main.go:141] libmachine: (auto-087557) DBG | domain auto-087557 has defined MAC address 52:54:00:28:eb:8c in network mk-auto-087557
	I0116 04:03:58.212368  512926 main.go:141] libmachine: (auto-087557) DBG | unable to find current IP address of domain auto-087557 in network mk-auto-087557
	I0116 04:03:58.212396  512926 main.go:141] libmachine: (auto-087557) DBG | I0116 04:03:58.212336  512986 retry.go:31] will retry after 257.548279ms: waiting for machine to come up
	I0116 04:03:58.471942  512926 main.go:141] libmachine: (auto-087557) DBG | domain auto-087557 has defined MAC address 52:54:00:28:eb:8c in network mk-auto-087557
	I0116 04:03:58.472581  512926 main.go:141] libmachine: (auto-087557) DBG | unable to find current IP address of domain auto-087557 in network mk-auto-087557
	I0116 04:03:58.472656  512926 main.go:141] libmachine: (auto-087557) DBG | I0116 04:03:58.472532  512986 retry.go:31] will retry after 260.585041ms: waiting for machine to come up
	I0116 04:03:58.735309  512926 main.go:141] libmachine: (auto-087557) DBG | domain auto-087557 has defined MAC address 52:54:00:28:eb:8c in network mk-auto-087557
	I0116 04:03:58.735907  512926 main.go:141] libmachine: (auto-087557) DBG | unable to find current IP address of domain auto-087557 in network mk-auto-087557
	I0116 04:03:58.735947  512926 main.go:141] libmachine: (auto-087557) DBG | I0116 04:03:58.735841  512986 retry.go:31] will retry after 323.991967ms: waiting for machine to come up
	I0116 04:03:59.061325  512926 main.go:141] libmachine: (auto-087557) DBG | domain auto-087557 has defined MAC address 52:54:00:28:eb:8c in network mk-auto-087557
	I0116 04:03:59.061883  512926 main.go:141] libmachine: (auto-087557) DBG | unable to find current IP address of domain auto-087557 in network mk-auto-087557
	I0116 04:03:59.061930  512926 main.go:141] libmachine: (auto-087557) DBG | I0116 04:03:59.061817  512986 retry.go:31] will retry after 367.184274ms: waiting for machine to come up
	I0116 04:03:59.430547  512926 main.go:141] libmachine: (auto-087557) DBG | domain auto-087557 has defined MAC address 52:54:00:28:eb:8c in network mk-auto-087557
	I0116 04:03:59.431103  512926 main.go:141] libmachine: (auto-087557) DBG | unable to find current IP address of domain auto-087557 in network mk-auto-087557
	I0116 04:03:59.431132  512926 main.go:141] libmachine: (auto-087557) DBG | I0116 04:03:59.431041  512986 retry.go:31] will retry after 711.1039ms: waiting for machine to come up
	I0116 04:04:00.143468  512926 main.go:141] libmachine: (auto-087557) DBG | domain auto-087557 has defined MAC address 52:54:00:28:eb:8c in network mk-auto-087557
	I0116 04:04:00.144001  512926 main.go:141] libmachine: (auto-087557) DBG | unable to find current IP address of domain auto-087557 in network mk-auto-087557
	I0116 04:04:00.144058  512926 main.go:141] libmachine: (auto-087557) DBG | I0116 04:04:00.143902  512986 retry.go:31] will retry after 747.895316ms: waiting for machine to come up
	I0116 04:04:00.691257  512568 crio.go:444] Took 1.698683 seconds to copy over tarball
	I0116 04:04:00.691335  512568 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0116 04:04:03.771566  512568 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.080189447s)
	I0116 04:04:03.771609  512568 crio.go:451] Took 3.080321 seconds to extract the tarball
	I0116 04:04:03.771656  512568 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0116 04:04:03.813475  512568 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 04:04:03.906516  512568 crio.go:496] all images are preloaded for cri-o runtime.
	I0116 04:04:03.906549  512568 cache_images.go:84] Images are preloaded, skipping loading
	I0116 04:04:03.906641  512568 ssh_runner.go:195] Run: crio config
	I0116 04:04:03.977374  512568 cni.go:84] Creating CNI manager for ""
	I0116 04:04:03.977406  512568 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 04:04:03.977436  512568 kubeadm.go:87] Using pod CIDR: 10.42.0.0/16
	I0116 04:04:03.977464  512568 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.61.174 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-889166 NodeName:newest-cni-889166 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.174"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureAr
gs:map[] NodeIP:192.168.61.174 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0116 04:04:03.977653  512568 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.174
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-889166"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.174
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.174"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0116 04:04:03.977763  512568 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --feature-gates=ServerSideApply=true --hostname-override=newest-cni-889166 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.174
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-889166 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0116 04:04:03.977835  512568 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0116 04:04:03.987345  512568 binaries.go:44] Found k8s binaries, skipping transfer
	I0116 04:04:03.987455  512568 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0116 04:04:03.996917  512568 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (419 bytes)
	I0116 04:04:04.015874  512568 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0116 04:04:04.035003  512568 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2233 bytes)
	I0116 04:04:00.893194  512926 main.go:141] libmachine: (auto-087557) DBG | domain auto-087557 has defined MAC address 52:54:00:28:eb:8c in network mk-auto-087557
	I0116 04:04:00.893740  512926 main.go:141] libmachine: (auto-087557) DBG | unable to find current IP address of domain auto-087557 in network mk-auto-087557
	I0116 04:04:00.893771  512926 main.go:141] libmachine: (auto-087557) DBG | I0116 04:04:00.893705  512986 retry.go:31] will retry after 1.006543828s: waiting for machine to come up
	I0116 04:04:01.902298  512926 main.go:141] libmachine: (auto-087557) DBG | domain auto-087557 has defined MAC address 52:54:00:28:eb:8c in network mk-auto-087557
	I0116 04:04:01.902808  512926 main.go:141] libmachine: (auto-087557) DBG | unable to find current IP address of domain auto-087557 in network mk-auto-087557
	I0116 04:04:01.902840  512926 main.go:141] libmachine: (auto-087557) DBG | I0116 04:04:01.902747  512986 retry.go:31] will retry after 1.204463195s: waiting for machine to come up
	I0116 04:04:03.108949  512926 main.go:141] libmachine: (auto-087557) DBG | domain auto-087557 has defined MAC address 52:54:00:28:eb:8c in network mk-auto-087557
	I0116 04:04:03.109529  512926 main.go:141] libmachine: (auto-087557) DBG | unable to find current IP address of domain auto-087557 in network mk-auto-087557
	I0116 04:04:03.109565  512926 main.go:141] libmachine: (auto-087557) DBG | I0116 04:04:03.109481  512986 retry.go:31] will retry after 1.201030076s: waiting for machine to come up
	I0116 04:04:04.312297  512926 main.go:141] libmachine: (auto-087557) DBG | domain auto-087557 has defined MAC address 52:54:00:28:eb:8c in network mk-auto-087557
	I0116 04:04:04.312856  512926 main.go:141] libmachine: (auto-087557) DBG | unable to find current IP address of domain auto-087557 in network mk-auto-087557
	I0116 04:04:04.312889  512926 main.go:141] libmachine: (auto-087557) DBG | I0116 04:04:04.312790  512986 retry.go:31] will retry after 1.476002931s: waiting for machine to come up
	I0116 04:04:04.054753  512568 ssh_runner.go:195] Run: grep 192.168.61.174	control-plane.minikube.internal$ /etc/hosts
	I0116 04:04:04.133281  512568 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.174	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 04:04:04.149294  512568 certs.go:56] Setting up /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/newest-cni-889166 for IP: 192.168.61.174
	I0116 04:04:04.149354  512568 certs.go:190] acquiring lock for shared ca certs: {Name:mkab5aeea3e0ed69f3592dc8f418e07c6075b130 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 04:04:04.149571  512568 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17965-468241/.minikube/ca.key
	I0116 04:04:04.149628  512568 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17965-468241/.minikube/proxy-client-ca.key
	I0116 04:04:04.149723  512568 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/newest-cni-889166/client.key
	I0116 04:04:04.149738  512568 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/newest-cni-889166/client.crt with IP's: []
	I0116 04:04:04.255735  512568 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/newest-cni-889166/client.crt ...
	I0116 04:04:04.255775  512568 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/newest-cni-889166/client.crt: {Name:mkd2810a0b7609ff3bed4a3aab687458066d9389 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 04:04:04.255986  512568 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/newest-cni-889166/client.key ...
	I0116 04:04:04.256006  512568 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/newest-cni-889166/client.key: {Name:mk2b85fd5c1553b2a1bb2a5b50467c8d1f8b2446 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 04:04:04.256157  512568 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/newest-cni-889166/apiserver.key.2382fec9
	I0116 04:04:04.256225  512568 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/newest-cni-889166/apiserver.crt.2382fec9 with IP's: [192.168.61.174 10.96.0.1 127.0.0.1 10.0.0.1]
	I0116 04:04:04.440635  512568 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/newest-cni-889166/apiserver.crt.2382fec9 ...
	I0116 04:04:04.440677  512568 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/newest-cni-889166/apiserver.crt.2382fec9: {Name:mk7cf8ee547b045d0621e8e0fa6f31c140fe0318 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 04:04:04.440844  512568 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/newest-cni-889166/apiserver.key.2382fec9 ...
	I0116 04:04:04.440862  512568 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/newest-cni-889166/apiserver.key.2382fec9: {Name:mk46701fa8fedb6533eb2c47165c6bbf7a6a5f4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 04:04:04.440929  512568 certs.go:337] copying /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/newest-cni-889166/apiserver.crt.2382fec9 -> /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/newest-cni-889166/apiserver.crt
	I0116 04:04:04.441041  512568 certs.go:341] copying /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/newest-cni-889166/apiserver.key.2382fec9 -> /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/newest-cni-889166/apiserver.key
	I0116 04:04:04.441096  512568 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/newest-cni-889166/proxy-client.key
	I0116 04:04:04.441113  512568 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/newest-cni-889166/proxy-client.crt with IP's: []
	I0116 04:04:04.653061  512568 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/newest-cni-889166/proxy-client.crt ...
	I0116 04:04:04.653098  512568 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/newest-cni-889166/proxy-client.crt: {Name:mkb81f81f370c234e5fa310c1542fc4c97917e96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 04:04:04.653274  512568 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/newest-cni-889166/proxy-client.key ...
	I0116 04:04:04.653288  512568 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/newest-cni-889166/proxy-client.key: {Name:mk0a563c13955a0716dd4d75fd6050481b859dc1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
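	(The certs.go / crypto.go lines above generate a fresh key pair and have an existing CA sign a serving certificate whose IP SANs include the node IP, the first service IP, and loopback. A minimal sketch of that shape using Go's crypto/x509; names and parameters are illustrative, not minikube's actual crypto.go.)

    package sketch

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "time"
    )

    // signServingCert generates a key and asks caCert/caKey to sign a serving
    // certificate carrying the given IP SANs (e.g. 192.168.61.174, 10.96.0.1,
    // 127.0.0.1, 10.0.0.1 as in the log above).
    func signServingCert(caCert *x509.Certificate, caKey *rsa.PrivateKey, ips []string) (certPEM, keyPEM []byte, err error) {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return nil, nil, err
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(time.Now().UnixNano()),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(3, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        for _, ip := range ips {
            tmpl.IPAddresses = append(tmpl.IPAddresses, net.ParseIP(ip))
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
        if err != nil {
            return nil, nil, err
        }
        certPEM = pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
        keyPEM = pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
        return certPEM, keyPEM, nil
    }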
	I0116 04:04:04.653461  512568 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/475478.pem (1338 bytes)
	W0116 04:04:04.653502  512568 certs.go:433] ignoring /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/475478_empty.pem, impossibly tiny 0 bytes
	I0116 04:04:04.653514  512568 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca-key.pem (1679 bytes)
	I0116 04:04:04.653537  512568 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem (1078 bytes)
	I0116 04:04:04.653559  512568 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/cert.pem (1123 bytes)
	I0116 04:04:04.653580  512568 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/key.pem (1679 bytes)
	I0116 04:04:04.653625  512568 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17965-468241/.minikube/files/etc/ssl/certs/4754782.pem (1708 bytes)
	I0116 04:04:04.654280  512568 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/newest-cni-889166/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0116 04:04:04.685456  512568 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/newest-cni-889166/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0116 04:04:04.711595  512568 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/newest-cni-889166/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0116 04:04:04.738317  512568 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/newest-cni-889166/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0116 04:04:04.767044  512568 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0116 04:04:04.795433  512568 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0116 04:04:04.827291  512568 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0116 04:04:04.856993  512568 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0116 04:04:04.885374  512568 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0116 04:04:04.915884  512568 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/certs/475478.pem --> /usr/share/ca-certificates/475478.pem (1338 bytes)
	I0116 04:04:04.946439  512568 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/files/etc/ssl/certs/4754782.pem --> /usr/share/ca-certificates/4754782.pem (1708 bytes)
	I0116 04:04:04.974394  512568 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0116 04:04:04.993012  512568 ssh_runner.go:195] Run: openssl version
	I0116 04:04:04.999619  512568 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0116 04:04:05.012022  512568 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0116 04:04:05.017711  512568 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 16 02:35 /usr/share/ca-certificates/minikubeCA.pem
	I0116 04:04:05.017782  512568 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0116 04:04:05.024338  512568 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0116 04:04:05.036158  512568 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/475478.pem && ln -fs /usr/share/ca-certificates/475478.pem /etc/ssl/certs/475478.pem"
	I0116 04:04:05.047684  512568 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/475478.pem
	I0116 04:04:05.054333  512568 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 16 02:44 /usr/share/ca-certificates/475478.pem
	I0116 04:04:05.054409  512568 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/475478.pem
	I0116 04:04:05.062132  512568 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/475478.pem /etc/ssl/certs/51391683.0"
	I0116 04:04:05.073376  512568 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4754782.pem && ln -fs /usr/share/ca-certificates/4754782.pem /etc/ssl/certs/4754782.pem"
	I0116 04:04:05.084734  512568 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4754782.pem
	I0116 04:04:05.089944  512568 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 16 02:44 /usr/share/ca-certificates/4754782.pem
	I0116 04:04:05.090029  512568 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4754782.pem
	I0116 04:04:05.096160  512568 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4754782.pem /etc/ssl/certs/3ec20f2e.0"
	I0116 04:04:05.107393  512568 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0116 04:04:05.112148  512568 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0116 04:04:05.112223  512568 kubeadm.go:404] StartCluster: {Name:newest-cni-889166 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.29.0-rc.2 ClusterName:newest-cni-889166 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.174 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jen
kins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 04:04:05.112357  512568 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0116 04:04:05.112416  512568 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0116 04:04:05.163005  512568 cri.go:89] found id: ""
	I0116 04:04:05.163076  512568 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0116 04:04:05.176025  512568 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0116 04:04:05.186422  512568 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0116 04:04:05.196312  512568 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0116 04:04:05.196389  512568 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0116 04:04:05.339773  512568 kubeadm.go:322] [init] Using Kubernetes version: v1.29.0-rc.2
	I0116 04:04:05.339926  512568 kubeadm.go:322] [preflight] Running pre-flight checks
	I0116 04:04:05.641175  512568 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0116 04:04:05.641396  512568 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0116 04:04:05.641543  512568 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0116 04:04:05.921569  512568 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0116 04:04:05.923775  512568 out.go:204]   - Generating certificates and keys ...
	I0116 04:04:05.923892  512568 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0116 04:04:05.923983  512568 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0116 04:04:06.148269  512568 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0116 04:04:06.312252  512568 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0116 04:04:06.409603  512568 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0116 04:04:06.549854  512568 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0116 04:04:06.636307  512568 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0116 04:04:06.636685  512568 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-889166] and IPs [192.168.61.174 127.0.0.1 ::1]
	I0116 04:04:06.747781  512568 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0116 04:04:06.748241  512568 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-889166] and IPs [192.168.61.174 127.0.0.1 ::1]
	I0116 04:04:06.859515  512568 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0116 04:04:07.000243  512568 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0116 04:04:07.159964  512568 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0116 04:04:07.160230  512568 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0116 04:04:07.353858  512568 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0116 04:04:07.565850  512568 kubeadm.go:322] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0116 04:04:07.984659  512568 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0116 04:04:08.065958  512568 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0116 04:04:08.412295  512568 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0116 04:04:08.413175  512568 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0116 04:04:08.416672  512568 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0116 04:04:08.418726  512568 out.go:204]   - Booting up control plane ...
	I0116 04:04:08.418856  512568 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0116 04:04:08.418948  512568 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0116 04:04:08.420061  512568 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0116 04:04:08.438477  512568 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0116 04:04:08.441643  512568 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0116 04:04:08.442101  512568 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0116 04:04:08.628427  512568 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	
	
	==> CRI-O <==
	-- Journal begins at Tue 2024-01-16 03:44:23 UTC, ends at Tue 2024-01-16 04:04:09 UTC. --
	Jan 16 04:04:09 embed-certs-615980 crio[707]: time="2024-01-16 04:04:09.772454016Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705377849772439596,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=5ec416b6-e2a0-48fb-8fa8-ded398759d8e name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 04:04:09 embed-certs-615980 crio[707]: time="2024-01-16 04:04:09.773217307Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=a59c4bc5-b406-4282-b414-7a9bdda065b7 name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 04:04:09 embed-certs-615980 crio[707]: time="2024-01-16 04:04:09.773296486Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=a59c4bc5-b406-4282-b414-7a9bdda065b7 name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 04:04:09 embed-certs-615980 crio[707]: time="2024-01-16 04:04:09.773460420Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4cdf5e29ffaf0a16aac76782f118de80dd83ad4f7f8c86a00d36f2ca5059e03a,PodSandboxId:27e41ebc0cb158e4c4164f57a67968a4552ce3de1a9cc31a92e74ca580f7667d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705376996954006586,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ce752ad-ce91-462e-ab2b-2af64064eb40,},Annotations:map[string]string{io.kubernetes.container.hash: b9c2ee9a,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e486c660ebfdacd38f1ded1e9f9da19c21269bfec4834fd31aaaf2b6fe8677ca,PodSandboxId:c494e5883ed7f5b4cb9a5a65eea751f339dec149901ac7c06bc62272a1ae106a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1705376996364176685,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8rkb5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 322fae38-3b29-4135-ba3f-c0ff8bda1e4a,},Annotations:map[string]string{io.kubernetes.container.hash: 66c29954,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c27b0553e4da0cf3906a0e8a9b50bf1f87dd0217e88a38218aebd11ea0de03fa,PodSandboxId:56da411a5eabd0d5daab31408ff9eee20050a8d5bf8f8b838bf543c9672ae3aa,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1705376995402642223,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-twbhh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9be49c16-f213-47da-83f4-90fc392eb49e,},Annotations:map[string]string{io.kubernetes.container.hash: 1f34606d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-
tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5ec49f0949e9c9671965ab44a04e11962756e944a0ae610596b3e8e8d214341,PodSandboxId:673e669acde408f6a431fa744e47eaf784b377c5b9395afa768ce18832f581c0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1705376972471622013,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-615980,io.kubernetes.pod.namespa
ce: kube-system,io.kubernetes.pod.uid: b185a766c563f6ce9043c8eda28f0d32,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:994d09ee15ce2df74bf9fd5ab55ee26cac0ce20a59cd56abc045ed57a6b95028,PodSandboxId:1ade641d0c13ab07984a9f499bd6af500af15c8a0f383e63a68e05e678e168f2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1705376972283696718,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-615980,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: 9b9f0a8323872d7b759609d60ab95333,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:633707032a417c16dd3e1b01a25542bfafa7810c357c32cd6ecbabd906f016f4,PodSandboxId:d4251834ab94299dacdfaa61339efb08d308b1af1532f243d33472a613672211,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1705376972331780188,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-615980,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: e0fd9681c69dd674b431c80253c522fa,},Annotations:map[string]string{io.kubernetes.container.hash: 8d1f459a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1fd9f0e356a8ef8c346cb8ec5851bbf72b404e7c440c94d2bef669a2056a16e,PodSandboxId:81a408a8901fc14eeaf95fd8236b20fe38b27dc4ba6d263626eee3a6d26a0149,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1705376971984200087,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-615980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63ef129f8bda6acc00eb7303140250b
9,},Annotations:map[string]string{io.kubernetes.container.hash: 34e96305,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=a59c4bc5-b406-4282-b414-7a9bdda065b7 name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 04:04:09 embed-certs-615980 crio[707]: time="2024-01-16 04:04:09.819820151Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=00d198fd-6884-4bf1-9328-8411d66fef93 name=/runtime.v1.RuntimeService/Version
	Jan 16 04:04:09 embed-certs-615980 crio[707]: time="2024-01-16 04:04:09.819895747Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=00d198fd-6884-4bf1-9328-8411d66fef93 name=/runtime.v1.RuntimeService/Version
	Jan 16 04:04:09 embed-certs-615980 crio[707]: time="2024-01-16 04:04:09.821332648Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=ceffa94f-c360-4d78-944f-83b6dc67c1e2 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 04:04:09 embed-certs-615980 crio[707]: time="2024-01-16 04:04:09.821836317Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705377849821817875,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=ceffa94f-c360-4d78-944f-83b6dc67c1e2 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 04:04:09 embed-certs-615980 crio[707]: time="2024-01-16 04:04:09.822613972Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=8d4ac6f6-2917-4b6e-9f45-e009a8952ca5 name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 04:04:09 embed-certs-615980 crio[707]: time="2024-01-16 04:04:09.822663365Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=8d4ac6f6-2917-4b6e-9f45-e009a8952ca5 name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 04:04:09 embed-certs-615980 crio[707]: time="2024-01-16 04:04:09.822935032Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4cdf5e29ffaf0a16aac76782f118de80dd83ad4f7f8c86a00d36f2ca5059e03a,PodSandboxId:27e41ebc0cb158e4c4164f57a67968a4552ce3de1a9cc31a92e74ca580f7667d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705376996954006586,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ce752ad-ce91-462e-ab2b-2af64064eb40,},Annotations:map[string]string{io.kubernetes.container.hash: b9c2ee9a,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e486c660ebfdacd38f1ded1e9f9da19c21269bfec4834fd31aaaf2b6fe8677ca,PodSandboxId:c494e5883ed7f5b4cb9a5a65eea751f339dec149901ac7c06bc62272a1ae106a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1705376996364176685,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8rkb5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 322fae38-3b29-4135-ba3f-c0ff8bda1e4a,},Annotations:map[string]string{io.kubernetes.container.hash: 66c29954,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c27b0553e4da0cf3906a0e8a9b50bf1f87dd0217e88a38218aebd11ea0de03fa,PodSandboxId:56da411a5eabd0d5daab31408ff9eee20050a8d5bf8f8b838bf543c9672ae3aa,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1705376995402642223,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-twbhh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9be49c16-f213-47da-83f4-90fc392eb49e,},Annotations:map[string]string{io.kubernetes.container.hash: 1f34606d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-
tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5ec49f0949e9c9671965ab44a04e11962756e944a0ae610596b3e8e8d214341,PodSandboxId:673e669acde408f6a431fa744e47eaf784b377c5b9395afa768ce18832f581c0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1705376972471622013,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-615980,io.kubernetes.pod.namespa
ce: kube-system,io.kubernetes.pod.uid: b185a766c563f6ce9043c8eda28f0d32,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:994d09ee15ce2df74bf9fd5ab55ee26cac0ce20a59cd56abc045ed57a6b95028,PodSandboxId:1ade641d0c13ab07984a9f499bd6af500af15c8a0f383e63a68e05e678e168f2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1705376972283696718,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-615980,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: 9b9f0a8323872d7b759609d60ab95333,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:633707032a417c16dd3e1b01a25542bfafa7810c357c32cd6ecbabd906f016f4,PodSandboxId:d4251834ab94299dacdfaa61339efb08d308b1af1532f243d33472a613672211,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1705376972331780188,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-615980,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: e0fd9681c69dd674b431c80253c522fa,},Annotations:map[string]string{io.kubernetes.container.hash: 8d1f459a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1fd9f0e356a8ef8c346cb8ec5851bbf72b404e7c440c94d2bef669a2056a16e,PodSandboxId:81a408a8901fc14eeaf95fd8236b20fe38b27dc4ba6d263626eee3a6d26a0149,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1705376971984200087,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-615980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63ef129f8bda6acc00eb7303140250b
9,},Annotations:map[string]string{io.kubernetes.container.hash: 34e96305,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=8d4ac6f6-2917-4b6e-9f45-e009a8952ca5 name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 04:04:09 embed-certs-615980 crio[707]: time="2024-01-16 04:04:09.877649131Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=ff49af17-4a34-412e-85c3-3cc525030daf name=/runtime.v1.RuntimeService/Version
	Jan 16 04:04:09 embed-certs-615980 crio[707]: time="2024-01-16 04:04:09.877716450Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=ff49af17-4a34-412e-85c3-3cc525030daf name=/runtime.v1.RuntimeService/Version
	Jan 16 04:04:09 embed-certs-615980 crio[707]: time="2024-01-16 04:04:09.880074319Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=dd8a4f9a-e9e7-4878-9a72-1c1064d57390 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 04:04:09 embed-certs-615980 crio[707]: time="2024-01-16 04:04:09.880751474Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705377849880724506,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=dd8a4f9a-e9e7-4878-9a72-1c1064d57390 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 04:04:09 embed-certs-615980 crio[707]: time="2024-01-16 04:04:09.881900842Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=92645d86-77c0-49de-b0b1-e537cf9a26f2 name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 04:04:09 embed-certs-615980 crio[707]: time="2024-01-16 04:04:09.881986000Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=92645d86-77c0-49de-b0b1-e537cf9a26f2 name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 04:04:09 embed-certs-615980 crio[707]: time="2024-01-16 04:04:09.882262030Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4cdf5e29ffaf0a16aac76782f118de80dd83ad4f7f8c86a00d36f2ca5059e03a,PodSandboxId:27e41ebc0cb158e4c4164f57a67968a4552ce3de1a9cc31a92e74ca580f7667d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705376996954006586,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ce752ad-ce91-462e-ab2b-2af64064eb40,},Annotations:map[string]string{io.kubernetes.container.hash: b9c2ee9a,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e486c660ebfdacd38f1ded1e9f9da19c21269bfec4834fd31aaaf2b6fe8677ca,PodSandboxId:c494e5883ed7f5b4cb9a5a65eea751f339dec149901ac7c06bc62272a1ae106a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1705376996364176685,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8rkb5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 322fae38-3b29-4135-ba3f-c0ff8bda1e4a,},Annotations:map[string]string{io.kubernetes.container.hash: 66c29954,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c27b0553e4da0cf3906a0e8a9b50bf1f87dd0217e88a38218aebd11ea0de03fa,PodSandboxId:56da411a5eabd0d5daab31408ff9eee20050a8d5bf8f8b838bf543c9672ae3aa,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1705376995402642223,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-twbhh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9be49c16-f213-47da-83f4-90fc392eb49e,},Annotations:map[string]string{io.kubernetes.container.hash: 1f34606d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-
tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5ec49f0949e9c9671965ab44a04e11962756e944a0ae610596b3e8e8d214341,PodSandboxId:673e669acde408f6a431fa744e47eaf784b377c5b9395afa768ce18832f581c0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1705376972471622013,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-615980,io.kubernetes.pod.namespa
ce: kube-system,io.kubernetes.pod.uid: b185a766c563f6ce9043c8eda28f0d32,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:994d09ee15ce2df74bf9fd5ab55ee26cac0ce20a59cd56abc045ed57a6b95028,PodSandboxId:1ade641d0c13ab07984a9f499bd6af500af15c8a0f383e63a68e05e678e168f2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1705376972283696718,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-615980,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: 9b9f0a8323872d7b759609d60ab95333,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:633707032a417c16dd3e1b01a25542bfafa7810c357c32cd6ecbabd906f016f4,PodSandboxId:d4251834ab94299dacdfaa61339efb08d308b1af1532f243d33472a613672211,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1705376972331780188,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-615980,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: e0fd9681c69dd674b431c80253c522fa,},Annotations:map[string]string{io.kubernetes.container.hash: 8d1f459a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1fd9f0e356a8ef8c346cb8ec5851bbf72b404e7c440c94d2bef669a2056a16e,PodSandboxId:81a408a8901fc14eeaf95fd8236b20fe38b27dc4ba6d263626eee3a6d26a0149,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1705376971984200087,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-615980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63ef129f8bda6acc00eb7303140250b
9,},Annotations:map[string]string{io.kubernetes.container.hash: 34e96305,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=92645d86-77c0-49de-b0b1-e537cf9a26f2 name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 04:04:09 embed-certs-615980 crio[707]: time="2024-01-16 04:04:09.935883240Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=533f8d43-ed71-4883-b51c-3bf772d96eb3 name=/runtime.v1.RuntimeService/Version
	Jan 16 04:04:09 embed-certs-615980 crio[707]: time="2024-01-16 04:04:09.936005548Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=533f8d43-ed71-4883-b51c-3bf772d96eb3 name=/runtime.v1.RuntimeService/Version
	Jan 16 04:04:09 embed-certs-615980 crio[707]: time="2024-01-16 04:04:09.937971156Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=c02cdded-b741-4917-99ca-c77286999f8b name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 04:04:09 embed-certs-615980 crio[707]: time="2024-01-16 04:04:09.938517785Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705377849938500240,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=c02cdded-b741-4917-99ca-c77286999f8b name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 04:04:09 embed-certs-615980 crio[707]: time="2024-01-16 04:04:09.939745065Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=1ce60953-fdea-448e-9d70-5a6e58cbcebc name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 04:04:09 embed-certs-615980 crio[707]: time="2024-01-16 04:04:09.939824186Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=1ce60953-fdea-448e-9d70-5a6e58cbcebc name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 04:04:09 embed-certs-615980 crio[707]: time="2024-01-16 04:04:09.940021576Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4cdf5e29ffaf0a16aac76782f118de80dd83ad4f7f8c86a00d36f2ca5059e03a,PodSandboxId:27e41ebc0cb158e4c4164f57a67968a4552ce3de1a9cc31a92e74ca580f7667d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705376996954006586,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ce752ad-ce91-462e-ab2b-2af64064eb40,},Annotations:map[string]string{io.kubernetes.container.hash: b9c2ee9a,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e486c660ebfdacd38f1ded1e9f9da19c21269bfec4834fd31aaaf2b6fe8677ca,PodSandboxId:c494e5883ed7f5b4cb9a5a65eea751f339dec149901ac7c06bc62272a1ae106a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1705376996364176685,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8rkb5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 322fae38-3b29-4135-ba3f-c0ff8bda1e4a,},Annotations:map[string]string{io.kubernetes.container.hash: 66c29954,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c27b0553e4da0cf3906a0e8a9b50bf1f87dd0217e88a38218aebd11ea0de03fa,PodSandboxId:56da411a5eabd0d5daab31408ff9eee20050a8d5bf8f8b838bf543c9672ae3aa,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1705376995402642223,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-twbhh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9be49c16-f213-47da-83f4-90fc392eb49e,},Annotations:map[string]string{io.kubernetes.container.hash: 1f34606d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-
tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5ec49f0949e9c9671965ab44a04e11962756e944a0ae610596b3e8e8d214341,PodSandboxId:673e669acde408f6a431fa744e47eaf784b377c5b9395afa768ce18832f581c0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1705376972471622013,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-615980,io.kubernetes.pod.namespa
ce: kube-system,io.kubernetes.pod.uid: b185a766c563f6ce9043c8eda28f0d32,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:994d09ee15ce2df74bf9fd5ab55ee26cac0ce20a59cd56abc045ed57a6b95028,PodSandboxId:1ade641d0c13ab07984a9f499bd6af500af15c8a0f383e63a68e05e678e168f2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1705376972283696718,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-615980,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: 9b9f0a8323872d7b759609d60ab95333,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:633707032a417c16dd3e1b01a25542bfafa7810c357c32cd6ecbabd906f016f4,PodSandboxId:d4251834ab94299dacdfaa61339efb08d308b1af1532f243d33472a613672211,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1705376972331780188,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-615980,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: e0fd9681c69dd674b431c80253c522fa,},Annotations:map[string]string{io.kubernetes.container.hash: 8d1f459a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1fd9f0e356a8ef8c346cb8ec5851bbf72b404e7c440c94d2bef669a2056a16e,PodSandboxId:81a408a8901fc14eeaf95fd8236b20fe38b27dc4ba6d263626eee3a6d26a0149,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1705376971984200087,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-615980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63ef129f8bda6acc00eb7303140250b
9,},Annotations:map[string]string{io.kubernetes.container.hash: 34e96305,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=1ce60953-fdea-448e-9d70-5a6e58cbcebc name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	4cdf5e29ffaf0       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   14 minutes ago      Running             storage-provisioner       0                   27e41ebc0cb15       storage-provisioner
	e486c660ebfda       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e   14 minutes ago      Running             kube-proxy                0                   c494e5883ed7f       kube-proxy-8rkb5
	c27b0553e4da0       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   14 minutes ago      Running             coredns                   0                   56da411a5eabd       coredns-5dd5756b68-twbhh
	a5ec49f0949e9       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591   14 minutes ago      Running             kube-controller-manager   2                   673e669acde40       kube-controller-manager-embed-certs-615980
	633707032a417       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257   14 minutes ago      Running             kube-apiserver            2                   d4251834ab942       kube-apiserver-embed-certs-615980
	994d09ee15ce2       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1   14 minutes ago      Running             kube-scheduler            2                   1ade641d0c13a       kube-scheduler-embed-certs-615980
	d1fd9f0e356a8       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   14 minutes ago      Running             etcd                      2                   81a408a8901fc       etcd-embed-certs-615980
	
	
	==> coredns [c27b0553e4da0cf3906a0e8a9b50bf1f87dd0217e88a38218aebd11ea0de03fa] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = efc9280483fcc4efbcb768f0bb0eae8d655d9a6698f75ec2e812845ddebe357ede76757b8e27e6a6ace121d67e74d46126ca7de2feaa2fdd891a0e8e676dd4cb
	[INFO] Reloading complete
	[INFO] 127.0.0.1:55277 - 62806 "HINFO IN 4324491712175631855.61519612609968798. udp 55 false 512" NXDOMAIN qr,rd,ra 55 0.010038653s
	
	
	==> describe nodes <==
	Name:               embed-certs-615980
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-615980
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6e8fa5f64d0e7272be43ff25ed3826261f0a2578
	                    minikube.k8s.io/name=embed-certs-615980
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_16T03_49_40_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Jan 2024 03:49:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-615980
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Jan 2024 04:04:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Jan 2024 04:00:13 +0000   Tue, 16 Jan 2024 03:49:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Jan 2024 04:00:13 +0000   Tue, 16 Jan 2024 03:49:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Jan 2024 04:00:13 +0000   Tue, 16 Jan 2024 03:49:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Jan 2024 04:00:13 +0000   Tue, 16 Jan 2024 03:49:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.159
	  Hostname:    embed-certs-615980
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 37086489959547d8b681242750c5c6e3
	  System UUID:                37086489-9595-47d8-b681-242750c5c6e3
	  Boot ID:                    05dfe042-8a20-4cf5-b8c2-95e2790cd742
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-twbhh                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 etcd-embed-certs-615980                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kube-apiserver-embed-certs-615980             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-embed-certs-615980    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-8rkb5                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-embed-certs-615980             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 metrics-server-57f55c9bc5-fc7tx               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         14m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 14m                kube-proxy       
	  Normal  NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  14m (x8 over 14m)  kubelet          Node embed-certs-615980 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m (x8 over 14m)  kubelet          Node embed-certs-615980 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m (x7 over 14m)  kubelet          Node embed-certs-615980 status is now: NodeHasSufficientPID
	  Normal  Starting                 14m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  14m                kubelet          Node embed-certs-615980 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m                kubelet          Node embed-certs-615980 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m                kubelet          Node embed-certs-615980 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           14m                node-controller  Node embed-certs-615980 event: Registered Node embed-certs-615980 in Controller
	
	
	==> dmesg <==
	[Jan16 03:44] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.092142] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.208469] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.631130] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.174184] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.670348] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.305152] systemd-fstab-generator[633]: Ignoring "noauto" for root device
	[  +0.125134] systemd-fstab-generator[644]: Ignoring "noauto" for root device
	[  +0.183589] systemd-fstab-generator[657]: Ignoring "noauto" for root device
	[  +0.128491] systemd-fstab-generator[668]: Ignoring "noauto" for root device
	[  +0.251474] systemd-fstab-generator[692]: Ignoring "noauto" for root device
	[ +17.985200] systemd-fstab-generator[907]: Ignoring "noauto" for root device
	[Jan16 03:45] kauditd_printk_skb: 29 callbacks suppressed
	[Jan16 03:49] systemd-fstab-generator[3517]: Ignoring "noauto" for root device
	[  +9.828605] systemd-fstab-generator[3847]: Ignoring "noauto" for root device
	[ +12.901501] kauditd_printk_skb: 2 callbacks suppressed
	[Jan16 03:50] kauditd_printk_skb: 9 callbacks suppressed
	
	
	==> etcd [d1fd9f0e356a8ef8c346cb8ec5851bbf72b404e7c440c94d2bef669a2056a16e] <==
	{"level":"info","ts":"2024-01-16T03:49:34.479364Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d718283c8ba9c288 became candidate at term 2"}
	{"level":"info","ts":"2024-01-16T03:49:34.47937Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d718283c8ba9c288 received MsgVoteResp from d718283c8ba9c288 at term 2"}
	{"level":"info","ts":"2024-01-16T03:49:34.479378Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d718283c8ba9c288 became leader at term 2"}
	{"level":"info","ts":"2024-01-16T03:49:34.479385Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: d718283c8ba9c288 elected leader d718283c8ba9c288 at term 2"}
	{"level":"info","ts":"2024-01-16T03:49:34.48425Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-16T03:49:34.488464Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"d718283c8ba9c288","local-member-attributes":"{Name:embed-certs-615980 ClientURLs:[https://192.168.72.159:2379]}","request-path":"/0/members/d718283c8ba9c288/attributes","cluster-id":"6f0e35e647fe17a2","publish-timeout":"7s"}
	{"level":"info","ts":"2024-01-16T03:49:34.488676Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-16T03:49:34.490097Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.159:2379"}
	{"level":"info","ts":"2024-01-16T03:49:34.490219Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f0e35e647fe17a2","local-member-id":"d718283c8ba9c288","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-16T03:49:34.490353Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-16T03:49:34.490416Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-16T03:49:34.490713Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-16T03:49:34.491728Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-01-16T03:49:34.512274Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-01-16T03:49:34.512368Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-01-16T03:59:34.582084Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":719}
	{"level":"info","ts":"2024-01-16T03:59:34.584479Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":719,"took":"1.892942ms","hash":4271228006}
	{"level":"info","ts":"2024-01-16T03:59:34.584632Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4271228006,"revision":719,"compact-revision":-1}
	{"level":"warn","ts":"2024-01-16T04:04:04.037424Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"192.598394ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/prioritylevelconfigurations/\" range_end:\"/registry/prioritylevelconfigurations0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-01-16T04:04:04.037994Z","caller":"traceutil/trace.go:171","msg":"trace[499230290] range","detail":"{range_begin:/registry/prioritylevelconfigurations/; range_end:/registry/prioritylevelconfigurations0; response_count:0; response_revision:1181; }","duration":"193.269602ms","start":"2024-01-16T04:04:03.844678Z","end":"2024-01-16T04:04:04.037948Z","steps":["trace[499230290] 'count revisions from in-memory index tree'  (duration: 192.490719ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-16T04:04:05.172039Z","caller":"traceutil/trace.go:171","msg":"trace[172245453] linearizableReadLoop","detail":"{readStateIndex:1369; appliedIndex:1368; }","duration":"156.152311ms","start":"2024-01-16T04:04:05.015844Z","end":"2024-01-16T04:04:05.171996Z","steps":["trace[172245453] 'read index received'  (duration: 155.865069ms)","trace[172245453] 'applied index is now lower than readState.Index'  (duration: 286.359µs)"],"step_count":2}
	{"level":"warn","ts":"2024-01-16T04:04:05.172359Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"156.490963ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-01-16T04:04:05.172487Z","caller":"traceutil/trace.go:171","msg":"trace[2095182588] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:1182; }","duration":"156.644809ms","start":"2024-01-16T04:04:05.015816Z","end":"2024-01-16T04:04:05.172461Z","steps":["trace[2095182588] 'agreement among raft nodes before linearized reading'  (duration: 156.462747ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-16T04:04:05.172359Z","caller":"traceutil/trace.go:171","msg":"trace[1017499738] transaction","detail":"{read_only:false; response_revision:1182; number_of_response:1; }","duration":"316.796855ms","start":"2024-01-16T04:04:04.855544Z","end":"2024-01-16T04:04:05.172341Z","steps":["trace[1017499738] 'process raft request'  (duration: 316.246174ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-16T04:04:05.174789Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-16T04:04:04.855526Z","time spent":"317.277301ms","remote":"127.0.0.1:52484","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1103,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1180 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1030 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	
	
	==> kernel <==
	 04:04:10 up 19 min,  0 users,  load average: 0.11, 0.13, 0.15
	Linux embed-certs-615980 5.10.57 #1 SMP Thu Dec 28 22:04:21 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [633707032a417c16dd3e1b01a25542bfafa7810c357c32cd6ecbabd906f016f4] <==
	W0116 03:59:37.692235       1 handler_proxy.go:93] no RequestInfo found in the context
	E0116 03:59:37.692299       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0116 03:59:37.692308       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0116 03:59:37.692350       1 handler_proxy.go:93] no RequestInfo found in the context
	E0116 03:59:37.692407       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0116 03:59:37.693694       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0116 04:00:36.561735       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0116 04:00:37.692671       1 handler_proxy.go:93] no RequestInfo found in the context
	E0116 04:00:37.692867       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0116 04:00:37.692913       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0116 04:00:37.693942       1 handler_proxy.go:93] no RequestInfo found in the context
	E0116 04:00:37.694054       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0116 04:00:37.694063       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0116 04:01:36.561768       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0116 04:02:36.561604       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0116 04:02:37.693753       1 handler_proxy.go:93] no RequestInfo found in the context
	E0116 04:02:37.693941       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0116 04:02:37.693995       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0116 04:02:37.695310       1 handler_proxy.go:93] no RequestInfo found in the context
	E0116 04:02:37.695496       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0116 04:02:37.695526       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0116 04:03:36.561791       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	
	
	==> kube-controller-manager [a5ec49f0949e9c9671965ab44a04e11962756e944a0ae610596b3e8e8d214341] <==
	I0116 03:58:22.753422       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0116 03:58:52.247772       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0116 03:58:52.769160       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0116 03:59:22.256642       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0116 03:59:22.781494       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0116 03:59:52.263743       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0116 03:59:52.795417       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0116 04:00:22.271176       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0116 04:00:22.806367       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0116 04:00:52.278666       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0116 04:00:52.816437       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0116 04:00:54.474952       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="431.606µs"
	I0116 04:01:05.474867       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="281.466µs"
	E0116 04:01:22.285534       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0116 04:01:22.826705       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0116 04:01:52.292994       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0116 04:01:52.840650       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0116 04:02:22.299105       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0116 04:02:22.850433       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0116 04:02:52.306439       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0116 04:02:52.861752       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0116 04:03:22.313485       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0116 04:03:22.872877       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0116 04:03:52.320849       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0116 04:03:52.882547       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [e486c660ebfdacd38f1ded1e9f9da19c21269bfec4834fd31aaaf2b6fe8677ca] <==
	I0116 03:49:57.307049       1 server_others.go:69] "Using iptables proxy"
	I0116 03:49:57.330887       1 node.go:141] Successfully retrieved node IP: 192.168.72.159
	I0116 03:49:57.389095       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0116 03:49:57.389234       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0116 03:49:57.392488       1 server_others.go:152] "Using iptables Proxier"
	I0116 03:49:57.393098       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0116 03:49:57.393446       1 server.go:846] "Version info" version="v1.28.4"
	I0116 03:49:57.393734       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0116 03:49:57.395849       1 config.go:188] "Starting service config controller"
	I0116 03:49:57.396793       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0116 03:49:57.397065       1 config.go:97] "Starting endpoint slice config controller"
	I0116 03:49:57.397173       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0116 03:49:57.398738       1 config.go:315] "Starting node config controller"
	I0116 03:49:57.398780       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0116 03:49:57.497648       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0116 03:49:57.497714       1 shared_informer.go:318] Caches are synced for service config
	I0116 03:49:57.498828       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [994d09ee15ce2df74bf9fd5ab55ee26cac0ce20a59cd56abc045ed57a6b95028] <==
	W0116 03:49:37.657346       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0116 03:49:37.657444       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0116 03:49:37.736677       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0116 03:49:37.736843       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0116 03:49:37.738590       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0116 03:49:37.738647       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0116 03:49:37.758548       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0116 03:49:37.758614       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0116 03:49:37.857621       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0116 03:49:37.857758       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0116 03:49:37.895101       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0116 03:49:37.895307       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0116 03:49:37.943032       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0116 03:49:37.943205       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0116 03:49:37.972383       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0116 03:49:37.972482       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0116 03:49:37.999715       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0116 03:49:37.999826       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0116 03:49:38.068923       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0116 03:49:38.069017       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0116 03:49:38.126926       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0116 03:49:38.127040       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0116 03:49:38.140637       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0116 03:49:38.140737       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0116 03:49:39.598786       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Tue 2024-01-16 03:44:23 UTC, ends at Tue 2024-01-16 04:04:10 UTC. --
	Jan 16 04:01:30 embed-certs-615980 kubelet[3854]: E0116 04:01:30.450312    3854 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-fc7tx" podUID="14a38c13-7a9e-4548-9654-c568ede29e0f"
	Jan 16 04:01:40 embed-certs-615980 kubelet[3854]: E0116 04:01:40.525892    3854 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 16 04:01:40 embed-certs-615980 kubelet[3854]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 16 04:01:40 embed-certs-615980 kubelet[3854]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 16 04:01:40 embed-certs-615980 kubelet[3854]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 16 04:01:44 embed-certs-615980 kubelet[3854]: E0116 04:01:44.450830    3854 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-fc7tx" podUID="14a38c13-7a9e-4548-9654-c568ede29e0f"
	Jan 16 04:01:56 embed-certs-615980 kubelet[3854]: E0116 04:01:56.451393    3854 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-fc7tx" podUID="14a38c13-7a9e-4548-9654-c568ede29e0f"
	Jan 16 04:02:09 embed-certs-615980 kubelet[3854]: E0116 04:02:09.450343    3854 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-fc7tx" podUID="14a38c13-7a9e-4548-9654-c568ede29e0f"
	Jan 16 04:02:24 embed-certs-615980 kubelet[3854]: E0116 04:02:24.452752    3854 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-fc7tx" podUID="14a38c13-7a9e-4548-9654-c568ede29e0f"
	Jan 16 04:02:38 embed-certs-615980 kubelet[3854]: E0116 04:02:38.450928    3854 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-fc7tx" podUID="14a38c13-7a9e-4548-9654-c568ede29e0f"
	Jan 16 04:02:40 embed-certs-615980 kubelet[3854]: E0116 04:02:40.526030    3854 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 16 04:02:40 embed-certs-615980 kubelet[3854]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 16 04:02:40 embed-certs-615980 kubelet[3854]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 16 04:02:40 embed-certs-615980 kubelet[3854]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 16 04:02:51 embed-certs-615980 kubelet[3854]: E0116 04:02:51.453336    3854 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-fc7tx" podUID="14a38c13-7a9e-4548-9654-c568ede29e0f"
	Jan 16 04:03:02 embed-certs-615980 kubelet[3854]: E0116 04:03:02.450501    3854 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-fc7tx" podUID="14a38c13-7a9e-4548-9654-c568ede29e0f"
	Jan 16 04:03:16 embed-certs-615980 kubelet[3854]: E0116 04:03:16.450660    3854 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-fc7tx" podUID="14a38c13-7a9e-4548-9654-c568ede29e0f"
	Jan 16 04:03:28 embed-certs-615980 kubelet[3854]: E0116 04:03:28.451646    3854 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-fc7tx" podUID="14a38c13-7a9e-4548-9654-c568ede29e0f"
	Jan 16 04:03:40 embed-certs-615980 kubelet[3854]: E0116 04:03:40.526478    3854 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 16 04:03:40 embed-certs-615980 kubelet[3854]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 16 04:03:40 embed-certs-615980 kubelet[3854]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 16 04:03:40 embed-certs-615980 kubelet[3854]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 16 04:03:42 embed-certs-615980 kubelet[3854]: E0116 04:03:42.451295    3854 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-fc7tx" podUID="14a38c13-7a9e-4548-9654-c568ede29e0f"
	Jan 16 04:03:53 embed-certs-615980 kubelet[3854]: E0116 04:03:53.450477    3854 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-fc7tx" podUID="14a38c13-7a9e-4548-9654-c568ede29e0f"
	Jan 16 04:04:06 embed-certs-615980 kubelet[3854]: E0116 04:04:06.450897    3854 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-fc7tx" podUID="14a38c13-7a9e-4548-9654-c568ede29e0f"
	
	
	==> storage-provisioner [4cdf5e29ffaf0a16aac76782f118de80dd83ad4f7f8c86a00d36f2ca5059e03a] <==
	I0116 03:49:57.183870       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0116 03:49:57.212307       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0116 03:49:57.212411       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0116 03:49:57.228568       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0116 03:49:57.230108       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-615980_67d79c8d-fe8f-4708-af27-dd948672dc91!
	I0116 03:49:57.238842       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"96a2d4c1-0420-4551-81c2-61a9af9a83b8", APIVersion:"v1", ResourceVersion:"453", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-615980_67d79c8d-fe8f-4708-af27-dd948672dc91 became leader
	I0116 03:49:57.332323       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-615980_67d79c8d-fe8f-4708-af27-dd948672dc91!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-615980 -n embed-certs-615980
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-615980 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-fc7tx
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-615980 describe pod metrics-server-57f55c9bc5-fc7tx
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-615980 describe pod metrics-server-57f55c9bc5-fc7tx: exit status 1 (103.974786ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-fc7tx" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-615980 describe pod metrics-server-57f55c9bc5-fc7tx: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (308.45s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (205.35s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0116 04:00:41.231591  475478 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/ingress-addon-legacy-873808/client.crt: no such file or directory
E0116 04:01:49.160566  475478 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/functional-193417/client.crt: no such file or directory
E0116 04:02:19.246527  475478 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/addons-690916/client.crt: no such file or directory
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-696770 -n old-k8s-version-696770
start_stop_delete_test.go:287: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-01-16 04:03:29.713313324 +0000 UTC m=+5353.651535989
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-696770 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-696770 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.144µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-696770 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
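The describe call above was issued only after the 9m0s wait had already consumed the test context (note the 2.144µs context-deadline error), so no deployment info was captured. A manual spot-check of the same assertions, sketched here with the context, namespace, selector and deployment name taken from the log:

	# Sketch: re-check the dashboard pods and the scraper image outside the expired test context.
	kubectl --context old-k8s-version-696770 -n kubernetes-dashboard \
	  get pods -l k8s-app=kubernetes-dashboard
	kubectl --context old-k8s-version-696770 -n kubernetes-dashboard \
	  get deploy dashboard-metrics-scraper \
	  -o jsonpath='{.spec.template.spec.containers[*].image}'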
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-696770 -n old-k8s-version-696770
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-696770 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-696770 logs -n 25: (1.79127305s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p no-preload-666547                                   | no-preload-666547            | jenkins | v1.32.0 | 16 Jan 24 03:33 UTC | 16 Jan 24 03:35 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| ssh     | cert-options-977008 ssh                                | cert-options-977008          | jenkins | v1.32.0 | 16 Jan 24 03:34 UTC | 16 Jan 24 03:34 UTC |
	|         | openssl x509 -text -noout -in                          |                              |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                  |                              |         |         |                     |                     |
	| ssh     | -p cert-options-977008 -- sudo                         | cert-options-977008          | jenkins | v1.32.0 | 16 Jan 24 03:34 UTC | 16 Jan 24 03:34 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                              |         |         |                     |                     |
	| delete  | -p cert-options-977008                                 | cert-options-977008          | jenkins | v1.32.0 | 16 Jan 24 03:34 UTC | 16 Jan 24 03:34 UTC |
	| start   | -p embed-certs-615980                                  | embed-certs-615980           | jenkins | v1.32.0 | 16 Jan 24 03:34 UTC | 16 Jan 24 03:35 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-690771                              | cert-expiration-690771       | jenkins | v1.32.0 | 16 Jan 24 03:35 UTC | 16 Jan 24 03:36 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-615980            | embed-certs-615980           | jenkins | v1.32.0 | 16 Jan 24 03:35 UTC | 16 Jan 24 03:35 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-615980                                  | embed-certs-615980           | jenkins | v1.32.0 | 16 Jan 24 03:35 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-666547             | no-preload-666547            | jenkins | v1.32.0 | 16 Jan 24 03:35 UTC | 16 Jan 24 03:35 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-666547                                   | no-preload-666547            | jenkins | v1.32.0 | 16 Jan 24 03:35 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-696770        | old-k8s-version-696770       | jenkins | v1.32.0 | 16 Jan 24 03:36 UTC | 16 Jan 24 03:36 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-696770                              | old-k8s-version-696770       | jenkins | v1.32.0 | 16 Jan 24 03:36 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-690771                              | cert-expiration-690771       | jenkins | v1.32.0 | 16 Jan 24 03:36 UTC | 16 Jan 24 03:36 UTC |
	| delete  | -p                                                     | disable-driver-mounts-673948 | jenkins | v1.32.0 | 16 Jan 24 03:36 UTC | 16 Jan 24 03:36 UTC |
	|         | disable-driver-mounts-673948                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-434445 | jenkins | v1.32.0 | 16 Jan 24 03:36 UTC | 16 Jan 24 03:37 UTC |
	|         | default-k8s-diff-port-434445                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-434445  | default-k8s-diff-port-434445 | jenkins | v1.32.0 | 16 Jan 24 03:37 UTC | 16 Jan 24 03:37 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-434445 | jenkins | v1.32.0 | 16 Jan 24 03:37 UTC |                     |
	|         | default-k8s-diff-port-434445                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-615980                 | embed-certs-615980           | jenkins | v1.32.0 | 16 Jan 24 03:38 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-666547                  | no-preload-666547            | jenkins | v1.32.0 | 16 Jan 24 03:38 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-615980                                  | embed-certs-615980           | jenkins | v1.32.0 | 16 Jan 24 03:38 UTC | 16 Jan 24 03:49 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| start   | -p no-preload-666547                                   | no-preload-666547            | jenkins | v1.32.0 | 16 Jan 24 03:38 UTC | 16 Jan 24 03:48 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-696770             | old-k8s-version-696770       | jenkins | v1.32.0 | 16 Jan 24 03:38 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-696770                              | old-k8s-version-696770       | jenkins | v1.32.0 | 16 Jan 24 03:38 UTC | 16 Jan 24 03:51 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-434445       | default-k8s-diff-port-434445 | jenkins | v1.32.0 | 16 Jan 24 03:40 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-434445 | jenkins | v1.32.0 | 16 Jan 24 03:40 UTC | 16 Jan 24 03:49 UTC |
	|         | default-k8s-diff-port-434445                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
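The Audit table wraps each invocation across several rows; reassembled onto one line (binary path as used elsewhere in this report), the metrics-server step recorded for this profile is roughly:

	# Reassembled from the Audit rows above; flags exactly as recorded in the table.
	out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-696770 \
	  --images=MetricsServer=registry.k8s.io/echoserver:1.4 \
	  --registries=MetricsServer=fake.domain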
	
	
	==> Last Start <==
	Log file created at: 2024/01/16 03:40:16
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0116 03:40:16.605622  507889 out.go:296] Setting OutFile to fd 1 ...
	I0116 03:40:16.605883  507889 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 03:40:16.605892  507889 out.go:309] Setting ErrFile to fd 2...
	I0116 03:40:16.605897  507889 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 03:40:16.606102  507889 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17965-468241/.minikube/bin
	I0116 03:40:16.606721  507889 out.go:303] Setting JSON to false
	I0116 03:40:16.607781  507889 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":15769,"bootTime":1705360648,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1048-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0116 03:40:16.607865  507889 start.go:138] virtualization: kvm guest
	I0116 03:40:16.610269  507889 out.go:177] * [default-k8s-diff-port-434445] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0116 03:40:16.611862  507889 out.go:177]   - MINIKUBE_LOCATION=17965
	I0116 03:40:16.611954  507889 notify.go:220] Checking for updates...
	I0116 03:40:16.613306  507889 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0116 03:40:16.615094  507889 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17965-468241/kubeconfig
	I0116 03:40:16.617044  507889 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17965-468241/.minikube
	I0116 03:40:16.618932  507889 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0116 03:40:16.621159  507889 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0116 03:40:16.623616  507889 config.go:182] Loaded profile config "default-k8s-diff-port-434445": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 03:40:16.624273  507889 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:40:16.624363  507889 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:40:16.640065  507889 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45633
	I0116 03:40:16.640642  507889 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:40:16.641273  507889 main.go:141] libmachine: Using API Version  1
	I0116 03:40:16.641301  507889 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:40:16.641696  507889 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:40:16.641901  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .DriverName
	I0116 03:40:16.642227  507889 driver.go:392] Setting default libvirt URI to qemu:///system
	I0116 03:40:16.642599  507889 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:40:16.642684  507889 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:40:16.658198  507889 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41877
	I0116 03:40:16.658657  507889 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:40:16.659207  507889 main.go:141] libmachine: Using API Version  1
	I0116 03:40:16.659233  507889 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:40:16.659588  507889 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:40:16.659844  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .DriverName
	I0116 03:40:16.698770  507889 out.go:177] * Using the kvm2 driver based on existing profile
	I0116 03:40:16.700307  507889 start.go:298] selected driver: kvm2
	I0116 03:40:16.700330  507889 start.go:902] validating driver "kvm2" against &{Name:default-k8s-diff-port-434445 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-434445 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.50.236 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 03:40:16.700478  507889 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0116 03:40:16.701296  507889 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0116 03:40:16.701389  507889 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17965-468241/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0116 03:40:16.717988  507889 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0116 03:40:16.718426  507889 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0116 03:40:16.718515  507889 cni.go:84] Creating CNI manager for ""
	I0116 03:40:16.718532  507889 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 03:40:16.718547  507889 start_flags.go:321] config:
	{Name:default-k8s-diff-port-434445 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-434445 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.50.236 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 03:40:16.718765  507889 iso.go:125] acquiring lock: {Name:mk2f3231f0eeeb23816ac363851489181398d1c6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0116 03:40:16.721292  507889 out.go:177] * Starting control plane node default-k8s-diff-port-434445 in cluster default-k8s-diff-port-434445
	I0116 03:40:16.722858  507889 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0116 03:40:16.722928  507889 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17965-468241/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0116 03:40:16.722942  507889 cache.go:56] Caching tarball of preloaded images
	I0116 03:40:16.723044  507889 preload.go:174] Found /home/jenkins/minikube-integration/17965-468241/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0116 03:40:16.723057  507889 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0116 03:40:16.723243  507889 profile.go:148] Saving config to /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/default-k8s-diff-port-434445/config.json ...
	I0116 03:40:16.723502  507889 start.go:365] acquiring machines lock for default-k8s-diff-port-434445: {Name:mk901e5fae8c1a578d989b520053c709bdbdcf06 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0116 03:40:22.140399  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:40:25.212385  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:40:31.292386  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:40:34.364375  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:40:40.444398  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:40:43.516372  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:40:49.596388  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:40:52.668394  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:40:58.748342  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:41:01.820436  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:41:07.900338  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:41:10.972410  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:41:17.052384  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:41:20.124427  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:41:26.204371  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:41:29.276361  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:41:35.356391  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:41:38.428383  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:41:44.508324  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:41:47.580377  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:41:53.660360  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:41:56.732377  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:42:02.812345  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:42:05.884406  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:42:11.964398  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:42:15.036469  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:42:21.116391  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:42:24.188397  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:42:30.268400  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:42:33.340416  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:42:39.420405  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:42:42.492396  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:42:48.572396  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:42:51.644367  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:42:57.724419  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:43:00.796427  507257 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.159:22: connect: no route to host
	I0116 03:43:03.800669  507339 start.go:369] acquired machines lock for "no-preload-666547" in 4m33.073406767s
	I0116 03:43:03.800732  507339 start.go:96] Skipping create...Using existing machine configuration
	I0116 03:43:03.800744  507339 fix.go:54] fixHost starting: 
	I0116 03:43:03.801330  507339 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:43:03.801381  507339 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:43:03.817309  507339 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46189
	I0116 03:43:03.817841  507339 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:43:03.818376  507339 main.go:141] libmachine: Using API Version  1
	I0116 03:43:03.818403  507339 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:43:03.818801  507339 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:43:03.819049  507339 main.go:141] libmachine: (no-preload-666547) Calling .DriverName
	I0116 03:43:03.819206  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetState
	I0116 03:43:03.821006  507339 fix.go:102] recreateIfNeeded on no-preload-666547: state=Stopped err=<nil>
	I0116 03:43:03.821031  507339 main.go:141] libmachine: (no-preload-666547) Calling .DriverName
	W0116 03:43:03.821210  507339 fix.go:128] unexpected machine state, will restart: <nil>
	I0116 03:43:03.823341  507339 out.go:177] * Restarting existing kvm2 VM for "no-preload-666547" ...
	I0116 03:43:03.824887  507339 main.go:141] libmachine: (no-preload-666547) Calling .Start
	I0116 03:43:03.825070  507339 main.go:141] libmachine: (no-preload-666547) Ensuring networks are active...
	I0116 03:43:03.825806  507339 main.go:141] libmachine: (no-preload-666547) Ensuring network default is active
	I0116 03:43:03.826151  507339 main.go:141] libmachine: (no-preload-666547) Ensuring network mk-no-preload-666547 is active
	I0116 03:43:03.826549  507339 main.go:141] libmachine: (no-preload-666547) Getting domain xml...
	I0116 03:43:03.827209  507339 main.go:141] libmachine: (no-preload-666547) Creating domain...
	I0116 03:43:04.166757  507339 main.go:141] libmachine: (no-preload-666547) Waiting to get IP...
	I0116 03:43:04.167846  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:04.168294  507339 main.go:141] libmachine: (no-preload-666547) DBG | unable to find current IP address of domain no-preload-666547 in network mk-no-preload-666547
	I0116 03:43:04.168400  507339 main.go:141] libmachine: (no-preload-666547) DBG | I0116 03:43:04.168281  508330 retry.go:31] will retry after 236.684347ms: waiting for machine to come up
	I0116 03:43:04.407068  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:04.407590  507339 main.go:141] libmachine: (no-preload-666547) DBG | unable to find current IP address of domain no-preload-666547 in network mk-no-preload-666547
	I0116 03:43:04.407626  507339 main.go:141] libmachine: (no-preload-666547) DBG | I0116 03:43:04.407520  508330 retry.go:31] will retry after 273.512454ms: waiting for machine to come up
	I0116 03:43:04.683173  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:04.683724  507339 main.go:141] libmachine: (no-preload-666547) DBG | unable to find current IP address of domain no-preload-666547 in network mk-no-preload-666547
	I0116 03:43:04.683759  507339 main.go:141] libmachine: (no-preload-666547) DBG | I0116 03:43:04.683652  508330 retry.go:31] will retry after 404.396132ms: waiting for machine to come up
	I0116 03:43:05.089306  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:05.089659  507339 main.go:141] libmachine: (no-preload-666547) DBG | unable to find current IP address of domain no-preload-666547 in network mk-no-preload-666547
	I0116 03:43:05.089687  507339 main.go:141] libmachine: (no-preload-666547) DBG | I0116 03:43:05.089612  508330 retry.go:31] will retry after 373.291662ms: waiting for machine to come up
	I0116 03:43:05.464216  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:05.464745  507339 main.go:141] libmachine: (no-preload-666547) DBG | unable to find current IP address of domain no-preload-666547 in network mk-no-preload-666547
	I0116 03:43:05.464772  507339 main.go:141] libmachine: (no-preload-666547) DBG | I0116 03:43:05.464696  508330 retry.go:31] will retry after 509.048348ms: waiting for machine to come up
	I0116 03:43:03.798483  507257 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0116 03:43:03.798553  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHHostname
	I0116 03:43:03.800507  507257 machine.go:91] provisioned docker machine in 4m37.39429533s
	I0116 03:43:03.800559  507257 fix.go:56] fixHost completed within 4m37.41769564s
	I0116 03:43:03.800568  507257 start.go:83] releasing machines lock for "embed-certs-615980", held for 4m37.417718822s
	W0116 03:43:03.800599  507257 start.go:694] error starting host: provision: host is not running
	W0116 03:43:03.800747  507257 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0116 03:43:03.800759  507257 start.go:709] Will try again in 5 seconds ...
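At this point the embed-certs-615980 host has failed provisioning ("host is not running") after roughly 4m37s of SSH dial errors against 192.168.72.159:22, and minikube releases the machines lock before retrying. Two hedged manual checks for such a window, assuming the libvirt domain is named after the profile (as the no-preload-666547 lines elsewhere in this log suggest):

	# Hypothetical host-side checks: libvirt domain state, then raw reachability of the guest's SSH port.
	sudo virsh domstate embed-certs-615980
	nc -vz -w 5 192.168.72.159 22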
	I0116 03:43:05.975369  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:05.975831  507339 main.go:141] libmachine: (no-preload-666547) DBG | unable to find current IP address of domain no-preload-666547 in network mk-no-preload-666547
	I0116 03:43:05.975864  507339 main.go:141] libmachine: (no-preload-666547) DBG | I0116 03:43:05.975776  508330 retry.go:31] will retry after 631.077965ms: waiting for machine to come up
	I0116 03:43:06.608722  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:06.609133  507339 main.go:141] libmachine: (no-preload-666547) DBG | unable to find current IP address of domain no-preload-666547 in network mk-no-preload-666547
	I0116 03:43:06.609162  507339 main.go:141] libmachine: (no-preload-666547) DBG | I0116 03:43:06.609074  508330 retry.go:31] will retry after 1.047586363s: waiting for machine to come up
	I0116 03:43:07.658264  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:07.658645  507339 main.go:141] libmachine: (no-preload-666547) DBG | unable to find current IP address of domain no-preload-666547 in network mk-no-preload-666547
	I0116 03:43:07.658696  507339 main.go:141] libmachine: (no-preload-666547) DBG | I0116 03:43:07.658591  508330 retry.go:31] will retry after 1.038644854s: waiting for machine to come up
	I0116 03:43:08.698946  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:08.699384  507339 main.go:141] libmachine: (no-preload-666547) DBG | unable to find current IP address of domain no-preload-666547 in network mk-no-preload-666547
	I0116 03:43:08.699411  507339 main.go:141] libmachine: (no-preload-666547) DBG | I0116 03:43:08.699347  508330 retry.go:31] will retry after 1.362032973s: waiting for machine to come up
	I0116 03:43:10.063269  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:10.063764  507339 main.go:141] libmachine: (no-preload-666547) DBG | unable to find current IP address of domain no-preload-666547 in network mk-no-preload-666547
	I0116 03:43:10.063792  507339 main.go:141] libmachine: (no-preload-666547) DBG | I0116 03:43:10.063714  508330 retry.go:31] will retry after 1.432317286s: waiting for machine to come up
	I0116 03:43:08.802803  507257 start.go:365] acquiring machines lock for embed-certs-615980: {Name:mk901e5fae8c1a578d989b520053c709bdbdcf06 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0116 03:43:11.498235  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:11.498714  507339 main.go:141] libmachine: (no-preload-666547) DBG | unable to find current IP address of domain no-preload-666547 in network mk-no-preload-666547
	I0116 03:43:11.498748  507339 main.go:141] libmachine: (no-preload-666547) DBG | I0116 03:43:11.498650  508330 retry.go:31] will retry after 2.490630326s: waiting for machine to come up
	I0116 03:43:13.991256  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:13.991717  507339 main.go:141] libmachine: (no-preload-666547) DBG | unable to find current IP address of domain no-preload-666547 in network mk-no-preload-666547
	I0116 03:43:13.991752  507339 main.go:141] libmachine: (no-preload-666547) DBG | I0116 03:43:13.991662  508330 retry.go:31] will retry after 3.569049736s: waiting for machine to come up
	I0116 03:43:17.565524  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:17.565893  507339 main.go:141] libmachine: (no-preload-666547) DBG | unable to find current IP address of domain no-preload-666547 in network mk-no-preload-666547
	I0116 03:43:17.565916  507339 main.go:141] libmachine: (no-preload-666547) DBG | I0116 03:43:17.565850  508330 retry.go:31] will retry after 2.875259098s: waiting for machine to come up
	I0116 03:43:20.443998  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:20.444472  507339 main.go:141] libmachine: (no-preload-666547) DBG | unable to find current IP address of domain no-preload-666547 in network mk-no-preload-666547
	I0116 03:43:20.444495  507339 main.go:141] libmachine: (no-preload-666547) DBG | I0116 03:43:20.444438  508330 retry.go:31] will retry after 4.319647454s: waiting for machine to come up
	I0116 03:43:24.765311  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:24.765836  507339 main.go:141] libmachine: (no-preload-666547) Found IP for machine: 192.168.39.103
	I0116 03:43:24.765862  507339 main.go:141] libmachine: (no-preload-666547) Reserving static IP address...
	I0116 03:43:24.765879  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has current primary IP address 192.168.39.103 and MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:24.766413  507339 main.go:141] libmachine: (no-preload-666547) DBG | found host DHCP lease matching {name: "no-preload-666547", mac: "52:54:00:4e:5f:03", ip: "192.168.39.103"} in network mk-no-preload-666547: {Iface:virbr2 ExpiryTime:2024-01-16 04:43:15 +0000 UTC Type:0 Mac:52:54:00:4e:5f:03 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:no-preload-666547 Clientid:01:52:54:00:4e:5f:03}
	I0116 03:43:24.766543  507339 main.go:141] libmachine: (no-preload-666547) Reserved static IP address: 192.168.39.103
	I0116 03:43:24.766575  507339 main.go:141] libmachine: (no-preload-666547) DBG | skip adding static IP to network mk-no-preload-666547 - found existing host DHCP lease matching {name: "no-preload-666547", mac: "52:54:00:4e:5f:03", ip: "192.168.39.103"}
	I0116 03:43:24.766593  507339 main.go:141] libmachine: (no-preload-666547) DBG | Getting to WaitForSSH function...
	I0116 03:43:24.766607  507339 main.go:141] libmachine: (no-preload-666547) Waiting for SSH to be available...
	I0116 03:43:24.768801  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:24.769145  507339 main.go:141] libmachine: (no-preload-666547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:5f:03", ip: ""} in network mk-no-preload-666547: {Iface:virbr2 ExpiryTime:2024-01-16 04:43:15 +0000 UTC Type:0 Mac:52:54:00:4e:5f:03 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:no-preload-666547 Clientid:01:52:54:00:4e:5f:03}
	I0116 03:43:24.769180  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined IP address 192.168.39.103 and MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:24.769392  507339 main.go:141] libmachine: (no-preload-666547) DBG | Using SSH client type: external
	I0116 03:43:24.769446  507339 main.go:141] libmachine: (no-preload-666547) DBG | Using SSH private key: /home/jenkins/minikube-integration/17965-468241/.minikube/machines/no-preload-666547/id_rsa (-rw-------)
	I0116 03:43:24.769490  507339 main.go:141] libmachine: (no-preload-666547) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.103 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17965-468241/.minikube/machines/no-preload-666547/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0116 03:43:24.769539  507339 main.go:141] libmachine: (no-preload-666547) DBG | About to run SSH command:
	I0116 03:43:24.769557  507339 main.go:141] libmachine: (no-preload-666547) DBG | exit 0
	I0116 03:43:24.860928  507339 main.go:141] libmachine: (no-preload-666547) DBG | SSH cmd err, output: <nil>: 
	I0116 03:43:24.861324  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetConfigRaw
	I0116 03:43:24.862217  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetIP
	I0116 03:43:24.865100  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:24.865468  507339 main.go:141] libmachine: (no-preload-666547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:5f:03", ip: ""} in network mk-no-preload-666547: {Iface:virbr2 ExpiryTime:2024-01-16 04:43:15 +0000 UTC Type:0 Mac:52:54:00:4e:5f:03 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:no-preload-666547 Clientid:01:52:54:00:4e:5f:03}
	I0116 03:43:24.865503  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined IP address 192.168.39.103 and MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:24.865804  507339 profile.go:148] Saving config to /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/no-preload-666547/config.json ...
	I0116 03:43:24.866064  507339 machine.go:88] provisioning docker machine ...
	I0116 03:43:24.866091  507339 main.go:141] libmachine: (no-preload-666547) Calling .DriverName
	I0116 03:43:24.866374  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetMachineName
	I0116 03:43:24.866590  507339 buildroot.go:166] provisioning hostname "no-preload-666547"
	I0116 03:43:24.866613  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetMachineName
	I0116 03:43:24.866795  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHHostname
	I0116 03:43:24.869231  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:24.869587  507339 main.go:141] libmachine: (no-preload-666547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:5f:03", ip: ""} in network mk-no-preload-666547: {Iface:virbr2 ExpiryTime:2024-01-16 04:43:15 +0000 UTC Type:0 Mac:52:54:00:4e:5f:03 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:no-preload-666547 Clientid:01:52:54:00:4e:5f:03}
	I0116 03:43:24.869623  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined IP address 192.168.39.103 and MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:24.869778  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHPort
	I0116 03:43:24.870002  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHKeyPath
	I0116 03:43:24.870168  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHKeyPath
	I0116 03:43:24.870304  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHUsername
	I0116 03:43:24.870455  507339 main.go:141] libmachine: Using SSH client type: native
	I0116 03:43:24.870929  507339 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.103 22 <nil> <nil>}
	I0116 03:43:24.870949  507339 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-666547 && echo "no-preload-666547" | sudo tee /etc/hostname
	I0116 03:43:25.005390  507339 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-666547
	
	I0116 03:43:25.005425  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHHostname
	I0116 03:43:25.008410  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:25.008801  507339 main.go:141] libmachine: (no-preload-666547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:5f:03", ip: ""} in network mk-no-preload-666547: {Iface:virbr2 ExpiryTime:2024-01-16 04:43:15 +0000 UTC Type:0 Mac:52:54:00:4e:5f:03 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:no-preload-666547 Clientid:01:52:54:00:4e:5f:03}
	I0116 03:43:25.008836  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined IP address 192.168.39.103 and MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:25.009007  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHPort
	I0116 03:43:25.009269  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHKeyPath
	I0116 03:43:25.009432  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHKeyPath
	I0116 03:43:25.009561  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHUsername
	I0116 03:43:25.009722  507339 main.go:141] libmachine: Using SSH client type: native
	I0116 03:43:25.010051  507339 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.103 22 <nil> <nil>}
	I0116 03:43:25.010071  507339 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-666547' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-666547/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-666547' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0116 03:43:25.142889  507339 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0116 03:43:25.142928  507339 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17965-468241/.minikube CaCertPath:/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17965-468241/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17965-468241/.minikube}
	I0116 03:43:25.142950  507339 buildroot.go:174] setting up certificates
	I0116 03:43:25.142963  507339 provision.go:83] configureAuth start
	I0116 03:43:25.142973  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetMachineName
	I0116 03:43:25.143294  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetIP
	I0116 03:43:25.146355  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:25.146746  507339 main.go:141] libmachine: (no-preload-666547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:5f:03", ip: ""} in network mk-no-preload-666547: {Iface:virbr2 ExpiryTime:2024-01-16 04:43:15 +0000 UTC Type:0 Mac:52:54:00:4e:5f:03 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:no-preload-666547 Clientid:01:52:54:00:4e:5f:03}
	I0116 03:43:25.146767  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined IP address 192.168.39.103 and MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:25.147063  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHHostname
	I0116 03:43:25.149867  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:25.150231  507339 main.go:141] libmachine: (no-preload-666547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:5f:03", ip: ""} in network mk-no-preload-666547: {Iface:virbr2 ExpiryTime:2024-01-16 04:43:15 +0000 UTC Type:0 Mac:52:54:00:4e:5f:03 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:no-preload-666547 Clientid:01:52:54:00:4e:5f:03}
	I0116 03:43:25.150260  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined IP address 192.168.39.103 and MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:25.150448  507339 provision.go:138] copyHostCerts
	I0116 03:43:25.150531  507339 exec_runner.go:144] found /home/jenkins/minikube-integration/17965-468241/.minikube/cert.pem, removing ...
	I0116 03:43:25.150543  507339 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17965-468241/.minikube/cert.pem
	I0116 03:43:25.150618  507339 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17965-468241/.minikube/cert.pem (1123 bytes)
	I0116 03:43:25.150719  507339 exec_runner.go:144] found /home/jenkins/minikube-integration/17965-468241/.minikube/key.pem, removing ...
	I0116 03:43:25.150729  507339 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17965-468241/.minikube/key.pem
	I0116 03:43:25.150755  507339 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17965-468241/.minikube/key.pem (1679 bytes)
	I0116 03:43:25.150815  507339 exec_runner.go:144] found /home/jenkins/minikube-integration/17965-468241/.minikube/ca.pem, removing ...
	I0116 03:43:25.150823  507339 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17965-468241/.minikube/ca.pem
	I0116 03:43:25.150843  507339 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17965-468241/.minikube/ca.pem (1078 bytes)
	I0116 03:43:25.150888  507339 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17965-468241/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca-key.pem org=jenkins.no-preload-666547 san=[192.168.39.103 192.168.39.103 localhost 127.0.0.1 minikube no-preload-666547]
	I0116 03:43:25.417982  507339 provision.go:172] copyRemoteCerts
	I0116 03:43:25.418059  507339 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0116 03:43:25.418088  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHHostname
	I0116 03:43:25.420908  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:25.421196  507339 main.go:141] libmachine: (no-preload-666547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:5f:03", ip: ""} in network mk-no-preload-666547: {Iface:virbr2 ExpiryTime:2024-01-16 04:43:15 +0000 UTC Type:0 Mac:52:54:00:4e:5f:03 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:no-preload-666547 Clientid:01:52:54:00:4e:5f:03}
	I0116 03:43:25.421235  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined IP address 192.168.39.103 and MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:25.421372  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHPort
	I0116 03:43:25.421609  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHKeyPath
	I0116 03:43:25.421782  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHUsername
	I0116 03:43:25.421952  507339 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/no-preload-666547/id_rsa Username:docker}
	I0116 03:43:25.509876  507339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0116 03:43:25.534885  507339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0116 03:43:25.562593  507339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0116 03:43:25.590106  507339 provision.go:86] duration metric: configureAuth took 447.124389ms
	I0116 03:43:25.590145  507339 buildroot.go:189] setting minikube options for container-runtime
	I0116 03:43:25.590386  507339 config.go:182] Loaded profile config "no-preload-666547": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0116 03:43:25.590475  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHHostname
	I0116 03:43:25.593695  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:25.594125  507339 main.go:141] libmachine: (no-preload-666547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:5f:03", ip: ""} in network mk-no-preload-666547: {Iface:virbr2 ExpiryTime:2024-01-16 04:43:15 +0000 UTC Type:0 Mac:52:54:00:4e:5f:03 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:no-preload-666547 Clientid:01:52:54:00:4e:5f:03}
	I0116 03:43:25.594182  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined IP address 192.168.39.103 and MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:25.594407  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHPort
	I0116 03:43:25.594661  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHKeyPath
	I0116 03:43:25.594851  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHKeyPath
	I0116 03:43:25.595124  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHUsername
	I0116 03:43:25.595362  507339 main.go:141] libmachine: Using SSH client type: native
	I0116 03:43:25.595735  507339 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.103 22 <nil> <nil>}
	I0116 03:43:25.595753  507339 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0116 03:43:26.177541  507510 start.go:369] acquired machines lock for "old-k8s-version-696770" in 4m36.503560035s
	I0116 03:43:26.177612  507510 start.go:96] Skipping create...Using existing machine configuration
	I0116 03:43:26.177621  507510 fix.go:54] fixHost starting: 
	I0116 03:43:26.178073  507510 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:43:26.178115  507510 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:43:26.194930  507510 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35119
	I0116 03:43:26.195420  507510 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:43:26.195898  507510 main.go:141] libmachine: Using API Version  1
	I0116 03:43:26.195925  507510 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:43:26.196303  507510 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:43:26.196517  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .DriverName
	I0116 03:43:26.196797  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetState
	I0116 03:43:26.198728  507510 fix.go:102] recreateIfNeeded on old-k8s-version-696770: state=Stopped err=<nil>
	I0116 03:43:26.198759  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .DriverName
	W0116 03:43:26.198959  507510 fix.go:128] unexpected machine state, will restart: <nil>
	I0116 03:43:26.201929  507510 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-696770" ...
	I0116 03:43:25.916953  507339 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0116 03:43:25.916987  507339 machine.go:91] provisioned docker machine in 1.05090319s
	I0116 03:43:25.917013  507339 start.go:300] post-start starting for "no-preload-666547" (driver="kvm2")
	I0116 03:43:25.917045  507339 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0116 03:43:25.917070  507339 main.go:141] libmachine: (no-preload-666547) Calling .DriverName
	I0116 03:43:25.917472  507339 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0116 03:43:25.917510  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHHostname
	I0116 03:43:25.920700  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:25.921097  507339 main.go:141] libmachine: (no-preload-666547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:5f:03", ip: ""} in network mk-no-preload-666547: {Iface:virbr2 ExpiryTime:2024-01-16 04:43:15 +0000 UTC Type:0 Mac:52:54:00:4e:5f:03 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:no-preload-666547 Clientid:01:52:54:00:4e:5f:03}
	I0116 03:43:25.921132  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined IP address 192.168.39.103 and MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:25.921386  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHPort
	I0116 03:43:25.921663  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHKeyPath
	I0116 03:43:25.921877  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHUsername
	I0116 03:43:25.922086  507339 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/no-preload-666547/id_rsa Username:docker}
	I0116 03:43:26.011987  507339 ssh_runner.go:195] Run: cat /etc/os-release
	I0116 03:43:26.016777  507339 info.go:137] Remote host: Buildroot 2021.02.12
	I0116 03:43:26.016813  507339 filesync.go:126] Scanning /home/jenkins/minikube-integration/17965-468241/.minikube/addons for local assets ...
	I0116 03:43:26.016889  507339 filesync.go:126] Scanning /home/jenkins/minikube-integration/17965-468241/.minikube/files for local assets ...
	I0116 03:43:26.016985  507339 filesync.go:149] local asset: /home/jenkins/minikube-integration/17965-468241/.minikube/files/etc/ssl/certs/4754782.pem -> 4754782.pem in /etc/ssl/certs
	I0116 03:43:26.017109  507339 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0116 03:43:26.027303  507339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/files/etc/ssl/certs/4754782.pem --> /etc/ssl/certs/4754782.pem (1708 bytes)
	I0116 03:43:26.051806  507339 start.go:303] post-start completed in 134.758948ms
	I0116 03:43:26.051849  507339 fix.go:56] fixHost completed within 22.25110408s
	I0116 03:43:26.051881  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHHostname
	I0116 03:43:26.055165  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:26.055568  507339 main.go:141] libmachine: (no-preload-666547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:5f:03", ip: ""} in network mk-no-preload-666547: {Iface:virbr2 ExpiryTime:2024-01-16 04:43:15 +0000 UTC Type:0 Mac:52:54:00:4e:5f:03 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:no-preload-666547 Clientid:01:52:54:00:4e:5f:03}
	I0116 03:43:26.055605  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined IP address 192.168.39.103 and MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:26.055763  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHPort
	I0116 03:43:26.055983  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHKeyPath
	I0116 03:43:26.056222  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHKeyPath
	I0116 03:43:26.056407  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHUsername
	I0116 03:43:26.056579  507339 main.go:141] libmachine: Using SSH client type: native
	I0116 03:43:26.056930  507339 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.103 22 <nil> <nil>}
	I0116 03:43:26.056948  507339 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0116 03:43:26.177329  507339 main.go:141] libmachine: SSH cmd err, output: <nil>: 1705376606.122912048
	
	I0116 03:43:26.177360  507339 fix.go:206] guest clock: 1705376606.122912048
	I0116 03:43:26.177367  507339 fix.go:219] Guest: 2024-01-16 03:43:26.122912048 +0000 UTC Remote: 2024-01-16 03:43:26.051855053 +0000 UTC m=+295.486361610 (delta=71.056995ms)
	I0116 03:43:26.177424  507339 fix.go:190] guest clock delta is within tolerance: 71.056995ms
	I0116 03:43:26.177430  507339 start.go:83] releasing machines lock for "no-preload-666547", held for 22.376720152s
	I0116 03:43:26.177461  507339 main.go:141] libmachine: (no-preload-666547) Calling .DriverName
	I0116 03:43:26.177761  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetIP
	I0116 03:43:26.180783  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:26.181087  507339 main.go:141] libmachine: (no-preload-666547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:5f:03", ip: ""} in network mk-no-preload-666547: {Iface:virbr2 ExpiryTime:2024-01-16 04:43:15 +0000 UTC Type:0 Mac:52:54:00:4e:5f:03 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:no-preload-666547 Clientid:01:52:54:00:4e:5f:03}
	I0116 03:43:26.181117  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined IP address 192.168.39.103 and MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:26.181281  507339 main.go:141] libmachine: (no-preload-666547) Calling .DriverName
	I0116 03:43:26.181876  507339 main.go:141] libmachine: (no-preload-666547) Calling .DriverName
	I0116 03:43:26.182068  507339 main.go:141] libmachine: (no-preload-666547) Calling .DriverName
	I0116 03:43:26.182154  507339 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0116 03:43:26.182203  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHHostname
	I0116 03:43:26.182337  507339 ssh_runner.go:195] Run: cat /version.json
	I0116 03:43:26.182366  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHHostname
	I0116 03:43:26.185253  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:26.185403  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:26.185625  507339 main.go:141] libmachine: (no-preload-666547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:5f:03", ip: ""} in network mk-no-preload-666547: {Iface:virbr2 ExpiryTime:2024-01-16 04:43:15 +0000 UTC Type:0 Mac:52:54:00:4e:5f:03 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:no-preload-666547 Clientid:01:52:54:00:4e:5f:03}
	I0116 03:43:26.185655  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined IP address 192.168.39.103 and MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:26.185807  507339 main.go:141] libmachine: (no-preload-666547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:5f:03", ip: ""} in network mk-no-preload-666547: {Iface:virbr2 ExpiryTime:2024-01-16 04:43:15 +0000 UTC Type:0 Mac:52:54:00:4e:5f:03 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:no-preload-666547 Clientid:01:52:54:00:4e:5f:03}
	I0116 03:43:26.185816  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHPort
	I0116 03:43:26.185855  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined IP address 192.168.39.103 and MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:26.185966  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHPort
	I0116 03:43:26.186041  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHKeyPath
	I0116 03:43:26.186137  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHKeyPath
	I0116 03:43:26.186220  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHUsername
	I0116 03:43:26.186306  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHUsername
	I0116 03:43:26.186383  507339 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/no-preload-666547/id_rsa Username:docker}
	I0116 03:43:26.186428  507339 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/no-preload-666547/id_rsa Username:docker}
	I0116 03:43:26.312441  507339 ssh_runner.go:195] Run: systemctl --version
	I0116 03:43:26.319016  507339 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0116 03:43:26.469427  507339 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0116 03:43:26.475759  507339 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0116 03:43:26.475896  507339 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0116 03:43:26.491920  507339 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0116 03:43:26.491952  507339 start.go:475] detecting cgroup driver to use...
	I0116 03:43:26.492112  507339 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0116 03:43:26.508122  507339 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0116 03:43:26.523664  507339 docker.go:217] disabling cri-docker service (if available) ...
	I0116 03:43:26.523754  507339 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0116 03:43:26.540173  507339 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0116 03:43:26.557370  507339 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0116 03:43:26.685134  507339 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0116 03:43:26.806555  507339 docker.go:233] disabling docker service ...
	I0116 03:43:26.806640  507339 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0116 03:43:26.821910  507339 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0116 03:43:26.836619  507339 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0116 03:43:26.950601  507339 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0116 03:43:27.077586  507339 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0116 03:43:27.091892  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0116 03:43:27.111772  507339 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0116 03:43:27.111856  507339 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 03:43:27.122183  507339 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0116 03:43:27.122261  507339 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 03:43:27.132861  507339 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 03:43:27.144003  507339 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 03:43:27.154747  507339 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0116 03:43:27.166236  507339 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0116 03:43:27.175337  507339 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0116 03:43:27.175410  507339 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0116 03:43:27.190891  507339 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0116 03:43:27.201216  507339 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0116 03:43:27.322701  507339 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0116 03:43:27.504197  507339 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0116 03:43:27.504292  507339 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0116 03:43:27.509879  507339 start.go:543] Will wait 60s for crictl version
	I0116 03:43:27.509972  507339 ssh_runner.go:195] Run: which crictl
	I0116 03:43:27.514555  507339 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0116 03:43:27.556338  507339 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0116 03:43:27.556444  507339 ssh_runner.go:195] Run: crio --version
	I0116 03:43:27.615814  507339 ssh_runner.go:195] Run: crio --version
	I0116 03:43:27.666262  507339 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.24.1 ...
	I0116 03:43:26.203694  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .Start
	I0116 03:43:26.203950  507510 main.go:141] libmachine: (old-k8s-version-696770) Ensuring networks are active...
	I0116 03:43:26.204831  507510 main.go:141] libmachine: (old-k8s-version-696770) Ensuring network default is active
	I0116 03:43:26.205251  507510 main.go:141] libmachine: (old-k8s-version-696770) Ensuring network mk-old-k8s-version-696770 is active
	I0116 03:43:26.205763  507510 main.go:141] libmachine: (old-k8s-version-696770) Getting domain xml...
	I0116 03:43:26.206485  507510 main.go:141] libmachine: (old-k8s-version-696770) Creating domain...
	I0116 03:43:26.558284  507510 main.go:141] libmachine: (old-k8s-version-696770) Waiting to get IP...
	I0116 03:43:26.559270  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:26.559701  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | unable to find current IP address of domain old-k8s-version-696770 in network mk-old-k8s-version-696770
	I0116 03:43:26.559793  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | I0116 03:43:26.559692  508427 retry.go:31] will retry after 243.799089ms: waiting for machine to come up
	I0116 03:43:26.805411  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:26.805914  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | unable to find current IP address of domain old-k8s-version-696770 in network mk-old-k8s-version-696770
	I0116 03:43:26.805948  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | I0116 03:43:26.805846  508427 retry.go:31] will retry after 346.727587ms: waiting for machine to come up
	I0116 03:43:27.154528  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:27.155074  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | unable to find current IP address of domain old-k8s-version-696770 in network mk-old-k8s-version-696770
	I0116 03:43:27.155107  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | I0116 03:43:27.155023  508427 retry.go:31] will retry after 357.633471ms: waiting for machine to come up
	I0116 03:43:27.514870  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:27.515490  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | unable to find current IP address of domain old-k8s-version-696770 in network mk-old-k8s-version-696770
	I0116 03:43:27.515517  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | I0116 03:43:27.515452  508427 retry.go:31] will retry after 582.001218ms: waiting for machine to come up
	I0116 03:43:28.099271  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:28.099783  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | unable to find current IP address of domain old-k8s-version-696770 in network mk-old-k8s-version-696770
	I0116 03:43:28.099817  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | I0116 03:43:28.099735  508427 retry.go:31] will retry after 747.661188ms: waiting for machine to come up
	I0116 03:43:28.849318  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:28.849836  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | unable to find current IP address of domain old-k8s-version-696770 in network mk-old-k8s-version-696770
	I0116 03:43:28.849872  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | I0116 03:43:28.849799  508427 retry.go:31] will retry after 953.610286ms: waiting for machine to come up
	I0116 03:43:27.667889  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetIP
	I0116 03:43:27.671385  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:27.671804  507339 main.go:141] libmachine: (no-preload-666547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:5f:03", ip: ""} in network mk-no-preload-666547: {Iface:virbr2 ExpiryTime:2024-01-16 04:43:15 +0000 UTC Type:0 Mac:52:54:00:4e:5f:03 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:no-preload-666547 Clientid:01:52:54:00:4e:5f:03}
	I0116 03:43:27.671840  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined IP address 192.168.39.103 and MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:43:27.672113  507339 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0116 03:43:27.676693  507339 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 03:43:27.690701  507339 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0116 03:43:27.690748  507339 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 03:43:27.731189  507339 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I0116 03:43:27.731219  507339 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.2 registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 registry.k8s.io/kube-scheduler:v1.29.0-rc.2 registry.k8s.io/kube-proxy:v1.29.0-rc.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0116 03:43:27.731321  507339 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 03:43:27.731358  507339 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I0116 03:43:27.731370  507339 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0116 03:43:27.731404  507339 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0116 03:43:27.731441  507339 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0116 03:43:27.731352  507339 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0116 03:43:27.731322  507339 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0116 03:43:27.731322  507339 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0116 03:43:27.733105  507339 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0116 03:43:27.733119  507339 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 03:43:27.733171  507339 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I0116 03:43:27.733171  507339 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0116 03:43:27.733110  507339 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0116 03:43:27.733118  507339 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0116 03:43:27.733113  507339 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0116 03:43:27.733270  507339 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0116 03:43:27.900005  507339 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I0116 03:43:27.901232  507339 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0116 03:43:27.903964  507339 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0116 03:43:27.907543  507339 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0116 03:43:27.908417  507339 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0116 03:43:27.909137  507339 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0116 03:43:27.953586  507339 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0116 03:43:28.024252  507339 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I0116 03:43:28.024310  507339 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I0116 03:43:28.024366  507339 ssh_runner.go:195] Run: which crictl
	I0116 03:43:28.042716  507339 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 03:43:28.078379  507339 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" does not exist at hash "d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d" in container runtime
	I0116 03:43:28.078438  507339 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0116 03:43:28.078503  507339 ssh_runner.go:195] Run: which crictl
	I0116 03:43:28.179590  507339 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" does not exist at hash "4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210" in container runtime
	I0116 03:43:28.179612  507339 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" does not exist at hash "bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f" in container runtime
	I0116 03:43:28.179661  507339 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0116 03:43:28.179661  507339 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0116 03:43:28.179720  507339 ssh_runner.go:195] Run: which crictl
	I0116 03:43:28.179722  507339 ssh_runner.go:195] Run: which crictl
	I0116 03:43:28.179729  507339 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0116 03:43:28.179750  507339 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0116 03:43:28.179785  507339 ssh_runner.go:195] Run: which crictl
	I0116 03:43:28.179804  507339 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.2" does not exist at hash "cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834" in container runtime
	I0116 03:43:28.179865  507339 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0116 03:43:28.179906  507339 ssh_runner.go:195] Run: which crictl
	I0116 03:43:28.179812  507339 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I0116 03:43:28.179950  507339 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0116 03:43:28.179977  507339 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 03:43:28.180011  507339 ssh_runner.go:195] Run: which crictl
	I0116 03:43:28.180009  507339 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0116 03:43:28.196999  507339 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0116 03:43:28.197021  507339 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0116 03:43:28.197157  507339 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0116 03:43:28.305002  507339 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0116 03:43:28.305117  507339 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17965-468241/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I0116 03:43:28.305044  507339 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 03:43:28.305231  507339 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.10-0
	I0116 03:43:28.317016  507339 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17965-468241/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2
	I0116 03:43:28.317149  507339 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0116 03:43:28.346291  507339 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17965-468241/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2
	I0116 03:43:28.346393  507339 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17965-468241/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2
	I0116 03:43:28.346434  507339 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0116 03:43:28.346518  507339 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0116 03:43:28.346547  507339 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17965-468241/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0116 03:43:28.346598  507339 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.10-0 (exists)
	I0116 03:43:28.346618  507339 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.10-0
	I0116 03:43:28.346631  507339 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0116 03:43:28.346650  507339 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
	I0116 03:43:28.425129  507339 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17965-468241/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0116 03:43:28.425217  507339 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2 (exists)
	I0116 03:43:28.425319  507339 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0116 03:43:28.425317  507339 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2 (exists)
	I0116 03:43:28.425377  507339 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17965-468241/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2
	I0116 03:43:28.425391  507339 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2 (exists)
	I0116 03:43:28.425441  507339 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0116 03:43:29.805277  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:29.805642  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | unable to find current IP address of domain old-k8s-version-696770 in network mk-old-k8s-version-696770
	I0116 03:43:29.805677  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | I0116 03:43:29.805586  508427 retry.go:31] will retry after 734.396993ms: waiting for machine to come up
	I0116 03:43:30.541337  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:30.541794  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | unable to find current IP address of domain old-k8s-version-696770 in network mk-old-k8s-version-696770
	I0116 03:43:30.541828  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | I0116 03:43:30.541741  508427 retry.go:31] will retry after 1.035836118s: waiting for machine to come up
	I0116 03:43:31.579576  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:31.580093  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | unable to find current IP address of domain old-k8s-version-696770 in network mk-old-k8s-version-696770
	I0116 03:43:31.580118  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | I0116 03:43:31.580070  508427 retry.go:31] will retry after 1.723172168s: waiting for machine to come up
	I0116 03:43:33.305247  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:33.305726  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | unable to find current IP address of domain old-k8s-version-696770 in network mk-old-k8s-version-696770
	I0116 03:43:33.305759  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | I0116 03:43:33.305669  508427 retry.go:31] will retry after 1.465747661s: waiting for machine to come up
	I0116 03:43:32.396858  507339 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1: (4.050189724s)
	I0116 03:43:32.396913  507339 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0116 03:43:32.396956  507339 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (3.971489155s)
	I0116 03:43:32.397006  507339 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2 (exists)
	I0116 03:43:32.397028  507339 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (3.971686012s)
	I0116 03:43:32.397043  507339 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0116 03:43:32.397050  507339 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0: (4.050383438s)
	I0116 03:43:32.397063  507339 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17965-468241/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 from cache
	I0116 03:43:32.397093  507339 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0116 03:43:32.397172  507339 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0116 03:43:35.381615  507339 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (2.98440652s)
	I0116 03:43:35.381660  507339 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17965-468241/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 from cache
	I0116 03:43:35.381699  507339 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0116 03:43:35.381759  507339 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0116 03:43:34.773552  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:34.774149  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | unable to find current IP address of domain old-k8s-version-696770 in network mk-old-k8s-version-696770
	I0116 03:43:34.774182  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | I0116 03:43:34.774084  508427 retry.go:31] will retry after 1.94747868s: waiting for machine to come up
	I0116 03:43:36.722855  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:36.723416  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | unable to find current IP address of domain old-k8s-version-696770 in network mk-old-k8s-version-696770
	I0116 03:43:36.723448  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | I0116 03:43:36.723365  508427 retry.go:31] will retry after 2.550966562s: waiting for machine to come up
	I0116 03:43:39.276082  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:39.276671  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | unable to find current IP address of domain old-k8s-version-696770 in network mk-old-k8s-version-696770
	I0116 03:43:39.276710  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | I0116 03:43:39.276608  508427 retry.go:31] will retry after 3.317854993s: waiting for machine to come up
	I0116 03:43:38.162725  507339 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (2.780935577s)
	I0116 03:43:38.162760  507339 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17965-468241/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 from cache
	I0116 03:43:38.162792  507339 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0116 03:43:38.162843  507339 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0116 03:43:39.527575  507339 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (1.36469752s)
	I0116 03:43:39.527612  507339 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17965-468241/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 from cache
	I0116 03:43:39.527639  507339 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0116 03:43:39.527696  507339 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0116 03:43:42.595994  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:42.596424  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | unable to find current IP address of domain old-k8s-version-696770 in network mk-old-k8s-version-696770
	I0116 03:43:42.596458  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | I0116 03:43:42.596364  508427 retry.go:31] will retry after 4.913808783s: waiting for machine to come up
	I0116 03:43:41.690968  507339 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.16323953s)
	I0116 03:43:41.691007  507339 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17965-468241/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0116 03:43:41.691045  507339 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0116 03:43:41.691100  507339 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0116 03:43:43.849988  507339 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (2.158855886s)
	I0116 03:43:43.850023  507339 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17965-468241/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 from cache
	I0116 03:43:43.850052  507339 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0116 03:43:43.850107  507339 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0116 03:43:44.597660  507339 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17965-468241/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0116 03:43:44.597710  507339 cache_images.go:123] Successfully loaded all cached images
	I0116 03:43:44.597715  507339 cache_images.go:92] LoadImages completed in 16.866481277s
	I0116 03:43:44.597788  507339 ssh_runner.go:195] Run: crio config
	I0116 03:43:44.658055  507339 cni.go:84] Creating CNI manager for ""
	I0116 03:43:44.658081  507339 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 03:43:44.658104  507339 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0116 03:43:44.658124  507339 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.103 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-666547 NodeName:no-preload-666547 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.103"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.103 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0116 03:43:44.658290  507339 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.103
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-666547"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.103
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.103"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0116 03:43:44.658371  507339 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-666547 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.103
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-666547 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0116 03:43:44.658431  507339 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0116 03:43:44.668859  507339 binaries.go:44] Found k8s binaries, skipping transfer
	I0116 03:43:44.668934  507339 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0116 03:43:44.678543  507339 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (382 bytes)
	I0116 03:43:44.694998  507339 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0116 03:43:44.711256  507339 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2109 bytes)
	I0116 03:43:44.728203  507339 ssh_runner.go:195] Run: grep 192.168.39.103	control-plane.minikube.internal$ /etc/hosts
	I0116 03:43:44.732219  507339 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.103	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 03:43:44.744687  507339 certs.go:56] Setting up /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/no-preload-666547 for IP: 192.168.39.103
	I0116 03:43:44.744730  507339 certs.go:190] acquiring lock for shared ca certs: {Name:mkab5aeea3e0ed69f3592dc8f418e07c6075b130 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:43:44.744957  507339 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17965-468241/.minikube/ca.key
	I0116 03:43:44.745014  507339 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17965-468241/.minikube/proxy-client-ca.key
	I0116 03:43:44.745133  507339 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/no-preload-666547/client.key
	I0116 03:43:44.745226  507339 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/no-preload-666547/apiserver.key.f0189397
	I0116 03:43:44.745293  507339 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/no-preload-666547/proxy-client.key
	I0116 03:43:44.745431  507339 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/475478.pem (1338 bytes)
	W0116 03:43:44.745471  507339 certs.go:433] ignoring /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/475478_empty.pem, impossibly tiny 0 bytes
	I0116 03:43:44.745488  507339 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca-key.pem (1679 bytes)
	I0116 03:43:44.745541  507339 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem (1078 bytes)
	I0116 03:43:44.745582  507339 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/cert.pem (1123 bytes)
	I0116 03:43:44.745620  507339 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/key.pem (1679 bytes)
	I0116 03:43:44.745687  507339 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17965-468241/.minikube/files/etc/ssl/certs/4754782.pem (1708 bytes)
	I0116 03:43:44.746558  507339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/no-preload-666547/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0116 03:43:44.770889  507339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/no-preload-666547/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0116 03:43:44.795150  507339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/no-preload-666547/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0116 03:43:44.818047  507339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/no-preload-666547/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0116 03:43:44.842003  507339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0116 03:43:44.866125  507339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0116 03:43:44.890235  507339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0116 03:43:44.913732  507339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0116 03:43:44.937249  507339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/certs/475478.pem --> /usr/share/ca-certificates/475478.pem (1338 bytes)
	I0116 03:43:44.961628  507339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/files/etc/ssl/certs/4754782.pem --> /usr/share/ca-certificates/4754782.pem (1708 bytes)
	I0116 03:43:44.986672  507339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0116 03:43:45.010735  507339 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0116 03:43:45.028537  507339 ssh_runner.go:195] Run: openssl version
	I0116 03:43:45.034910  507339 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/475478.pem && ln -fs /usr/share/ca-certificates/475478.pem /etc/ssl/certs/475478.pem"
	I0116 03:43:45.046034  507339 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/475478.pem
	I0116 03:43:45.050965  507339 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 16 02:44 /usr/share/ca-certificates/475478.pem
	I0116 03:43:45.051059  507339 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/475478.pem
	I0116 03:43:45.057465  507339 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/475478.pem /etc/ssl/certs/51391683.0"
	I0116 03:43:45.068400  507339 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4754782.pem && ln -fs /usr/share/ca-certificates/4754782.pem /etc/ssl/certs/4754782.pem"
	I0116 03:43:45.079619  507339 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4754782.pem
	I0116 03:43:45.084545  507339 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 16 02:44 /usr/share/ca-certificates/4754782.pem
	I0116 03:43:45.084622  507339 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4754782.pem
	I0116 03:43:45.090638  507339 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4754782.pem /etc/ssl/certs/3ec20f2e.0"
	I0116 03:43:45.101658  507339 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0116 03:43:45.113091  507339 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0116 03:43:45.118085  507339 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 16 02:35 /usr/share/ca-certificates/minikubeCA.pem
	I0116 03:43:45.118153  507339 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0116 03:43:45.124100  507339 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0116 03:43:45.135338  507339 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0116 03:43:45.140230  507339 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0116 03:43:45.146566  507339 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0116 03:43:45.152839  507339 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0116 03:43:45.158917  507339 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0116 03:43:45.164984  507339 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0116 03:43:45.171049  507339 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
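(The six "openssl x509 ... -checkend 86400" runs above verify that none of the cluster certificates expire within the next 24 hours before the restart proceeds; openssl exits non-zero when the certificate expires within the given number of seconds. An equivalent check in Go, as a sketch only; the certificate path is whatever PEM file it is pointed at.)

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// the same condition that makes "openssl x509 -checkend <seconds>" fail.
func expiresWithin(path string, d time.Duration) (bool, error) {
	raw, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin(os.Args[1], 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(2)
	}
	if soon {
		fmt.Println("certificate expires within 24h")
		os.Exit(1)
	}
	fmt.Println("certificate is valid for at least another 24h")
}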
	I0116 03:43:45.177547  507339 kubeadm.go:404] StartCluster: {Name:no-preload-666547 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-666547 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.103 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 03:43:45.177657  507339 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0116 03:43:45.177719  507339 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0116 03:43:45.221757  507339 cri.go:89] found id: ""
	I0116 03:43:45.221848  507339 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0116 03:43:45.233811  507339 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0116 03:43:45.233838  507339 kubeadm.go:636] restartCluster start
	I0116 03:43:45.233906  507339 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0116 03:43:45.244810  507339 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:43:45.245999  507339 kubeconfig.go:92] found "no-preload-666547" server: "https://192.168.39.103:8443"
	I0116 03:43:45.248711  507339 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0116 03:43:45.260979  507339 api_server.go:166] Checking apiserver status ...
	I0116 03:43:45.261066  507339 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:43:45.276682  507339 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:43:48.709239  507889 start.go:369] acquired machines lock for "default-k8s-diff-port-434445" in 3m31.985691976s
	I0116 03:43:48.709311  507889 start.go:96] Skipping create...Using existing machine configuration
	I0116 03:43:48.709333  507889 fix.go:54] fixHost starting: 
	I0116 03:43:48.709815  507889 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:43:48.709867  507889 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:43:48.726637  507889 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45373
	I0116 03:43:48.727122  507889 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:43:48.727702  507889 main.go:141] libmachine: Using API Version  1
	I0116 03:43:48.727737  507889 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:43:48.728104  507889 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:43:48.728310  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .DriverName
	I0116 03:43:48.728475  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetState
	I0116 03:43:48.730338  507889 fix.go:102] recreateIfNeeded on default-k8s-diff-port-434445: state=Stopped err=<nil>
	I0116 03:43:48.730361  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .DriverName
	W0116 03:43:48.730545  507889 fix.go:128] unexpected machine state, will restart: <nil>
	I0116 03:43:48.733848  507889 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-434445" ...
	I0116 03:43:47.512288  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:47.512755  507510 main.go:141] libmachine: (old-k8s-version-696770) Found IP for machine: 192.168.61.167
	I0116 03:43:47.512793  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has current primary IP address 192.168.61.167 and MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:47.512804  507510 main.go:141] libmachine: (old-k8s-version-696770) Reserving static IP address...
	I0116 03:43:47.513157  507510 main.go:141] libmachine: (old-k8s-version-696770) Reserved static IP address: 192.168.61.167
	I0116 03:43:47.513194  507510 main.go:141] libmachine: (old-k8s-version-696770) Waiting for SSH to be available...
	I0116 03:43:47.513218  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | found host DHCP lease matching {name: "old-k8s-version-696770", mac: "52:54:00:37:20:1a", ip: "192.168.61.167"} in network mk-old-k8s-version-696770: {Iface:virbr3 ExpiryTime:2024-01-16 04:43:38 +0000 UTC Type:0 Mac:52:54:00:37:20:1a Iaid: IPaddr:192.168.61.167 Prefix:24 Hostname:old-k8s-version-696770 Clientid:01:52:54:00:37:20:1a}
	I0116 03:43:47.513242  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | skip adding static IP to network mk-old-k8s-version-696770 - found existing host DHCP lease matching {name: "old-k8s-version-696770", mac: "52:54:00:37:20:1a", ip: "192.168.61.167"}
	I0116 03:43:47.513259  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | Getting to WaitForSSH function...
	I0116 03:43:47.515438  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:47.515887  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:20:1a", ip: ""} in network mk-old-k8s-version-696770: {Iface:virbr3 ExpiryTime:2024-01-16 04:43:38 +0000 UTC Type:0 Mac:52:54:00:37:20:1a Iaid: IPaddr:192.168.61.167 Prefix:24 Hostname:old-k8s-version-696770 Clientid:01:52:54:00:37:20:1a}
	I0116 03:43:47.515928  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined IP address 192.168.61.167 and MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:47.516089  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | Using SSH client type: external
	I0116 03:43:47.516124  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | Using SSH private key: /home/jenkins/minikube-integration/17965-468241/.minikube/machines/old-k8s-version-696770/id_rsa (-rw-------)
	I0116 03:43:47.516160  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.167 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17965-468241/.minikube/machines/old-k8s-version-696770/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0116 03:43:47.516182  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | About to run SSH command:
	I0116 03:43:47.516203  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | exit 0
	I0116 03:43:47.608193  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | SSH cmd err, output: <nil>: 
	I0116 03:43:47.608599  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetConfigRaw
	I0116 03:43:47.609195  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetIP
	I0116 03:43:47.611633  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:47.612018  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:20:1a", ip: ""} in network mk-old-k8s-version-696770: {Iface:virbr3 ExpiryTime:2024-01-16 04:43:38 +0000 UTC Type:0 Mac:52:54:00:37:20:1a Iaid: IPaddr:192.168.61.167 Prefix:24 Hostname:old-k8s-version-696770 Clientid:01:52:54:00:37:20:1a}
	I0116 03:43:47.612068  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined IP address 192.168.61.167 and MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:47.612355  507510 profile.go:148] Saving config to /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/old-k8s-version-696770/config.json ...
	I0116 03:43:47.612601  507510 machine.go:88] provisioning docker machine ...
	I0116 03:43:47.612628  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .DriverName
	I0116 03:43:47.612872  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetMachineName
	I0116 03:43:47.613047  507510 buildroot.go:166] provisioning hostname "old-k8s-version-696770"
	I0116 03:43:47.613068  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetMachineName
	I0116 03:43:47.613195  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHHostname
	I0116 03:43:47.615457  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:47.615901  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:20:1a", ip: ""} in network mk-old-k8s-version-696770: {Iface:virbr3 ExpiryTime:2024-01-16 04:43:38 +0000 UTC Type:0 Mac:52:54:00:37:20:1a Iaid: IPaddr:192.168.61.167 Prefix:24 Hostname:old-k8s-version-696770 Clientid:01:52:54:00:37:20:1a}
	I0116 03:43:47.615928  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined IP address 192.168.61.167 and MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:47.616111  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHPort
	I0116 03:43:47.616292  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHKeyPath
	I0116 03:43:47.616489  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHKeyPath
	I0116 03:43:47.616687  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHUsername
	I0116 03:43:47.616889  507510 main.go:141] libmachine: Using SSH client type: native
	I0116 03:43:47.617280  507510 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.61.167 22 <nil> <nil>}
	I0116 03:43:47.617297  507510 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-696770 && echo "old-k8s-version-696770" | sudo tee /etc/hostname
	I0116 03:43:47.745448  507510 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-696770
	
	I0116 03:43:47.745482  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHHostname
	I0116 03:43:47.748812  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:47.749135  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:20:1a", ip: ""} in network mk-old-k8s-version-696770: {Iface:virbr3 ExpiryTime:2024-01-16 04:43:38 +0000 UTC Type:0 Mac:52:54:00:37:20:1a Iaid: IPaddr:192.168.61.167 Prefix:24 Hostname:old-k8s-version-696770 Clientid:01:52:54:00:37:20:1a}
	I0116 03:43:47.749171  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined IP address 192.168.61.167 and MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:47.749296  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHPort
	I0116 03:43:47.749525  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHKeyPath
	I0116 03:43:47.749715  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHKeyPath
	I0116 03:43:47.749872  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHUsername
	I0116 03:43:47.750019  507510 main.go:141] libmachine: Using SSH client type: native
	I0116 03:43:47.750339  507510 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.61.167 22 <nil> <nil>}
	I0116 03:43:47.750357  507510 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-696770' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-696770/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-696770' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0116 03:43:47.876917  507510 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0116 03:43:47.876957  507510 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17965-468241/.minikube CaCertPath:/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17965-468241/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17965-468241/.minikube}
	I0116 03:43:47.877011  507510 buildroot.go:174] setting up certificates
	I0116 03:43:47.877026  507510 provision.go:83] configureAuth start
	I0116 03:43:47.877041  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetMachineName
	I0116 03:43:47.877378  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetIP
	I0116 03:43:47.880453  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:47.880836  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:20:1a", ip: ""} in network mk-old-k8s-version-696770: {Iface:virbr3 ExpiryTime:2024-01-16 04:43:38 +0000 UTC Type:0 Mac:52:54:00:37:20:1a Iaid: IPaddr:192.168.61.167 Prefix:24 Hostname:old-k8s-version-696770 Clientid:01:52:54:00:37:20:1a}
	I0116 03:43:47.880869  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined IP address 192.168.61.167 and MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:47.881010  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHHostname
	I0116 03:43:47.883053  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:47.883415  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:20:1a", ip: ""} in network mk-old-k8s-version-696770: {Iface:virbr3 ExpiryTime:2024-01-16 04:43:38 +0000 UTC Type:0 Mac:52:54:00:37:20:1a Iaid: IPaddr:192.168.61.167 Prefix:24 Hostname:old-k8s-version-696770 Clientid:01:52:54:00:37:20:1a}
	I0116 03:43:47.883448  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined IP address 192.168.61.167 and MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:47.883635  507510 provision.go:138] copyHostCerts
	I0116 03:43:47.883706  507510 exec_runner.go:144] found /home/jenkins/minikube-integration/17965-468241/.minikube/ca.pem, removing ...
	I0116 03:43:47.883717  507510 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17965-468241/.minikube/ca.pem
	I0116 03:43:47.883778  507510 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17965-468241/.minikube/ca.pem (1078 bytes)
	I0116 03:43:47.883864  507510 exec_runner.go:144] found /home/jenkins/minikube-integration/17965-468241/.minikube/cert.pem, removing ...
	I0116 03:43:47.883871  507510 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17965-468241/.minikube/cert.pem
	I0116 03:43:47.883893  507510 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17965-468241/.minikube/cert.pem (1123 bytes)
	I0116 03:43:47.883943  507510 exec_runner.go:144] found /home/jenkins/minikube-integration/17965-468241/.minikube/key.pem, removing ...
	I0116 03:43:47.883950  507510 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17965-468241/.minikube/key.pem
	I0116 03:43:47.883965  507510 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17965-468241/.minikube/key.pem (1679 bytes)
	I0116 03:43:47.884010  507510 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17965-468241/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-696770 san=[192.168.61.167 192.168.61.167 localhost 127.0.0.1 minikube old-k8s-version-696770]
	I0116 03:43:47.946258  507510 provision.go:172] copyRemoteCerts
	I0116 03:43:47.946327  507510 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0116 03:43:47.946354  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHHostname
	I0116 03:43:47.949417  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:47.949750  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:20:1a", ip: ""} in network mk-old-k8s-version-696770: {Iface:virbr3 ExpiryTime:2024-01-16 04:43:38 +0000 UTC Type:0 Mac:52:54:00:37:20:1a Iaid: IPaddr:192.168.61.167 Prefix:24 Hostname:old-k8s-version-696770 Clientid:01:52:54:00:37:20:1a}
	I0116 03:43:47.949784  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined IP address 192.168.61.167 and MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:47.949941  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHPort
	I0116 03:43:47.950180  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHKeyPath
	I0116 03:43:47.950333  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHUsername
	I0116 03:43:47.950478  507510 sshutil.go:53] new ssh client: &{IP:192.168.61.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/old-k8s-version-696770/id_rsa Username:docker}
	I0116 03:43:48.042564  507510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0116 03:43:48.066519  507510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0116 03:43:48.090127  507510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0116 03:43:48.113387  507510 provision.go:86] duration metric: configureAuth took 236.343393ms
	I0116 03:43:48.113428  507510 buildroot.go:189] setting minikube options for container-runtime
	I0116 03:43:48.113662  507510 config.go:182] Loaded profile config "old-k8s-version-696770": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0116 03:43:48.113758  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHHostname
	I0116 03:43:48.116735  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:48.117144  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:20:1a", ip: ""} in network mk-old-k8s-version-696770: {Iface:virbr3 ExpiryTime:2024-01-16 04:43:38 +0000 UTC Type:0 Mac:52:54:00:37:20:1a Iaid: IPaddr:192.168.61.167 Prefix:24 Hostname:old-k8s-version-696770 Clientid:01:52:54:00:37:20:1a}
	I0116 03:43:48.117187  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined IP address 192.168.61.167 and MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:48.117328  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHPort
	I0116 03:43:48.117529  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHKeyPath
	I0116 03:43:48.117725  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHKeyPath
	I0116 03:43:48.117892  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHUsername
	I0116 03:43:48.118118  507510 main.go:141] libmachine: Using SSH client type: native
	I0116 03:43:48.118427  507510 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.61.167 22 <nil> <nil>}
	I0116 03:43:48.118450  507510 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0116 03:43:48.458094  507510 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0116 03:43:48.458129  507510 machine.go:91] provisioned docker machine in 845.51167ms
	I0116 03:43:48.458141  507510 start.go:300] post-start starting for "old-k8s-version-696770" (driver="kvm2")
	I0116 03:43:48.458153  507510 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0116 03:43:48.458172  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .DriverName
	I0116 03:43:48.458616  507510 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0116 03:43:48.458650  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHHostname
	I0116 03:43:48.461476  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:48.461858  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:20:1a", ip: ""} in network mk-old-k8s-version-696770: {Iface:virbr3 ExpiryTime:2024-01-16 04:43:38 +0000 UTC Type:0 Mac:52:54:00:37:20:1a Iaid: IPaddr:192.168.61.167 Prefix:24 Hostname:old-k8s-version-696770 Clientid:01:52:54:00:37:20:1a}
	I0116 03:43:48.461908  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined IP address 192.168.61.167 and MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:48.462029  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHPort
	I0116 03:43:48.462272  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHKeyPath
	I0116 03:43:48.462460  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHUsername
	I0116 03:43:48.462643  507510 sshutil.go:53] new ssh client: &{IP:192.168.61.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/old-k8s-version-696770/id_rsa Username:docker}
	I0116 03:43:48.550436  507510 ssh_runner.go:195] Run: cat /etc/os-release
	I0116 03:43:48.555225  507510 info.go:137] Remote host: Buildroot 2021.02.12
	I0116 03:43:48.555261  507510 filesync.go:126] Scanning /home/jenkins/minikube-integration/17965-468241/.minikube/addons for local assets ...
	I0116 03:43:48.555349  507510 filesync.go:126] Scanning /home/jenkins/minikube-integration/17965-468241/.minikube/files for local assets ...
	I0116 03:43:48.555434  507510 filesync.go:149] local asset: /home/jenkins/minikube-integration/17965-468241/.minikube/files/etc/ssl/certs/4754782.pem -> 4754782.pem in /etc/ssl/certs
	I0116 03:43:48.555560  507510 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0116 03:43:48.565598  507510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/files/etc/ssl/certs/4754782.pem --> /etc/ssl/certs/4754782.pem (1708 bytes)
	I0116 03:43:48.588611  507510 start.go:303] post-start completed in 130.45305ms
	I0116 03:43:48.588642  507510 fix.go:56] fixHost completed within 22.411021213s
	I0116 03:43:48.588675  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHHostname
	I0116 03:43:48.591220  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:48.591582  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:20:1a", ip: ""} in network mk-old-k8s-version-696770: {Iface:virbr3 ExpiryTime:2024-01-16 04:43:38 +0000 UTC Type:0 Mac:52:54:00:37:20:1a Iaid: IPaddr:192.168.61.167 Prefix:24 Hostname:old-k8s-version-696770 Clientid:01:52:54:00:37:20:1a}
	I0116 03:43:48.591618  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined IP address 192.168.61.167 and MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:48.591779  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHPort
	I0116 03:43:48.592014  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHKeyPath
	I0116 03:43:48.592216  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHKeyPath
	I0116 03:43:48.592412  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHUsername
	I0116 03:43:48.592567  507510 main.go:141] libmachine: Using SSH client type: native
	I0116 03:43:48.592933  507510 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.61.167 22 <nil> <nil>}
	I0116 03:43:48.592950  507510 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0116 03:43:48.709079  507510 main.go:141] libmachine: SSH cmd err, output: <nil>: 1705376628.651647278
	
	I0116 03:43:48.709103  507510 fix.go:206] guest clock: 1705376628.651647278
	I0116 03:43:48.709111  507510 fix.go:219] Guest: 2024-01-16 03:43:48.651647278 +0000 UTC Remote: 2024-01-16 03:43:48.588648172 +0000 UTC m=+299.078902394 (delta=62.999106ms)
	I0116 03:43:48.709134  507510 fix.go:190] guest clock delta is within tolerance: 62.999106ms
	I0116 03:43:48.709140  507510 start.go:83] releasing machines lock for "old-k8s-version-696770", held for 22.531556099s
	I0116 03:43:48.709169  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .DriverName
	I0116 03:43:48.709519  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetIP
	I0116 03:43:48.712438  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:48.712770  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:20:1a", ip: ""} in network mk-old-k8s-version-696770: {Iface:virbr3 ExpiryTime:2024-01-16 04:43:38 +0000 UTC Type:0 Mac:52:54:00:37:20:1a Iaid: IPaddr:192.168.61.167 Prefix:24 Hostname:old-k8s-version-696770 Clientid:01:52:54:00:37:20:1a}
	I0116 03:43:48.712825  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined IP address 192.168.61.167 and MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:48.712921  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .DriverName
	I0116 03:43:48.713501  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .DriverName
	I0116 03:43:48.713677  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .DriverName
	I0116 03:43:48.713768  507510 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0116 03:43:48.713816  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHHostname
	I0116 03:43:48.713920  507510 ssh_runner.go:195] Run: cat /version.json
	I0116 03:43:48.713951  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHHostname
	I0116 03:43:48.716415  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:48.716697  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:48.716820  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:20:1a", ip: ""} in network mk-old-k8s-version-696770: {Iface:virbr3 ExpiryTime:2024-01-16 04:43:38 +0000 UTC Type:0 Mac:52:54:00:37:20:1a Iaid: IPaddr:192.168.61.167 Prefix:24 Hostname:old-k8s-version-696770 Clientid:01:52:54:00:37:20:1a}
	I0116 03:43:48.716846  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined IP address 192.168.61.167 and MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:48.716995  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHPort
	I0116 03:43:48.717093  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:20:1a", ip: ""} in network mk-old-k8s-version-696770: {Iface:virbr3 ExpiryTime:2024-01-16 04:43:38 +0000 UTC Type:0 Mac:52:54:00:37:20:1a Iaid: IPaddr:192.168.61.167 Prefix:24 Hostname:old-k8s-version-696770 Clientid:01:52:54:00:37:20:1a}
	I0116 03:43:48.717123  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined IP address 192.168.61.167 and MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:48.717394  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHPort
	I0116 03:43:48.717402  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHKeyPath
	I0116 03:43:48.717638  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHKeyPath
	I0116 03:43:48.717650  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHUsername
	I0116 03:43:48.717791  507510 sshutil.go:53] new ssh client: &{IP:192.168.61.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/old-k8s-version-696770/id_rsa Username:docker}
	I0116 03:43:48.717824  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHUsername
	I0116 03:43:48.717956  507510 sshutil.go:53] new ssh client: &{IP:192.168.61.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/old-k8s-version-696770/id_rsa Username:docker}
	I0116 03:43:48.838506  507510 ssh_runner.go:195] Run: systemctl --version
	I0116 03:43:48.845152  507510 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0116 03:43:49.001791  507510 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0116 03:43:49.008474  507510 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0116 03:43:49.008558  507510 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0116 03:43:49.024030  507510 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0116 03:43:49.024087  507510 start.go:475] detecting cgroup driver to use...
	I0116 03:43:49.024164  507510 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0116 03:43:49.038853  507510 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0116 03:43:49.056228  507510 docker.go:217] disabling cri-docker service (if available) ...
	I0116 03:43:49.056308  507510 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0116 03:43:49.071266  507510 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0116 03:43:49.085793  507510 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0116 03:43:49.211294  507510 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0116 03:43:49.338893  507510 docker.go:233] disabling docker service ...
	I0116 03:43:49.338971  507510 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0116 03:43:49.354423  507510 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0116 03:43:49.367355  507510 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0116 03:43:49.483277  507510 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0116 03:43:49.593977  507510 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0116 03:43:49.607374  507510 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0116 03:43:49.626781  507510 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I0116 03:43:49.626846  507510 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 03:43:49.637809  507510 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0116 03:43:49.637892  507510 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 03:43:49.648162  507510 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 03:43:49.658305  507510 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 03:43:49.669557  507510 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
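(The sed edits above pin the pause image to registry.k8s.io/pause:3.1, switch CRI-O's cgroup manager to cgroupfs, and reset conmon_cgroup in /etc/crio/crio.conf.d/02-crio.conf. A rough Go equivalent of the first two substitutions, for illustration only; the logged flow itself shells out to sed over SSH.)

package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	conf := string(data)
	// Same effect as the first two sed substitutions in the log above.
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.1"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	if err := os.WriteFile(path, []byte(conf), 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("updated", path)
}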
	I0116 03:43:49.680190  507510 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0116 03:43:49.689125  507510 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0116 03:43:49.689199  507510 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0116 03:43:49.703247  507510 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0116 03:43:49.713826  507510 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0116 03:43:49.829677  507510 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0116 03:43:50.009393  507510 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0116 03:43:50.009489  507510 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0116 03:43:50.016443  507510 start.go:543] Will wait 60s for crictl version
	I0116 03:43:50.016521  507510 ssh_runner.go:195] Run: which crictl
	I0116 03:43:50.020560  507510 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0116 03:43:50.056652  507510 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0116 03:43:50.056733  507510 ssh_runner.go:195] Run: crio --version
	I0116 03:43:50.104202  507510 ssh_runner.go:195] Run: crio --version
	I0116 03:43:50.150215  507510 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
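(A few lines up, the crio restart is followed by "Will wait 60s for socket path /var/run/crio/crio.sock" and a stat of that path. A generic poll-until-exists loop along those lines looks like the sketch below; the 500ms interval is chosen for the example, this is not minikube's actual code.)

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForPath polls until path exists or the timeout elapses.
func waitForPath(path string, timeout, interval time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
		}
		time.Sleep(interval)
	}
}

func main() {
	if err := waitForPath("/var/run/crio/crio.sock", 60*time.Second, 500*time.Millisecond); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("crio socket is available")
}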
	I0116 03:43:45.761989  507339 api_server.go:166] Checking apiserver status ...
	I0116 03:43:45.762077  507339 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:43:45.776377  507339 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:43:46.262107  507339 api_server.go:166] Checking apiserver status ...
	I0116 03:43:46.262205  507339 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:43:46.274748  507339 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:43:46.761344  507339 api_server.go:166] Checking apiserver status ...
	I0116 03:43:46.761434  507339 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:43:46.773509  507339 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:43:47.261093  507339 api_server.go:166] Checking apiserver status ...
	I0116 03:43:47.261222  507339 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:43:47.272584  507339 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:43:47.761119  507339 api_server.go:166] Checking apiserver status ...
	I0116 03:43:47.761204  507339 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:43:47.773674  507339 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:43:48.261288  507339 api_server.go:166] Checking apiserver status ...
	I0116 03:43:48.261448  507339 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:43:48.273461  507339 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:43:48.762071  507339 api_server.go:166] Checking apiserver status ...
	I0116 03:43:48.762205  507339 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:43:48.778093  507339 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:43:49.261032  507339 api_server.go:166] Checking apiserver status ...
	I0116 03:43:49.261139  507339 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:43:49.273090  507339 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:43:49.761233  507339 api_server.go:166] Checking apiserver status ...
	I0116 03:43:49.761348  507339 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:43:49.773529  507339 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:43:50.261720  507339 api_server.go:166] Checking apiserver status ...
	I0116 03:43:50.261822  507339 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:43:50.277403  507339 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
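(The block above shows process 507339 probing for a kube-apiserver pid with pgrep roughly every half second; each probe fails because no kube-system containers are running yet after the VM restart, and further down the loop ends in "needs reconfigure: apiserver error: context deadline exceeded". A generic poll-with-deadline loop in that shape, for illustration only and not minikube's actual api_server.go code, could look like this.)

package main

import (
	"context"
	"fmt"
	"os"
	"os/exec"
	"strings"
	"time"
)

// waitForAPIServerPID runs pgrep every 500ms until it returns a pid or the
// context deadline expires (which is what surfaces as "context deadline
// exceeded" in the log).
func waitForAPIServerPID(ctx context.Context) (string, error) {
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	for {
		out, err := exec.CommandContext(ctx, "sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err == nil {
			return strings.TrimSpace(string(out)), nil
		}
		select {
		case <-ctx.Done():
			return "", ctx.Err()
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()
	pid, err := waitForAPIServerPID(ctx)
	if err != nil {
		fmt.Fprintln(os.Stderr, "apiserver not detected:", err)
		os.Exit(1)
	}
	fmt.Println("kube-apiserver pid:", pid)
}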
	I0116 03:43:48.735627  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .Start
	I0116 03:43:48.735865  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Ensuring networks are active...
	I0116 03:43:48.736708  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Ensuring network default is active
	I0116 03:43:48.737105  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Ensuring network mk-default-k8s-diff-port-434445 is active
	I0116 03:43:48.737445  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Getting domain xml...
	I0116 03:43:48.738086  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Creating domain...
	I0116 03:43:49.085479  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Waiting to get IP...
	I0116 03:43:49.086513  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:43:49.086907  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | unable to find current IP address of domain default-k8s-diff-port-434445 in network mk-default-k8s-diff-port-434445
	I0116 03:43:49.086993  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | I0116 03:43:49.086879  508579 retry.go:31] will retry after 251.682416ms: waiting for machine to come up
	I0116 03:43:49.340560  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:43:49.341196  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | unable to find current IP address of domain default-k8s-diff-port-434445 in network mk-default-k8s-diff-port-434445
	I0116 03:43:49.341235  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | I0116 03:43:49.341140  508579 retry.go:31] will retry after 288.322607ms: waiting for machine to come up
	I0116 03:43:49.630920  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:43:49.631449  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | unable to find current IP address of domain default-k8s-diff-port-434445 in network mk-default-k8s-diff-port-434445
	I0116 03:43:49.631478  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | I0116 03:43:49.631404  508579 retry.go:31] will retry after 305.730946ms: waiting for machine to come up
	I0116 03:43:49.938846  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:43:49.939357  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | unable to find current IP address of domain default-k8s-diff-port-434445 in network mk-default-k8s-diff-port-434445
	I0116 03:43:49.939381  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | I0116 03:43:49.939307  508579 retry.go:31] will retry after 431.952943ms: waiting for machine to come up
	I0116 03:43:50.372921  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:43:50.373426  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | unable to find current IP address of domain default-k8s-diff-port-434445 in network mk-default-k8s-diff-port-434445
	I0116 03:43:50.373453  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | I0116 03:43:50.373368  508579 retry.go:31] will retry after 557.336026ms: waiting for machine to come up
	I0116 03:43:50.932300  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:43:50.932902  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | unable to find current IP address of domain default-k8s-diff-port-434445 in network mk-default-k8s-diff-port-434445
	I0116 03:43:50.932933  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | I0116 03:43:50.932837  508579 retry.go:31] will retry after 652.034162ms: waiting for machine to come up
	I0116 03:43:51.586765  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:43:51.587332  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | unable to find current IP address of domain default-k8s-diff-port-434445 in network mk-default-k8s-diff-port-434445
	I0116 03:43:51.587365  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | I0116 03:43:51.587290  508579 retry.go:31] will retry after 1.078418867s: waiting for machine to come up
	I0116 03:43:50.151763  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetIP
	I0116 03:43:50.154861  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:50.155283  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:20:1a", ip: ""} in network mk-old-k8s-version-696770: {Iface:virbr3 ExpiryTime:2024-01-16 04:43:38 +0000 UTC Type:0 Mac:52:54:00:37:20:1a Iaid: IPaddr:192.168.61.167 Prefix:24 Hostname:old-k8s-version-696770 Clientid:01:52:54:00:37:20:1a}
	I0116 03:43:50.155331  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined IP address 192.168.61.167 and MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:43:50.155536  507510 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0116 03:43:50.160159  507510 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 03:43:50.173354  507510 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0116 03:43:50.173416  507510 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 03:43:50.227220  507510 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0116 03:43:50.227308  507510 ssh_runner.go:195] Run: which lz4
	I0116 03:43:50.231565  507510 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0116 03:43:50.236133  507510 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0116 03:43:50.236169  507510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I0116 03:43:52.243584  507510 crio.go:444] Took 2.012049 seconds to copy over tarball
	I0116 03:43:52.243686  507510 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0116 03:43:50.761232  507339 api_server.go:166] Checking apiserver status ...
	I0116 03:43:50.761323  507339 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:43:50.777877  507339 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:43:51.261357  507339 api_server.go:166] Checking apiserver status ...
	I0116 03:43:51.261444  507339 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:43:51.280624  507339 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:43:51.761117  507339 api_server.go:166] Checking apiserver status ...
	I0116 03:43:51.761225  507339 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:43:51.775076  507339 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:43:52.261857  507339 api_server.go:166] Checking apiserver status ...
	I0116 03:43:52.261948  507339 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:43:52.279844  507339 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:43:52.761400  507339 api_server.go:166] Checking apiserver status ...
	I0116 03:43:52.761493  507339 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:43:52.773869  507339 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:43:53.261155  507339 api_server.go:166] Checking apiserver status ...
	I0116 03:43:53.261263  507339 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:43:53.273774  507339 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:43:53.761370  507339 api_server.go:166] Checking apiserver status ...
	I0116 03:43:53.761500  507339 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:43:53.773900  507339 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:43:54.262012  507339 api_server.go:166] Checking apiserver status ...
	I0116 03:43:54.262134  507339 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:43:54.277928  507339 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:43:54.761492  507339 api_server.go:166] Checking apiserver status ...
	I0116 03:43:54.761642  507339 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:43:54.774531  507339 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:43:55.261302  507339 api_server.go:166] Checking apiserver status ...
	I0116 03:43:55.261395  507339 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:43:55.274178  507339 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:43:55.274226  507339 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0116 03:43:55.274272  507339 kubeadm.go:1135] stopping kube-system containers ...
	I0116 03:43:55.274293  507339 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0116 03:43:55.274360  507339 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0116 03:43:55.321847  507339 cri.go:89] found id: ""
	I0116 03:43:55.321943  507339 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0116 03:43:55.339190  507339 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0116 03:43:55.348548  507339 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0116 03:43:55.348637  507339 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0116 03:43:55.358316  507339 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0116 03:43:55.358345  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:43:55.492932  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
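Because none of the kubeconfig files under /etc/kubernetes exist, the stale-config cleanup is skipped and the control plane is rebuilt by running individual kubeadm init phases against the freshly copied kubeadm.yaml; the certs and kubeconfig phases appear here, and kubelet-start, control-plane and etcd follow a few entries later. The sketch below simply prints that phase sequence in order, using the binary and config paths shown in the log, since the commands only make sense inside the guest.

package main

import "fmt"

func main() {
	const (
		binDir = "/var/lib/minikube/binaries/v1.29.0-rc.2"
		cfg    = "/var/tmp/minikube/kubeadm.yaml"
	)
	// Phases in the order they appear in this log; printed rather than executed.
	phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
	for _, phase := range phases {
		fmt.Printf("sudo env PATH=\"%s:$PATH\" kubeadm init phase %s --config %s\n", binDir, phase, cfg)
	}
}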
	I0116 03:43:52.667882  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:43:52.668380  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | unable to find current IP address of domain default-k8s-diff-port-434445 in network mk-default-k8s-diff-port-434445
	I0116 03:43:52.668415  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | I0116 03:43:52.668311  508579 retry.go:31] will retry after 1.052441827s: waiting for machine to come up
	I0116 03:43:53.722859  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:43:53.723473  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | unable to find current IP address of domain default-k8s-diff-port-434445 in network mk-default-k8s-diff-port-434445
	I0116 03:43:53.723503  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | I0116 03:43:53.723429  508579 retry.go:31] will retry after 1.233090848s: waiting for machine to come up
	I0116 03:43:54.958519  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:43:54.958990  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | unable to find current IP address of domain default-k8s-diff-port-434445 in network mk-default-k8s-diff-port-434445
	I0116 03:43:54.959014  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | I0116 03:43:54.958934  508579 retry.go:31] will retry after 2.038449182s: waiting for machine to come up
	I0116 03:43:55.109598  507510 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.865872133s)
	I0116 03:43:55.109637  507510 crio.go:451] Took 2.866019 seconds to extract the tarball
	I0116 03:43:55.109652  507510 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0116 03:43:55.150763  507510 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 03:43:55.206497  507510 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0116 03:43:55.206525  507510 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0116 03:43:55.206597  507510 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 03:43:55.206619  507510 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0116 03:43:55.206660  507510 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0116 03:43:55.206682  507510 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0116 03:43:55.206601  507510 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0116 03:43:55.206622  507510 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0116 03:43:55.206790  507510 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0116 03:43:55.206801  507510 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0116 03:43:55.208228  507510 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 03:43:55.208246  507510 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0116 03:43:55.208245  507510 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0116 03:43:55.208247  507510 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0116 03:43:55.208291  507510 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0116 03:43:55.208295  507510 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0116 03:43:55.208291  507510 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0116 03:43:55.208610  507510 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0116 03:43:55.364082  507510 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0116 03:43:55.364096  507510 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0116 03:43:55.367820  507510 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0116 03:43:55.371639  507510 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0116 03:43:55.379423  507510 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0116 03:43:55.383569  507510 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0116 03:43:55.385854  507510 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0116 03:43:55.522241  507510 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 03:43:55.539971  507510 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0116 03:43:55.540031  507510 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0116 03:43:55.540113  507510 ssh_runner.go:195] Run: which crictl
	I0116 03:43:55.542332  507510 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0116 03:43:55.542389  507510 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0116 03:43:55.542441  507510 ssh_runner.go:195] Run: which crictl
	I0116 03:43:55.565552  507510 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0116 03:43:55.565679  507510 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0116 03:43:55.565761  507510 ssh_runner.go:195] Run: which crictl
	I0116 03:43:55.583839  507510 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0116 03:43:55.583890  507510 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0116 03:43:55.583942  507510 ssh_runner.go:195] Run: which crictl
	I0116 03:43:55.583847  507510 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0116 03:43:55.584023  507510 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0116 03:43:55.584073  507510 ssh_runner.go:195] Run: which crictl
	I0116 03:43:55.596487  507510 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0116 03:43:55.596555  507510 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I0116 03:43:55.596619  507510 ssh_runner.go:195] Run: which crictl
	I0116 03:43:55.605042  507510 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0116 03:43:55.605105  507510 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I0116 03:43:55.605162  507510 ssh_runner.go:195] Run: which crictl
	I0116 03:43:55.740186  507510 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0116 03:43:55.740225  507510 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I0116 03:43:55.740283  507510 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0116 03:43:55.740334  507510 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0116 03:43:55.740384  507510 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I0116 03:43:55.740441  507510 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I0116 03:43:55.740450  507510 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I0116 03:43:55.900542  507510 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17965-468241/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0116 03:43:55.906506  507510 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17965-468241/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0116 03:43:55.914158  507510 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17965-468241/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0116 03:43:55.914171  507510 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17965-468241/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0116 03:43:55.926953  507510 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17965-468241/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0116 03:43:55.927034  507510 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17965-468241/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0116 03:43:55.927137  507510 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17965-468241/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0116 03:43:55.927186  507510 cache_images.go:92] LoadImages completed in 720.646435ms
	W0116 03:43:55.927280  507510 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17965-468241/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0: no such file or directory
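LoadImages works image by image: each reference is looked up in the local Docker daemon first (the "daemon lookup ... No such image" lines), then inspected on the guest with podman to see whether it is already present under the expected ID; anything missing is removed with crictl and re-loaded from the on-disk cache, which in this run does not exist, hence the warning above. A small sketch of the "which images still need transferring" step follows; it shells out to podman the same way the log does, and the expected IDs are the two taken from the "does not exist at hash" lines above.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// needsTransfer reports whether the image is absent, or present under a
// different ID than expected, loosely mirroring the check in the log.
func needsTransfer(image, wantID string) bool {
	out, err := exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", image).Output()
	if err != nil {
		return true // not present at all
	}
	return strings.TrimSpace(string(out)) != wantID
}

func main() {
	// Expected IDs copied from the log entries above; normally they come from the cache metadata.
	want := map[string]string{
		"registry.k8s.io/pause:3.1":     "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e",
		"registry.k8s.io/coredns:1.6.2": "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b",
	}
	for img, id := range want {
		if needsTransfer(img, id) {
			fmt.Println(img, "needs transfer")
		}
	}
}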
	I0116 03:43:55.927362  507510 ssh_runner.go:195] Run: crio config
	I0116 03:43:55.989408  507510 cni.go:84] Creating CNI manager for ""
	I0116 03:43:55.989440  507510 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 03:43:55.989468  507510 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0116 03:43:55.989495  507510 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.167 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-696770 NodeName:old-k8s-version-696770 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.167"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.167 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0116 03:43:55.989657  507510 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.167
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-696770"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.167
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.167"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-696770
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.61.167:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
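The generated kubeadm.yaml above is four YAML documents joined with ---: an InitConfiguration and a ClusterConfiguration in kubeadm.k8s.io/v1beta1 (the API version matching Kubernetes v1.16), then a KubeletConfiguration and a KubeProxyConfiguration. Below is a quick, dependency-free sketch that splits such a file into its documents and reports each kind, which can help when checking what a profile actually rendered; the file name is illustrative.

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	data, err := os.ReadFile("kubeadm.yaml") // e.g. a copy of /var/tmp/minikube/kubeadm.yaml
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	for i, doc := range strings.Split(string(data), "\n---\n") {
		kind := "(unknown)"
		for _, line := range strings.Split(doc, "\n") {
			trimmed := strings.TrimSpace(line)
			if strings.HasPrefix(trimmed, "kind:") {
				kind = strings.TrimSpace(strings.TrimPrefix(trimmed, "kind:"))
				break
			}
		}
		fmt.Printf("document %d: %s\n", i+1, kind)
	}
}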
	I0116 03:43:55.989757  507510 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-696770 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.167
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-696770 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0116 03:43:55.989819  507510 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0116 03:43:55.999676  507510 binaries.go:44] Found k8s binaries, skipping transfer
	I0116 03:43:55.999766  507510 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0116 03:43:56.009179  507510 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0116 03:43:56.028479  507510 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0116 03:43:56.045979  507510 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2181 bytes)
	I0116 03:43:56.067179  507510 ssh_runner.go:195] Run: grep 192.168.61.167	control-plane.minikube.internal$ /etc/hosts
	I0116 03:43:56.071532  507510 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.167	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 03:43:56.085960  507510 certs.go:56] Setting up /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/old-k8s-version-696770 for IP: 192.168.61.167
	I0116 03:43:56.086006  507510 certs.go:190] acquiring lock for shared ca certs: {Name:mkab5aeea3e0ed69f3592dc8f418e07c6075b130 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:43:56.086216  507510 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17965-468241/.minikube/ca.key
	I0116 03:43:56.086293  507510 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17965-468241/.minikube/proxy-client-ca.key
	I0116 03:43:56.086385  507510 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/old-k8s-version-696770/client.key
	I0116 03:43:56.086447  507510 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/old-k8s-version-696770/apiserver.key.1a2d2382
	I0116 03:43:56.086480  507510 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/old-k8s-version-696770/proxy-client.key
	I0116 03:43:56.086668  507510 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/475478.pem (1338 bytes)
	W0116 03:43:56.086711  507510 certs.go:433] ignoring /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/475478_empty.pem, impossibly tiny 0 bytes
	I0116 03:43:56.086721  507510 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca-key.pem (1679 bytes)
	I0116 03:43:56.086746  507510 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem (1078 bytes)
	I0116 03:43:56.086772  507510 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/cert.pem (1123 bytes)
	I0116 03:43:56.086795  507510 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/key.pem (1679 bytes)
	I0116 03:43:56.086833  507510 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17965-468241/.minikube/files/etc/ssl/certs/4754782.pem (1708 bytes)
	I0116 03:43:56.087557  507510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/old-k8s-version-696770/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0116 03:43:56.118148  507510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/old-k8s-version-696770/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0116 03:43:56.146632  507510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/old-k8s-version-696770/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0116 03:43:56.177146  507510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/old-k8s-version-696770/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0116 03:43:56.208800  507510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0116 03:43:56.237097  507510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0116 03:43:56.264559  507510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0116 03:43:56.294383  507510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0116 03:43:56.323966  507510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/certs/475478.pem --> /usr/share/ca-certificates/475478.pem (1338 bytes)
	I0116 03:43:56.350120  507510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/files/etc/ssl/certs/4754782.pem --> /usr/share/ca-certificates/4754782.pem (1708 bytes)
	I0116 03:43:56.379523  507510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0116 03:43:56.406312  507510 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0116 03:43:56.426149  507510 ssh_runner.go:195] Run: openssl version
	I0116 03:43:56.432150  507510 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0116 03:43:56.443200  507510 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0116 03:43:56.448268  507510 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 16 02:35 /usr/share/ca-certificates/minikubeCA.pem
	I0116 03:43:56.448343  507510 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0116 03:43:56.454227  507510 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0116 03:43:56.464467  507510 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/475478.pem && ln -fs /usr/share/ca-certificates/475478.pem /etc/ssl/certs/475478.pem"
	I0116 03:43:56.474769  507510 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/475478.pem
	I0116 03:43:56.480143  507510 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 16 02:44 /usr/share/ca-certificates/475478.pem
	I0116 03:43:56.480228  507510 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/475478.pem
	I0116 03:43:56.487996  507510 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/475478.pem /etc/ssl/certs/51391683.0"
	I0116 03:43:56.501097  507510 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4754782.pem && ln -fs /usr/share/ca-certificates/4754782.pem /etc/ssl/certs/4754782.pem"
	I0116 03:43:56.513266  507510 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4754782.pem
	I0116 03:43:56.518806  507510 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 16 02:44 /usr/share/ca-certificates/4754782.pem
	I0116 03:43:56.518891  507510 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4754782.pem
	I0116 03:43:56.527891  507510 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4754782.pem /etc/ssl/certs/3ec20f2e.0"
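Each extra certificate copied into /usr/share/ca-certificates is also linked into /etc/ssl/certs under its OpenSSL subject hash (the b5213941.0, 51391683.0 and 3ec20f2e.0 names above), which is how OpenSSL-based clients discover trusted CAs. A small sketch of that hash-and-symlink step, shelling out to openssl for the hash exactly as the log does; the paths in main are illustrative.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// linkBySubjectHash computes the OpenSSL subject hash of certPath and links
// it into certsDir as <hash>.0 so OpenSSL picks it up as a trusted CA.
func linkBySubjectHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := certsDir + "/" + hash + ".0"
	_ = os.Remove(link) // replace any stale link, like `ln -fs`
	return os.Symlink(certPath, link)
}

func main() {
	// Illustrative local paths; the log performs the same steps inside the guest.
	if err := linkBySubjectHash("minikubeCA.pem", "."); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}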
	I0116 03:43:56.538719  507510 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0116 03:43:56.544298  507510 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0116 03:43:56.551048  507510 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0116 03:43:56.557847  507510 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0116 03:43:56.567757  507510 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0116 03:43:56.575977  507510 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0116 03:43:56.584514  507510 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
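Before the existing control-plane certificates are reused, each one is checked with openssl x509 -checkend 86400, that is, "will this certificate still be valid 24 hours from now". The same test can be done without shelling out; here is a minimal pure-Go equivalent, with an illustrative file name.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// validForAnotherDay reports whether the first certificate in the PEM file
// is still valid 24 hours from now, like `openssl x509 -checkend 86400`.
func validForAnotherDay(path string) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(24 * time.Hour).Before(cert.NotAfter), nil
}

func main() {
	ok, err := validForAnotherDay("apiserver.crt") // illustrative path
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("valid for another 24h:", ok)
}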
	I0116 03:43:56.593191  507510 kubeadm.go:404] StartCluster: {Name:old-k8s-version-696770 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-696770 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.167 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 03:43:56.593333  507510 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0116 03:43:56.593408  507510 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0116 03:43:56.653791  507510 cri.go:89] found id: ""
	I0116 03:43:56.653899  507510 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0116 03:43:56.667037  507510 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0116 03:43:56.667078  507510 kubeadm.go:636] restartCluster start
	I0116 03:43:56.667164  507510 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0116 03:43:56.679734  507510 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:43:56.681241  507510 kubeconfig.go:92] found "old-k8s-version-696770" server: "https://192.168.61.167:8443"
	I0116 03:43:56.683942  507510 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0116 03:43:56.696409  507510 api_server.go:166] Checking apiserver status ...
	I0116 03:43:56.696507  507510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:43:56.713120  507510 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:43:57.196652  507510 api_server.go:166] Checking apiserver status ...
	I0116 03:43:57.196826  507510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:43:57.213992  507510 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:43:57.697096  507510 api_server.go:166] Checking apiserver status ...
	I0116 03:43:57.697197  507510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:43:57.709671  507510 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:43:58.197291  507510 api_server.go:166] Checking apiserver status ...
	I0116 03:43:58.197401  507510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:43:58.214351  507510 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:43:58.696893  507510 api_server.go:166] Checking apiserver status ...
	I0116 03:43:58.697036  507510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:43:58.714549  507510 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:43:59.197173  507510 api_server.go:166] Checking apiserver status ...
	I0116 03:43:59.197304  507510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:43:59.213885  507510 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
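The repeated "Checking apiserver status" entries are a poll: roughly every half second the runner looks for a kube-apiserver process with pgrep, and when the overall deadline passes the caller concludes the apiserver is not running (the "needs reconfigure: apiserver error: context deadline exceeded" line earlier in this log). A condensed sketch of that poll-until-deadline pattern; the pgrep expression is the one from the log, while the 30-second timeout is illustrative.

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServer polls pgrep until a kube-apiserver process shows up or
// the context deadline expires.
func waitForAPIServer(ctx context.Context) error {
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	for {
		out, err := exec.CommandContext(ctx, "sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err == nil && len(out) > 0 {
			fmt.Printf("apiserver pid: %s", out)
			return nil
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("apiserver error: %w", ctx.Err()) // "context deadline exceeded"
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()
	if err := waitForAPIServer(ctx); err != nil {
		fmt.Println(err)
	}
}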
	I0116 03:43:56.773238  507339 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.280261968s)
	I0116 03:43:56.773267  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:43:57.046716  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:43:57.123831  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:43:57.221179  507339 api_server.go:52] waiting for apiserver process to appear ...
	I0116 03:43:57.221300  507339 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:43:57.721940  507339 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:43:58.222437  507339 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:43:58.722256  507339 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:43:59.222191  507339 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:43:59.721451  507339 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:43:59.753520  507339 api_server.go:72] duration metric: took 2.532341035s to wait for apiserver process to appear ...
	I0116 03:43:59.753556  507339 api_server.go:88] waiting for apiserver healthz status ...
	I0116 03:43:59.753601  507339 api_server.go:253] Checking apiserver healthz at https://192.168.39.103:8443/healthz ...
	I0116 03:43:59.754176  507339 api_server.go:269] stopped: https://192.168.39.103:8443/healthz: Get "https://192.168.39.103:8443/healthz": dial tcp 192.168.39.103:8443: connect: connection refused
	I0116 03:44:00.253773  507339 api_server.go:253] Checking apiserver healthz at https://192.168.39.103:8443/healthz ...
	I0116 03:43:57.000501  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:43:57.070966  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | unable to find current IP address of domain default-k8s-diff-port-434445 in network mk-default-k8s-diff-port-434445
	I0116 03:43:57.071015  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | I0116 03:43:57.000987  508579 retry.go:31] will retry after 1.963105502s: waiting for machine to come up
	I0116 03:43:58.966528  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:43:58.967131  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | unable to find current IP address of domain default-k8s-diff-port-434445 in network mk-default-k8s-diff-port-434445
	I0116 03:43:58.967173  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | I0116 03:43:58.967069  508579 retry.go:31] will retry after 2.871455928s: waiting for machine to come up
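The default-k8s-diff-port entries above are the machine-bringup wait: libvirt reports no DHCP lease for the domain yet, so the lookup is retried with delays that grow and carry some jitter (1.05s, 1.23s, 2.04s, 2.87s and so on). The sketch below only illustrates that grow-and-jitter retry shape; the lookup function and the exact backoff policy are stand-ins, not the real implementation.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP stands in for asking libvirt/DHCP for the domain's current lease;
// it fails until the machine has actually come up.
func lookupIP() (string, error) {
	return "", errors.New("unable to find current IP address of domain")
}

func main() {
	delay := time.Second
	for attempt := 1; attempt <= 5; attempt++ {
		ip, err := lookupIP()
		if err == nil {
			fmt.Println("machine is up at", ip)
			return
		}
		// Grow the wait a little each time and add jitter, which is why the
		// logged "will retry after" durations are uneven.
		wait := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("attempt %d: %v, will retry after %v\n", attempt, err, wait)
		time.Sleep(wait)
		delay = delay * 3 / 2
	}
	fmt.Println("gave up waiting for machine to come up")
}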
	I0116 03:43:59.697215  507510 api_server.go:166] Checking apiserver status ...
	I0116 03:43:59.697303  507510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:43:59.713992  507510 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:00.196535  507510 api_server.go:166] Checking apiserver status ...
	I0116 03:44:00.196649  507510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:00.212663  507510 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:00.697276  507510 api_server.go:166] Checking apiserver status ...
	I0116 03:44:00.697390  507510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:00.714622  507510 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:01.197125  507510 api_server.go:166] Checking apiserver status ...
	I0116 03:44:01.197242  507510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:01.214976  507510 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:01.696506  507510 api_server.go:166] Checking apiserver status ...
	I0116 03:44:01.696612  507510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:01.708204  507510 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:02.197402  507510 api_server.go:166] Checking apiserver status ...
	I0116 03:44:02.197519  507510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:02.211062  507510 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:02.697230  507510 api_server.go:166] Checking apiserver status ...
	I0116 03:44:02.697358  507510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:02.710340  507510 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:03.196949  507510 api_server.go:166] Checking apiserver status ...
	I0116 03:44:03.197047  507510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:03.213169  507510 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:03.696657  507510 api_server.go:166] Checking apiserver status ...
	I0116 03:44:03.696793  507510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:03.709422  507510 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:04.196970  507510 api_server.go:166] Checking apiserver status ...
	I0116 03:44:04.197083  507510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:04.209280  507510 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:03.473725  507339 api_server.go:279] https://192.168.39.103:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0116 03:44:03.473764  507339 api_server.go:103] status: https://192.168.39.103:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0116 03:44:03.473784  507339 api_server.go:253] Checking apiserver healthz at https://192.168.39.103:8443/healthz ...
	I0116 03:44:03.531825  507339 api_server.go:279] https://192.168.39.103:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0116 03:44:03.531873  507339 api_server.go:103] status: https://192.168.39.103:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0116 03:44:03.754148  507339 api_server.go:253] Checking apiserver healthz at https://192.168.39.103:8443/healthz ...
	I0116 03:44:03.759138  507339 api_server.go:279] https://192.168.39.103:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0116 03:44:03.759171  507339 api_server.go:103] status: https://192.168.39.103:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0116 03:44:04.254321  507339 api_server.go:253] Checking apiserver healthz at https://192.168.39.103:8443/healthz ...
	I0116 03:44:04.259317  507339 api_server.go:279] https://192.168.39.103:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0116 03:44:04.259350  507339 api_server.go:103] status: https://192.168.39.103:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0116 03:44:04.753890  507339 api_server.go:253] Checking apiserver healthz at https://192.168.39.103:8443/healthz ...
	I0116 03:44:04.759714  507339 api_server.go:279] https://192.168.39.103:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0116 03:44:04.759747  507339 api_server.go:103] status: https://192.168.39.103:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0116 03:44:05.254582  507339 api_server.go:253] Checking apiserver healthz at https://192.168.39.103:8443/healthz ...
	I0116 03:44:05.264904  507339 api_server.go:279] https://192.168.39.103:8443/healthz returned 200:
	ok
	I0116 03:44:05.283700  507339 api_server.go:141] control plane version: v1.29.0-rc.2
	I0116 03:44:05.283737  507339 api_server.go:131] duration metric: took 5.53017208s to wait for apiserver health ...
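The retry loop above keeps issuing GET requests against https://192.168.39.103:8443/healthz until the apiserver stops returning 500. Below is a minimal Go sketch of that polling pattern, with the URL and a generous timeout taken from the log; it is an illustration, not minikube's actual api_server.go.

// Illustrative only: poll an apiserver /healthz endpoint until it returns 200
// or the timeout expires.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	// The test cluster uses a self-signed CA, so certificate verification is
	// skipped here purely to keep the sketch self-contained.
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz returned 200: control plane is reachable
			}
		}
		time.Sleep(500 * time.Millisecond) // retry interval, similar to the log above
	}
	return fmt.Errorf("apiserver at %s never became healthy", url)
}

func main() {
	if err := waitForHealthz("https://192.168.39.103:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}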
	I0116 03:44:05.283749  507339 cni.go:84] Creating CNI manager for ""
	I0116 03:44:05.283757  507339 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 03:44:05.285715  507339 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0116 03:44:05.287393  507339 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0116 03:44:05.327883  507339 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
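The step above copies a 457-byte bridge conflist to /etc/cni/net.d/1-k8s.conflist. As an illustration only, the sketch below writes a generic bridge-plus-portmap conflist of the kind the upstream bridge and host-local plugins accept; the subnet and exact contents are assumptions, not the file minikube generated in this run.

// Illustrative only: write a generic bridge CNI conflist to a local path.
package main

import "os"

const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}`

func main() {
	// Analogous in spirit to the "scp memory --> /etc/cni/net.d/1-k8s.conflist"
	// step in the log; written to the working directory to keep the sketch runnable.
	if err := os.WriteFile("1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
		panic(err)
	}
}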
	I0116 03:44:05.371856  507339 system_pods.go:43] waiting for kube-system pods to appear ...
	I0116 03:44:05.382614  507339 system_pods.go:59] 8 kube-system pods found
	I0116 03:44:05.382656  507339 system_pods.go:61] "coredns-76f75df574-lr95b" [15dc0b11-f7ec-4729-bbfa-79b9649fbad6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0116 03:44:05.382666  507339 system_pods.go:61] "etcd-no-preload-666547" [f98dcc57-5869-4a35-b443-2e6c9beddd88] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0116 03:44:05.382682  507339 system_pods.go:61] "kube-apiserver-no-preload-666547" [3aceae9d-224a-4e8c-a02d-ad4199d8d558] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0116 03:44:05.382699  507339 system_pods.go:61] "kube-controller-manager-no-preload-666547" [af39c89c-3b41-4d6f-b075-16de04a3ecc0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0116 03:44:05.382706  507339 system_pods.go:61] "kube-proxy-dcmrn" [1e91c96f-cbc5-424d-a09e-06e34bf7a2e2] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0116 03:44:05.382714  507339 system_pods.go:61] "kube-scheduler-no-preload-666547" [0f2aa713-aebc-4858-a348-32169021235e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0116 03:44:05.382723  507339 system_pods.go:61] "metrics-server-57f55c9bc5-78vfj" [dbd2d3b2-ec0f-4253-8549-7c4299522c37] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:44:05.382735  507339 system_pods.go:61] "storage-provisioner" [f4e1ba45-217d-41d5-b583-2f60044879bc] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0116 03:44:05.382749  507339 system_pods.go:74] duration metric: took 10.858851ms to wait for pod list to return data ...
	I0116 03:44:05.382760  507339 node_conditions.go:102] verifying NodePressure condition ...
	I0116 03:44:05.391050  507339 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 03:44:05.391112  507339 node_conditions.go:123] node cpu capacity is 2
	I0116 03:44:05.391128  507339 node_conditions.go:105] duration metric: took 8.361426ms to run NodePressure ...
	I0116 03:44:05.391152  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:44:01.840907  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:01.841317  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | unable to find current IP address of domain default-k8s-diff-port-434445 in network mk-default-k8s-diff-port-434445
	I0116 03:44:01.841361  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | I0116 03:44:01.841259  508579 retry.go:31] will retry after 3.769759015s: waiting for machine to come up
	I0116 03:44:05.613594  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:05.614119  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | unable to find current IP address of domain default-k8s-diff-port-434445 in network mk-default-k8s-diff-port-434445
	I0116 03:44:05.614149  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | I0116 03:44:05.614062  508579 retry.go:31] will retry after 3.5833584s: waiting for machine to come up
	I0116 03:44:05.740205  507339 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0116 03:44:05.745269  507339 kubeadm.go:787] kubelet initialised
	I0116 03:44:05.745297  507339 kubeadm.go:788] duration metric: took 5.059802ms waiting for restarted kubelet to initialise ...
	I0116 03:44:05.745306  507339 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 03:44:05.751403  507339 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-lr95b" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:05.761740  507339 pod_ready.go:97] node "no-preload-666547" hosting pod "coredns-76f75df574-lr95b" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-666547" has status "Ready":"False"
	I0116 03:44:05.761784  507339 pod_ready.go:81] duration metric: took 10.344994ms waiting for pod "coredns-76f75df574-lr95b" in "kube-system" namespace to be "Ready" ...
	E0116 03:44:05.761796  507339 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-666547" hosting pod "coredns-76f75df574-lr95b" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-666547" has status "Ready":"False"
	I0116 03:44:05.761812  507339 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-666547" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:05.767627  507339 pod_ready.go:97] node "no-preload-666547" hosting pod "etcd-no-preload-666547" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-666547" has status "Ready":"False"
	I0116 03:44:05.767657  507339 pod_ready.go:81] duration metric: took 5.831478ms waiting for pod "etcd-no-preload-666547" in "kube-system" namespace to be "Ready" ...
	E0116 03:44:05.767669  507339 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-666547" hosting pod "etcd-no-preload-666547" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-666547" has status "Ready":"False"
	I0116 03:44:05.767677  507339 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-666547" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:05.772833  507339 pod_ready.go:97] node "no-preload-666547" hosting pod "kube-apiserver-no-preload-666547" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-666547" has status "Ready":"False"
	I0116 03:44:05.772863  507339 pod_ready.go:81] duration metric: took 5.17797ms waiting for pod "kube-apiserver-no-preload-666547" in "kube-system" namespace to be "Ready" ...
	E0116 03:44:05.772876  507339 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-666547" hosting pod "kube-apiserver-no-preload-666547" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-666547" has status "Ready":"False"
	I0116 03:44:05.772884  507339 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-666547" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:05.779234  507339 pod_ready.go:97] node "no-preload-666547" hosting pod "kube-controller-manager-no-preload-666547" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-666547" has status "Ready":"False"
	I0116 03:44:05.779259  507339 pod_ready.go:81] duration metric: took 6.362264ms waiting for pod "kube-controller-manager-no-preload-666547" in "kube-system" namespace to be "Ready" ...
	E0116 03:44:05.779270  507339 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-666547" hosting pod "kube-controller-manager-no-preload-666547" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-666547" has status "Ready":"False"
	I0116 03:44:05.779277  507339 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-dcmrn" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:06.175807  507339 pod_ready.go:97] node "no-preload-666547" hosting pod "kube-proxy-dcmrn" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-666547" has status "Ready":"False"
	I0116 03:44:06.175846  507339 pod_ready.go:81] duration metric: took 396.551923ms waiting for pod "kube-proxy-dcmrn" in "kube-system" namespace to be "Ready" ...
	E0116 03:44:06.175859  507339 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-666547" hosting pod "kube-proxy-dcmrn" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-666547" has status "Ready":"False"
	I0116 03:44:06.175867  507339 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-666547" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:06.580068  507339 pod_ready.go:97] node "no-preload-666547" hosting pod "kube-scheduler-no-preload-666547" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-666547" has status "Ready":"False"
	I0116 03:44:06.580102  507339 pod_ready.go:81] duration metric: took 404.226447ms waiting for pod "kube-scheduler-no-preload-666547" in "kube-system" namespace to be "Ready" ...
	E0116 03:44:06.580119  507339 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-666547" hosting pod "kube-scheduler-no-preload-666547" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-666547" has status "Ready":"False"
	I0116 03:44:06.580128  507339 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:06.976542  507339 pod_ready.go:97] node "no-preload-666547" hosting pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-666547" has status "Ready":"False"
	I0116 03:44:06.976573  507339 pod_ready.go:81] duration metric: took 396.432925ms waiting for pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace to be "Ready" ...
	E0116 03:44:06.976590  507339 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-666547" hosting pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-666547" has status "Ready":"False"
	I0116 03:44:06.976596  507339 pod_ready.go:38] duration metric: took 1.231281598s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
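The pod_ready.go loop above fetches each system-critical pod and skips it while the node still reports Ready=False. The following is a hedged client-go sketch of the per-pod Ready-condition check; the kubeconfig path and pod name are taken from the log purely for illustration, and the code is not minikube's implementation.

// Illustrative only: report whether a pod's Ready condition is True.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podIsReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Hypothetical kubeconfig path for the sketch.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod, err := cs.CoreV1().Pods("kube-system").Get(context.Background(), "coredns-76f75df574-lr95b", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("ready:", podIsReady(pod))
}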
	I0116 03:44:06.976621  507339 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0116 03:44:06.988884  507339 ops.go:34] apiserver oom_adj: -16
	I0116 03:44:06.988916  507339 kubeadm.go:640] restartCluster took 21.755069193s
	I0116 03:44:06.988940  507339 kubeadm.go:406] StartCluster complete in 21.811388098s
	I0116 03:44:06.988970  507339 settings.go:142] acquiring lock: {Name:mk3ded99c06d285b174e76622bb0741c8dd2d8fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:44:06.989066  507339 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17965-468241/kubeconfig
	I0116 03:44:06.990912  507339 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17965-468241/kubeconfig: {Name:mkac3c63bbcba9b4fa1bea3480797757d703fb9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:44:06.991191  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0116 03:44:06.991241  507339 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0116 03:44:06.991341  507339 addons.go:69] Setting storage-provisioner=true in profile "no-preload-666547"
	I0116 03:44:06.991362  507339 addons.go:234] Setting addon storage-provisioner=true in "no-preload-666547"
	I0116 03:44:06.991364  507339 addons.go:69] Setting default-storageclass=true in profile "no-preload-666547"
	W0116 03:44:06.991370  507339 addons.go:243] addon storage-provisioner should already be in state true
	I0116 03:44:06.991388  507339 addons.go:69] Setting metrics-server=true in profile "no-preload-666547"
	I0116 03:44:06.991397  507339 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-666547"
	I0116 03:44:06.991404  507339 addons.go:234] Setting addon metrics-server=true in "no-preload-666547"
	W0116 03:44:06.991412  507339 addons.go:243] addon metrics-server should already be in state true
	I0116 03:44:06.991438  507339 host.go:66] Checking if "no-preload-666547" exists ...
	I0116 03:44:06.991451  507339 host.go:66] Checking if "no-preload-666547" exists ...
	I0116 03:44:06.991460  507339 config.go:182] Loaded profile config "no-preload-666547": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0116 03:44:06.991855  507339 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:44:06.991855  507339 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:44:06.991893  507339 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:44:06.991858  507339 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:44:06.991940  507339 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:44:06.991976  507339 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:44:06.998037  507339 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-666547" context rescaled to 1 replicas
	I0116 03:44:06.998086  507339 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.103 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0116 03:44:07.000312  507339 out.go:177] * Verifying Kubernetes components...
	I0116 03:44:07.001889  507339 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 03:44:07.009057  507339 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41459
	I0116 03:44:07.009097  507339 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38693
	I0116 03:44:07.009596  507339 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:44:07.009735  507339 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:44:07.010178  507339 main.go:141] libmachine: Using API Version  1
	I0116 03:44:07.010195  507339 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:44:07.010368  507339 main.go:141] libmachine: Using API Version  1
	I0116 03:44:07.010392  507339 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:44:07.010412  507339 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33905
	I0116 03:44:07.010763  507339 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:44:07.010822  507339 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:44:07.010829  507339 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:44:07.010945  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetState
	I0116 03:44:07.011314  507339 main.go:141] libmachine: Using API Version  1
	I0116 03:44:07.011346  507339 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:44:07.011955  507339 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:44:07.011956  507339 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:44:07.012052  507339 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:44:07.012511  507339 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:44:07.012547  507339 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:44:07.015214  507339 addons.go:234] Setting addon default-storageclass=true in "no-preload-666547"
	W0116 03:44:07.015237  507339 addons.go:243] addon default-storageclass should already be in state true
	I0116 03:44:07.015269  507339 host.go:66] Checking if "no-preload-666547" exists ...
	I0116 03:44:07.015718  507339 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:44:07.015772  507339 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:44:07.029747  507339 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32807
	I0116 03:44:07.029990  507339 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35757
	I0116 03:44:07.030392  507339 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:44:07.030448  507339 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:44:07.030948  507339 main.go:141] libmachine: Using API Version  1
	I0116 03:44:07.030970  507339 main.go:141] libmachine: Using API Version  1
	I0116 03:44:07.030986  507339 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:44:07.031046  507339 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:44:07.031393  507339 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:44:07.031443  507339 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:44:07.031603  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetState
	I0116 03:44:07.031660  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetState
	I0116 03:44:07.033898  507339 main.go:141] libmachine: (no-preload-666547) Calling .DriverName
	I0116 03:44:07.033990  507339 main.go:141] libmachine: (no-preload-666547) Calling .DriverName
	I0116 03:44:07.036581  507339 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0116 03:44:07.034407  507339 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38687
	I0116 03:44:07.038382  507339 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0116 03:44:07.038420  507339 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0116 03:44:07.038444  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHHostname
	I0116 03:44:07.038499  507339 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 03:44:07.040190  507339 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0116 03:44:07.040211  507339 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0116 03:44:07.040232  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHHostname
	I0116 03:44:07.039010  507339 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:44:07.040908  507339 main.go:141] libmachine: Using API Version  1
	I0116 03:44:07.040931  507339 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:44:07.041538  507339 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:44:07.042268  507339 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:44:07.042319  507339 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:44:07.043270  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:44:07.043665  507339 main.go:141] libmachine: (no-preload-666547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:5f:03", ip: ""} in network mk-no-preload-666547: {Iface:virbr2 ExpiryTime:2024-01-16 04:43:15 +0000 UTC Type:0 Mac:52:54:00:4e:5f:03 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:no-preload-666547 Clientid:01:52:54:00:4e:5f:03}
	I0116 03:44:07.043697  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined IP address 192.168.39.103 and MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:44:07.043730  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:44:07.043966  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHPort
	I0116 03:44:07.044196  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHKeyPath
	I0116 03:44:07.044381  507339 main.go:141] libmachine: (no-preload-666547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:5f:03", ip: ""} in network mk-no-preload-666547: {Iface:virbr2 ExpiryTime:2024-01-16 04:43:15 +0000 UTC Type:0 Mac:52:54:00:4e:5f:03 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:no-preload-666547 Clientid:01:52:54:00:4e:5f:03}
	I0116 03:44:07.044422  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined IP address 192.168.39.103 and MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:44:07.044456  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHUsername
	I0116 03:44:07.044566  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHPort
	I0116 03:44:07.044691  507339 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/no-preload-666547/id_rsa Username:docker}
	I0116 03:44:07.044716  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHKeyPath
	I0116 03:44:07.044878  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHUsername
	I0116 03:44:07.045028  507339 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/no-preload-666547/id_rsa Username:docker}
	I0116 03:44:07.084507  507339 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46191
	I0116 03:44:07.085014  507339 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:44:07.085601  507339 main.go:141] libmachine: Using API Version  1
	I0116 03:44:07.085636  507339 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:44:07.086005  507339 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:44:07.086202  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetState
	I0116 03:44:07.088199  507339 main.go:141] libmachine: (no-preload-666547) Calling .DriverName
	I0116 03:44:07.088513  507339 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0116 03:44:07.088532  507339 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0116 03:44:07.088555  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHHostname
	I0116 03:44:07.092194  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:44:07.092719  507339 main.go:141] libmachine: (no-preload-666547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:5f:03", ip: ""} in network mk-no-preload-666547: {Iface:virbr2 ExpiryTime:2024-01-16 04:43:15 +0000 UTC Type:0 Mac:52:54:00:4e:5f:03 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:no-preload-666547 Clientid:01:52:54:00:4e:5f:03}
	I0116 03:44:07.092745  507339 main.go:141] libmachine: (no-preload-666547) DBG | domain no-preload-666547 has defined IP address 192.168.39.103 and MAC address 52:54:00:4e:5f:03 in network mk-no-preload-666547
	I0116 03:44:07.092953  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHPort
	I0116 03:44:07.093219  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHKeyPath
	I0116 03:44:07.093384  507339 main.go:141] libmachine: (no-preload-666547) Calling .GetSSHUsername
	I0116 03:44:07.093590  507339 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/no-preload-666547/id_rsa Username:docker}
	I0116 03:44:07.196191  507339 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0116 03:44:07.196219  507339 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0116 03:44:07.201036  507339 node_ready.go:35] waiting up to 6m0s for node "no-preload-666547" to be "Ready" ...
	I0116 03:44:07.201055  507339 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0116 03:44:07.222924  507339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0116 03:44:07.224548  507339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0116 03:44:07.237091  507339 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0116 03:44:07.237119  507339 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0116 03:44:07.289312  507339 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0116 03:44:07.289342  507339 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0116 03:44:07.334708  507339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0116 03:44:07.583740  507339 main.go:141] libmachine: Making call to close driver server
	I0116 03:44:07.583773  507339 main.go:141] libmachine: (no-preload-666547) Calling .Close
	I0116 03:44:07.584079  507339 main.go:141] libmachine: (no-preload-666547) DBG | Closing plugin on server side
	I0116 03:44:07.584135  507339 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:44:07.584146  507339 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:44:07.584155  507339 main.go:141] libmachine: Making call to close driver server
	I0116 03:44:07.584170  507339 main.go:141] libmachine: (no-preload-666547) Calling .Close
	I0116 03:44:07.584405  507339 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:44:07.584423  507339 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:44:07.592304  507339 main.go:141] libmachine: Making call to close driver server
	I0116 03:44:07.592332  507339 main.go:141] libmachine: (no-preload-666547) Calling .Close
	I0116 03:44:07.592608  507339 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:44:07.592656  507339 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:44:07.592663  507339 main.go:141] libmachine: (no-preload-666547) DBG | Closing plugin on server side
	I0116 03:44:08.290558  507339 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.065965685s)
	I0116 03:44:08.290643  507339 main.go:141] libmachine: Making call to close driver server
	I0116 03:44:08.290665  507339 main.go:141] libmachine: (no-preload-666547) Calling .Close
	I0116 03:44:08.291042  507339 main.go:141] libmachine: (no-preload-666547) DBG | Closing plugin on server side
	I0116 03:44:08.291103  507339 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:44:08.291121  507339 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:44:08.291136  507339 main.go:141] libmachine: Making call to close driver server
	I0116 03:44:08.291147  507339 main.go:141] libmachine: (no-preload-666547) Calling .Close
	I0116 03:44:08.291380  507339 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:44:08.291396  507339 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:44:08.291416  507339 main.go:141] libmachine: (no-preload-666547) DBG | Closing plugin on server side
	I0116 03:44:08.468146  507339 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.133348135s)
	I0116 03:44:08.468223  507339 main.go:141] libmachine: Making call to close driver server
	I0116 03:44:08.468248  507339 main.go:141] libmachine: (no-preload-666547) Calling .Close
	I0116 03:44:08.470360  507339 main.go:141] libmachine: (no-preload-666547) DBG | Closing plugin on server side
	I0116 03:44:08.470367  507339 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:44:08.470397  507339 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:44:08.470412  507339 main.go:141] libmachine: Making call to close driver server
	I0116 03:44:08.470423  507339 main.go:141] libmachine: (no-preload-666547) Calling .Close
	I0116 03:44:08.470734  507339 main.go:141] libmachine: (no-preload-666547) DBG | Closing plugin on server side
	I0116 03:44:08.470749  507339 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:44:08.470764  507339 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:44:08.470776  507339 addons.go:470] Verifying addon metrics-server=true in "no-preload-666547"
	I0116 03:44:08.473092  507339 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
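After the manifests are applied, the log reports "Verifying addon metrics-server". An illustrative client-go check, assuming a standard kubeconfig location, that the metrics-server Deployment applied above reports available replicas; this is a sketch, not minikube's addons.go verifier.

// Illustrative only: read the metrics-server Deployment status in kube-system.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// RecommendedHomeFile resolves to $HOME/.kube/config; an assumption for this sketch.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	dep, err := cs.AppsV1().Deployments("kube-system").Get(context.Background(), "metrics-server", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("metrics-server available replicas: %d\n", dep.Status.AvailableReplicas)
}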
	I0116 03:44:04.697359  507510 api_server.go:166] Checking apiserver status ...
	I0116 03:44:04.697510  507510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:04.714690  507510 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:05.197225  507510 api_server.go:166] Checking apiserver status ...
	I0116 03:44:05.197333  507510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:05.213923  507510 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:05.696541  507510 api_server.go:166] Checking apiserver status ...
	I0116 03:44:05.696632  507510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:05.713744  507510 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:06.197249  507510 api_server.go:166] Checking apiserver status ...
	I0116 03:44:06.197369  507510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:06.209148  507510 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:06.696967  507510 api_server.go:166] Checking apiserver status ...
	I0116 03:44:06.697083  507510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:06.709624  507510 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:06.709656  507510 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0116 03:44:06.709665  507510 kubeadm.go:1135] stopping kube-system containers ...
	I0116 03:44:06.709676  507510 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0116 03:44:06.709736  507510 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0116 03:44:06.753286  507510 cri.go:89] found id: ""
	I0116 03:44:06.753370  507510 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0116 03:44:06.769990  507510 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0116 03:44:06.781090  507510 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0116 03:44:06.781168  507510 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0116 03:44:06.790936  507510 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0116 03:44:06.790971  507510 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:44:06.915790  507510 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:44:08.112494  507510 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.196668404s)
	I0116 03:44:08.112528  507510 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:44:08.328365  507510 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:44:08.435410  507510 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:44:08.576950  507510 api_server.go:52] waiting for apiserver process to appear ...
	I0116 03:44:08.577077  507510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:44:09.077263  507510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:44:08.474544  507339 addons.go:505] enable addons completed in 1.483307386s: enabled=[default-storageclass storage-provisioner metrics-server]
	I0116 03:44:09.206584  507339 node_ready.go:58] node "no-preload-666547" has status "Ready":"False"
	I0116 03:44:10.997580  507257 start.go:369] acquired machines lock for "embed-certs-615980" in 1m2.194717115s
	I0116 03:44:10.997669  507257 start.go:96] Skipping create...Using existing machine configuration
	I0116 03:44:10.997681  507257 fix.go:54] fixHost starting: 
	I0116 03:44:10.998101  507257 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:44:10.998135  507257 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:44:11.017060  507257 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38343
	I0116 03:44:11.017687  507257 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:44:11.018295  507257 main.go:141] libmachine: Using API Version  1
	I0116 03:44:11.018326  507257 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:44:11.018673  507257 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:44:11.018879  507257 main.go:141] libmachine: (embed-certs-615980) Calling .DriverName
	I0116 03:44:11.019056  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetState
	I0116 03:44:11.021360  507257 fix.go:102] recreateIfNeeded on embed-certs-615980: state=Stopped err=<nil>
	I0116 03:44:11.021396  507257 main.go:141] libmachine: (embed-certs-615980) Calling .DriverName
	W0116 03:44:11.021577  507257 fix.go:128] unexpected machine state, will restart: <nil>
	I0116 03:44:11.023462  507257 out.go:177] * Restarting existing kvm2 VM for "embed-certs-615980" ...
	I0116 03:44:11.025158  507257 main.go:141] libmachine: (embed-certs-615980) Calling .Start
	I0116 03:44:11.025397  507257 main.go:141] libmachine: (embed-certs-615980) Ensuring networks are active...
	I0116 03:44:11.026354  507257 main.go:141] libmachine: (embed-certs-615980) Ensuring network default is active
	I0116 03:44:11.026830  507257 main.go:141] libmachine: (embed-certs-615980) Ensuring network mk-embed-certs-615980 is active
	I0116 03:44:11.027263  507257 main.go:141] libmachine: (embed-certs-615980) Getting domain xml...
	I0116 03:44:11.028182  507257 main.go:141] libmachine: (embed-certs-615980) Creating domain...
	I0116 03:44:09.198824  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:09.199284  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Found IP for machine: 192.168.50.236
	I0116 03:44:09.199318  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Reserving static IP address...
	I0116 03:44:09.199348  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has current primary IP address 192.168.50.236 and MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:09.199756  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-434445", mac: "52:54:00:78:ea:d5", ip: "192.168.50.236"} in network mk-default-k8s-diff-port-434445: {Iface:virbr1 ExpiryTime:2024-01-16 04:44:01 +0000 UTC Type:0 Mac:52:54:00:78:ea:d5 Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:default-k8s-diff-port-434445 Clientid:01:52:54:00:78:ea:d5}
	I0116 03:44:09.199781  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | skip adding static IP to network mk-default-k8s-diff-port-434445 - found existing host DHCP lease matching {name: "default-k8s-diff-port-434445", mac: "52:54:00:78:ea:d5", ip: "192.168.50.236"}
	I0116 03:44:09.199794  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Reserved static IP address: 192.168.50.236
	I0116 03:44:09.199808  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Waiting for SSH to be available...
	I0116 03:44:09.199832  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | Getting to WaitForSSH function...
	I0116 03:44:09.202093  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:09.202494  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ea:d5", ip: ""} in network mk-default-k8s-diff-port-434445: {Iface:virbr1 ExpiryTime:2024-01-16 04:44:01 +0000 UTC Type:0 Mac:52:54:00:78:ea:d5 Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:default-k8s-diff-port-434445 Clientid:01:52:54:00:78:ea:d5}
	I0116 03:44:09.202529  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined IP address 192.168.50.236 and MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:09.202664  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | Using SSH client type: external
	I0116 03:44:09.202690  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | Using SSH private key: /home/jenkins/minikube-integration/17965-468241/.minikube/machines/default-k8s-diff-port-434445/id_rsa (-rw-------)
	I0116 03:44:09.202723  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.236 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17965-468241/.minikube/machines/default-k8s-diff-port-434445/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0116 03:44:09.202746  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | About to run SSH command:
	I0116 03:44:09.202763  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | exit 0
	I0116 03:44:09.302425  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | SSH cmd err, output: <nil>: 
	I0116 03:44:09.302867  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetConfigRaw
	I0116 03:44:09.303666  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetIP
	I0116 03:44:09.306482  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:09.306884  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ea:d5", ip: ""} in network mk-default-k8s-diff-port-434445: {Iface:virbr1 ExpiryTime:2024-01-16 04:44:01 +0000 UTC Type:0 Mac:52:54:00:78:ea:d5 Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:default-k8s-diff-port-434445 Clientid:01:52:54:00:78:ea:d5}
	I0116 03:44:09.306920  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined IP address 192.168.50.236 and MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:09.307189  507889 profile.go:148] Saving config to /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/default-k8s-diff-port-434445/config.json ...
	I0116 03:44:09.307418  507889 machine.go:88] provisioning docker machine ...
	I0116 03:44:09.307437  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .DriverName
	I0116 03:44:09.307673  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetMachineName
	I0116 03:44:09.307865  507889 buildroot.go:166] provisioning hostname "default-k8s-diff-port-434445"
	I0116 03:44:09.307886  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetMachineName
	I0116 03:44:09.308073  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHHostname
	I0116 03:44:09.310375  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:09.310726  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ea:d5", ip: ""} in network mk-default-k8s-diff-port-434445: {Iface:virbr1 ExpiryTime:2024-01-16 04:44:01 +0000 UTC Type:0 Mac:52:54:00:78:ea:d5 Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:default-k8s-diff-port-434445 Clientid:01:52:54:00:78:ea:d5}
	I0116 03:44:09.310765  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined IP address 192.168.50.236 and MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:09.310920  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHPort
	I0116 03:44:09.311111  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHKeyPath
	I0116 03:44:09.311231  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHKeyPath
	I0116 03:44:09.311384  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHUsername
	I0116 03:44:09.311528  507889 main.go:141] libmachine: Using SSH client type: native
	I0116 03:44:09.311932  507889 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.50.236 22 <nil> <nil>}
	I0116 03:44:09.311949  507889 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-434445 && echo "default-k8s-diff-port-434445" | sudo tee /etc/hostname
	I0116 03:44:09.469340  507889 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-434445
	
	I0116 03:44:09.469384  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHHostname
	I0116 03:44:09.472788  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:09.473108  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ea:d5", ip: ""} in network mk-default-k8s-diff-port-434445: {Iface:virbr1 ExpiryTime:2024-01-16 04:44:01 +0000 UTC Type:0 Mac:52:54:00:78:ea:d5 Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:default-k8s-diff-port-434445 Clientid:01:52:54:00:78:ea:d5}
	I0116 03:44:09.473166  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined IP address 192.168.50.236 and MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:09.473353  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHPort
	I0116 03:44:09.473571  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHKeyPath
	I0116 03:44:09.473768  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHKeyPath
	I0116 03:44:09.473963  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHUsername
	I0116 03:44:09.474171  507889 main.go:141] libmachine: Using SSH client type: native
	I0116 03:44:09.474626  507889 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.50.236 22 <nil> <nil>}
	I0116 03:44:09.474657  507889 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-434445' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-434445/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-434445' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0116 03:44:09.622177  507889 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0116 03:44:09.622223  507889 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17965-468241/.minikube CaCertPath:/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17965-468241/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17965-468241/.minikube}
	I0116 03:44:09.622253  507889 buildroot.go:174] setting up certificates
	I0116 03:44:09.622267  507889 provision.go:83] configureAuth start
	I0116 03:44:09.622280  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetMachineName
	I0116 03:44:09.622649  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetIP
	I0116 03:44:09.625970  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:09.626394  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ea:d5", ip: ""} in network mk-default-k8s-diff-port-434445: {Iface:virbr1 ExpiryTime:2024-01-16 04:44:01 +0000 UTC Type:0 Mac:52:54:00:78:ea:d5 Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:default-k8s-diff-port-434445 Clientid:01:52:54:00:78:ea:d5}
	I0116 03:44:09.626429  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined IP address 192.168.50.236 and MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:09.626603  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHHostname
	I0116 03:44:09.629623  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:09.630022  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ea:d5", ip: ""} in network mk-default-k8s-diff-port-434445: {Iface:virbr1 ExpiryTime:2024-01-16 04:44:01 +0000 UTC Type:0 Mac:52:54:00:78:ea:d5 Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:default-k8s-diff-port-434445 Clientid:01:52:54:00:78:ea:d5}
	I0116 03:44:09.630052  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined IP address 192.168.50.236 and MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:09.630263  507889 provision.go:138] copyHostCerts
	I0116 03:44:09.630354  507889 exec_runner.go:144] found /home/jenkins/minikube-integration/17965-468241/.minikube/ca.pem, removing ...
	I0116 03:44:09.630370  507889 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17965-468241/.minikube/ca.pem
	I0116 03:44:09.630449  507889 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17965-468241/.minikube/ca.pem (1078 bytes)
	I0116 03:44:09.630603  507889 exec_runner.go:144] found /home/jenkins/minikube-integration/17965-468241/.minikube/cert.pem, removing ...
	I0116 03:44:09.630626  507889 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17965-468241/.minikube/cert.pem
	I0116 03:44:09.630661  507889 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17965-468241/.minikube/cert.pem (1123 bytes)
	I0116 03:44:09.630760  507889 exec_runner.go:144] found /home/jenkins/minikube-integration/17965-468241/.minikube/key.pem, removing ...
	I0116 03:44:09.630775  507889 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17965-468241/.minikube/key.pem
	I0116 03:44:09.630805  507889 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17965-468241/.minikube/key.pem (1679 bytes)
	I0116 03:44:09.630891  507889 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17965-468241/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-434445 san=[192.168.50.236 192.168.50.236 localhost 127.0.0.1 minikube default-k8s-diff-port-434445]
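provision.go above generates a server certificate whose SANs cover the VM IP, localhost, and the machine name. The sketch below is a self-signed stand-in using Go's crypto/x509, with SANs copied from the log line; minikube actually signs against its own CA key, so this only illustrates the SAN handling, not the real provisioning path.

// Illustrative only: issue a self-signed server certificate with IP and DNS SANs.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-434445"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs matching those listed in the log line above.
		IPAddresses: []net.IP{net.ParseIP("192.168.50.236"), net.ParseIP("127.0.0.1")},
		DNSNames:    []string{"localhost", "minikube", "default-k8s-diff-port-434445"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}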
	I0116 03:44:10.127058  507889 provision.go:172] copyRemoteCerts
	I0116 03:44:10.127138  507889 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0116 03:44:10.127175  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHHostname
	I0116 03:44:10.130572  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:10.131095  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ea:d5", ip: ""} in network mk-default-k8s-diff-port-434445: {Iface:virbr1 ExpiryTime:2024-01-16 04:44:01 +0000 UTC Type:0 Mac:52:54:00:78:ea:d5 Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:default-k8s-diff-port-434445 Clientid:01:52:54:00:78:ea:d5}
	I0116 03:44:10.131133  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined IP address 192.168.50.236 and MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:10.131313  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHPort
	I0116 03:44:10.131590  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHKeyPath
	I0116 03:44:10.131825  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHUsername
	I0116 03:44:10.132001  507889 sshutil.go:53] new ssh client: &{IP:192.168.50.236 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/default-k8s-diff-port-434445/id_rsa Username:docker}
	I0116 03:44:10.238263  507889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0116 03:44:10.269567  507889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0116 03:44:10.295065  507889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0116 03:44:10.323347  507889 provision.go:86] duration metric: configureAuth took 701.062063ms
	I0116 03:44:10.323391  507889 buildroot.go:189] setting minikube options for container-runtime
	I0116 03:44:10.323667  507889 config.go:182] Loaded profile config "default-k8s-diff-port-434445": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 03:44:10.323774  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHHostname
	I0116 03:44:10.326825  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:10.327222  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ea:d5", ip: ""} in network mk-default-k8s-diff-port-434445: {Iface:virbr1 ExpiryTime:2024-01-16 04:44:01 +0000 UTC Type:0 Mac:52:54:00:78:ea:d5 Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:default-k8s-diff-port-434445 Clientid:01:52:54:00:78:ea:d5}
	I0116 03:44:10.327266  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined IP address 192.168.50.236 and MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:10.327423  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHPort
	I0116 03:44:10.327682  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHKeyPath
	I0116 03:44:10.327883  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHKeyPath
	I0116 03:44:10.328077  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHUsername
	I0116 03:44:10.328269  507889 main.go:141] libmachine: Using SSH client type: native
	I0116 03:44:10.328743  507889 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.50.236 22 <nil> <nil>}
	I0116 03:44:10.328778  507889 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0116 03:44:10.700188  507889 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0116 03:44:10.700221  507889 machine.go:91] provisioned docker machine in 1.392790129s
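
The provisioning step above writes a small CRI-O drop-in and restarts the runtime. A quick way to confirm it took effect, assuming SSH access to the guest (for example via minikube ssh -p default-k8s-diff-port-434445):

    # Confirm the drop-in written above and that CRI-O came back up after the restart.
    cat /etc/sysconfig/crio.minikube   # expected: CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
    systemctl is-active crio           # expected: active
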
	I0116 03:44:10.700232  507889 start.go:300] post-start starting for "default-k8s-diff-port-434445" (driver="kvm2")
	I0116 03:44:10.700244  507889 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0116 03:44:10.700261  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .DriverName
	I0116 03:44:10.700745  507889 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0116 03:44:10.700786  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHHostname
	I0116 03:44:10.704466  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:10.705001  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ea:d5", ip: ""} in network mk-default-k8s-diff-port-434445: {Iface:virbr1 ExpiryTime:2024-01-16 04:44:01 +0000 UTC Type:0 Mac:52:54:00:78:ea:d5 Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:default-k8s-diff-port-434445 Clientid:01:52:54:00:78:ea:d5}
	I0116 03:44:10.705045  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined IP address 192.168.50.236 and MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:10.705278  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHPort
	I0116 03:44:10.705503  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHKeyPath
	I0116 03:44:10.705735  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHUsername
	I0116 03:44:10.705912  507889 sshutil.go:53] new ssh client: &{IP:192.168.50.236 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/default-k8s-diff-port-434445/id_rsa Username:docker}
	I0116 03:44:10.807625  507889 ssh_runner.go:195] Run: cat /etc/os-release
	I0116 03:44:10.813392  507889 info.go:137] Remote host: Buildroot 2021.02.12
	I0116 03:44:10.813428  507889 filesync.go:126] Scanning /home/jenkins/minikube-integration/17965-468241/.minikube/addons for local assets ...
	I0116 03:44:10.813519  507889 filesync.go:126] Scanning /home/jenkins/minikube-integration/17965-468241/.minikube/files for local assets ...
	I0116 03:44:10.813596  507889 filesync.go:149] local asset: /home/jenkins/minikube-integration/17965-468241/.minikube/files/etc/ssl/certs/4754782.pem -> 4754782.pem in /etc/ssl/certs
	I0116 03:44:10.813687  507889 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0116 03:44:10.824028  507889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/files/etc/ssl/certs/4754782.pem --> /etc/ssl/certs/4754782.pem (1708 bytes)
	I0116 03:44:10.853453  507889 start.go:303] post-start completed in 153.201453ms
	I0116 03:44:10.853493  507889 fix.go:56] fixHost completed within 22.144172966s
	I0116 03:44:10.853543  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHHostname
	I0116 03:44:10.856529  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:10.856907  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ea:d5", ip: ""} in network mk-default-k8s-diff-port-434445: {Iface:virbr1 ExpiryTime:2024-01-16 04:44:01 +0000 UTC Type:0 Mac:52:54:00:78:ea:d5 Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:default-k8s-diff-port-434445 Clientid:01:52:54:00:78:ea:d5}
	I0116 03:44:10.856967  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined IP address 192.168.50.236 and MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:10.857185  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHPort
	I0116 03:44:10.857438  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHKeyPath
	I0116 03:44:10.857636  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHKeyPath
	I0116 03:44:10.857790  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHUsername
	I0116 03:44:10.857974  507889 main.go:141] libmachine: Using SSH client type: native
	I0116 03:44:10.858502  507889 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.50.236 22 <nil> <nil>}
	I0116 03:44:10.858528  507889 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0116 03:44:10.997398  507889 main.go:141] libmachine: SSH cmd err, output: <nil>: 1705376650.933903671
	
	I0116 03:44:10.997426  507889 fix.go:206] guest clock: 1705376650.933903671
	I0116 03:44:10.997436  507889 fix.go:219] Guest: 2024-01-16 03:44:10.933903671 +0000 UTC Remote: 2024-01-16 03:44:10.853498317 +0000 UTC m=+234.302480786 (delta=80.405354ms)
	I0116 03:44:10.997464  507889 fix.go:190] guest clock delta is within tolerance: 80.405354ms
	I0116 03:44:10.997471  507889 start.go:83] releasing machines lock for "default-k8s-diff-port-434445", held for 22.288188395s
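
The fix step above reads the guest clock over SSH (date +%s.%N) and accepts the skew against the host clock if it is within tolerance. A rough manual reproduction, with the profile name taken from the log and tolerance handling omitted:

    # Compare host and guest clocks the same way the provisioner does above.
    host_ts=$(date +%s.%N)
    guest_ts=$(minikube -p default-k8s-diff-port-434445 ssh 'date +%s.%N')
    echo "host=$host_ts guest=$guest_ts"
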
	I0116 03:44:10.997517  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .DriverName
	I0116 03:44:10.997857  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetIP
	I0116 03:44:11.001410  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:11.001814  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ea:d5", ip: ""} in network mk-default-k8s-diff-port-434445: {Iface:virbr1 ExpiryTime:2024-01-16 04:44:01 +0000 UTC Type:0 Mac:52:54:00:78:ea:d5 Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:default-k8s-diff-port-434445 Clientid:01:52:54:00:78:ea:d5}
	I0116 03:44:11.001864  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined IP address 192.168.50.236 and MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:11.002016  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .DriverName
	I0116 03:44:11.002649  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .DriverName
	I0116 03:44:11.002923  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .DriverName
	I0116 03:44:11.003015  507889 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0116 03:44:11.003068  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHHostname
	I0116 03:44:11.003258  507889 ssh_runner.go:195] Run: cat /version.json
	I0116 03:44:11.003294  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHHostname
	I0116 03:44:11.006383  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:11.006699  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:11.006800  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ea:d5", ip: ""} in network mk-default-k8s-diff-port-434445: {Iface:virbr1 ExpiryTime:2024-01-16 04:44:01 +0000 UTC Type:0 Mac:52:54:00:78:ea:d5 Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:default-k8s-diff-port-434445 Clientid:01:52:54:00:78:ea:d5}
	I0116 03:44:11.006850  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined IP address 192.168.50.236 and MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:11.007123  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHPort
	I0116 03:44:11.007230  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ea:d5", ip: ""} in network mk-default-k8s-diff-port-434445: {Iface:virbr1 ExpiryTime:2024-01-16 04:44:01 +0000 UTC Type:0 Mac:52:54:00:78:ea:d5 Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:default-k8s-diff-port-434445 Clientid:01:52:54:00:78:ea:d5}
	I0116 03:44:11.007330  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined IP address 192.168.50.236 and MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:11.007353  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHKeyPath
	I0116 03:44:11.007378  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHPort
	I0116 03:44:11.007585  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHUsername
	I0116 03:44:11.007597  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHKeyPath
	I0116 03:44:11.007737  507889 sshutil.go:53] new ssh client: &{IP:192.168.50.236 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/default-k8s-diff-port-434445/id_rsa Username:docker}
	I0116 03:44:11.007795  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHUsername
	I0116 03:44:11.007980  507889 sshutil.go:53] new ssh client: &{IP:192.168.50.236 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/default-k8s-diff-port-434445/id_rsa Username:docker}
	I0116 03:44:11.139882  507889 ssh_runner.go:195] Run: systemctl --version
	I0116 03:44:11.147082  507889 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0116 03:44:11.317582  507889 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0116 03:44:11.324567  507889 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0116 03:44:11.324656  507889 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0116 03:44:11.348193  507889 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0116 03:44:11.348225  507889 start.go:475] detecting cgroup driver to use...
	I0116 03:44:11.348319  507889 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0116 03:44:11.367049  507889 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0116 03:44:11.386632  507889 docker.go:217] disabling cri-docker service (if available) ...
	I0116 03:44:11.386713  507889 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0116 03:44:11.409551  507889 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0116 03:44:11.424599  507889 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0116 03:44:11.586480  507889 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0116 03:44:11.733770  507889 docker.go:233] disabling docker service ...
	I0116 03:44:11.733855  507889 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0116 03:44:11.751184  507889 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0116 03:44:11.766970  507889 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0116 03:44:11.903645  507889 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0116 03:44:12.017100  507889 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0116 03:44:12.031725  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0116 03:44:12.052091  507889 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0116 03:44:12.052179  507889 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 03:44:12.063115  507889 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0116 03:44:12.063219  507889 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 03:44:12.073109  507889 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 03:44:12.083438  507889 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
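
The sed edits above pin the pause image, switch CRI-O's cgroup manager to cgroupfs, and place conmon in the pod cgroup. A quick check of the resulting keys in /etc/crio/crio.conf.d/02-crio.conf (values per the commands above; other keys in the file are untouched):

    # Inspect the keys touched by the sed edits above.
    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
    # expected:
    #   pause_image = "registry.k8s.io/pause:3.9"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
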
	I0116 03:44:12.095783  507889 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0116 03:44:12.107816  507889 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0116 03:44:12.117997  507889 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0116 03:44:12.118077  507889 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0116 03:44:12.132997  507889 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
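
The status-255 failure above is expected when br_netfilter is not yet loaded, which is why the next steps load the module and enable IPv4 forwarding. The same sequence by hand:

    # Load the bridge netfilter module and re-check the sysctl that failed above.
    sudo modprobe br_netfilter
    sysctl net.bridge.bridge-nf-call-iptables   # should resolve now instead of "cannot stat"
    sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
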
	I0116 03:44:12.145200  507889 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0116 03:44:12.266786  507889 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0116 03:44:12.460779  507889 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0116 03:44:12.460892  507889 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0116 03:44:12.469200  507889 start.go:543] Will wait 60s for crictl version
	I0116 03:44:12.469305  507889 ssh_runner.go:195] Run: which crictl
	I0116 03:44:12.473761  507889 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0116 03:44:12.536262  507889 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0116 03:44:12.536382  507889 ssh_runner.go:195] Run: crio --version
	I0116 03:44:12.593212  507889 ssh_runner.go:195] Run: crio --version
	I0116 03:44:12.650197  507889 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0116 03:44:09.577389  507510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:44:10.077774  507510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:44:10.578076  507510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:44:10.613091  507510 api_server.go:72] duration metric: took 2.036140794s to wait for apiserver process to appear ...
	I0116 03:44:10.613124  507510 api_server.go:88] waiting for apiserver healthz status ...
	I0116 03:44:10.613148  507510 api_server.go:253] Checking apiserver healthz at https://192.168.61.167:8443/healthz ...
	I0116 03:44:11.706731  507339 node_ready.go:58] node "no-preload-666547" has status "Ready":"False"
	I0116 03:44:13.713926  507339 node_ready.go:49] node "no-preload-666547" has status "Ready":"True"
	I0116 03:44:13.713958  507339 node_ready.go:38] duration metric: took 6.512893933s waiting for node "no-preload-666547" to be "Ready" ...
	I0116 03:44:13.713972  507339 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 03:44:13.727930  507339 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-lr95b" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:14.740352  507339 pod_ready.go:92] pod "coredns-76f75df574-lr95b" in "kube-system" namespace has status "Ready":"True"
	I0116 03:44:14.740392  507339 pod_ready.go:81] duration metric: took 1.012371035s waiting for pod "coredns-76f75df574-lr95b" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:14.740408  507339 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-666547" in "kube-system" namespace to be "Ready" ...
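
The pod_ready polling above can be reproduced with kubectl directly. A minimal equivalent for the etcd pod, assuming the kubeconfig context carries the profile name (minikube's default):

    # Manual equivalent of the readiness wait above.
    kubectl --context no-preload-666547 -n kube-system wait \
      --for=condition=Ready pod/etcd-no-preload-666547 --timeout=6m0s
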
	I0116 03:44:11.442223  507257 main.go:141] libmachine: (embed-certs-615980) Waiting to get IP...
	I0116 03:44:11.443346  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:11.443787  507257 main.go:141] libmachine: (embed-certs-615980) DBG | unable to find current IP address of domain embed-certs-615980 in network mk-embed-certs-615980
	I0116 03:44:11.443851  507257 main.go:141] libmachine: (embed-certs-615980) DBG | I0116 03:44:11.443761  508731 retry.go:31] will retry after 306.7144ms: waiting for machine to come up
	I0116 03:44:11.752574  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:11.753186  507257 main.go:141] libmachine: (embed-certs-615980) DBG | unable to find current IP address of domain embed-certs-615980 in network mk-embed-certs-615980
	I0116 03:44:11.753217  507257 main.go:141] libmachine: (embed-certs-615980) DBG | I0116 03:44:11.753126  508731 retry.go:31] will retry after 270.011585ms: waiting for machine to come up
	I0116 03:44:12.024942  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:12.025507  507257 main.go:141] libmachine: (embed-certs-615980) DBG | unable to find current IP address of domain embed-certs-615980 in network mk-embed-certs-615980
	I0116 03:44:12.025548  507257 main.go:141] libmachine: (embed-certs-615980) DBG | I0116 03:44:12.025433  508731 retry.go:31] will retry after 328.680313ms: waiting for machine to come up
	I0116 03:44:12.355989  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:12.356557  507257 main.go:141] libmachine: (embed-certs-615980) DBG | unable to find current IP address of domain embed-certs-615980 in network mk-embed-certs-615980
	I0116 03:44:12.356582  507257 main.go:141] libmachine: (embed-certs-615980) DBG | I0116 03:44:12.356493  508731 retry.go:31] will retry after 598.194786ms: waiting for machine to come up
	I0116 03:44:12.956170  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:12.956754  507257 main.go:141] libmachine: (embed-certs-615980) DBG | unable to find current IP address of domain embed-certs-615980 in network mk-embed-certs-615980
	I0116 03:44:12.956782  507257 main.go:141] libmachine: (embed-certs-615980) DBG | I0116 03:44:12.956673  508731 retry.go:31] will retry after 713.891978ms: waiting for machine to come up
	I0116 03:44:13.672728  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:13.673741  507257 main.go:141] libmachine: (embed-certs-615980) DBG | unable to find current IP address of domain embed-certs-615980 in network mk-embed-certs-615980
	I0116 03:44:13.673772  507257 main.go:141] libmachine: (embed-certs-615980) DBG | I0116 03:44:13.673636  508731 retry.go:31] will retry after 789.579297ms: waiting for machine to come up
	I0116 03:44:14.464913  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:14.465532  507257 main.go:141] libmachine: (embed-certs-615980) DBG | unable to find current IP address of domain embed-certs-615980 in network mk-embed-certs-615980
	I0116 03:44:14.465567  507257 main.go:141] libmachine: (embed-certs-615980) DBG | I0116 03:44:14.465446  508731 retry.go:31] will retry after 744.319122ms: waiting for machine to come up
	I0116 03:44:15.211748  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:15.212356  507257 main.go:141] libmachine: (embed-certs-615980) DBG | unable to find current IP address of domain embed-certs-615980 in network mk-embed-certs-615980
	I0116 03:44:15.212389  507257 main.go:141] libmachine: (embed-certs-615980) DBG | I0116 03:44:15.212282  508731 retry.go:31] will retry after 1.231175582s: waiting for machine to come up
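
The retry loop above is the kvm2 driver polling libvirt for the embed-certs-615980 guest's DHCP lease. On the host the same information can be read directly, assuming the standard libvirt client tools:

    # Inspect the lease the retry loop above is waiting for.
    sudo virsh net-dhcp-leases mk-embed-certs-615980
    sudo virsh domifaddr embed-certs-615980 --source lease
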
	I0116 03:44:12.652092  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetIP
	I0116 03:44:12.655815  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:12.656308  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ea:d5", ip: ""} in network mk-default-k8s-diff-port-434445: {Iface:virbr1 ExpiryTime:2024-01-16 04:44:01 +0000 UTC Type:0 Mac:52:54:00:78:ea:d5 Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:default-k8s-diff-port-434445 Clientid:01:52:54:00:78:ea:d5}
	I0116 03:44:12.656383  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined IP address 192.168.50.236 and MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:12.656790  507889 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0116 03:44:12.661880  507889 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 03:44:12.677695  507889 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0116 03:44:12.677794  507889 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 03:44:12.731676  507889 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0116 03:44:12.731794  507889 ssh_runner.go:195] Run: which lz4
	I0116 03:44:12.736614  507889 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0116 03:44:12.741554  507889 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0116 03:44:12.741595  507889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0116 03:44:15.047223  507889 crio.go:444] Took 2.310653 seconds to copy over tarball
	I0116 03:44:15.047386  507889 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0116 03:44:15.614559  507510 api_server.go:269] stopped: https://192.168.61.167:8443/healthz: Get "https://192.168.61.167:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0116 03:44:15.614617  507510 api_server.go:253] Checking apiserver healthz at https://192.168.61.167:8443/healthz ...
	I0116 03:44:16.992197  507510 api_server.go:279] https://192.168.61.167:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0116 03:44:16.992236  507510 api_server.go:103] status: https://192.168.61.167:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0116 03:44:16.992255  507510 api_server.go:253] Checking apiserver healthz at https://192.168.61.167:8443/healthz ...
	I0116 03:44:17.098327  507510 api_server.go:279] https://192.168.61.167:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0116 03:44:17.098365  507510 api_server.go:103] status: https://192.168.61.167:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0116 03:44:17.113518  507510 api_server.go:253] Checking apiserver healthz at https://192.168.61.167:8443/healthz ...
	I0116 03:44:17.133276  507510 api_server.go:279] https://192.168.61.167:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	W0116 03:44:17.133308  507510 api_server.go:103] status: https://192.168.61.167:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	I0116 03:44:17.613843  507510 api_server.go:253] Checking apiserver healthz at https://192.168.61.167:8443/healthz ...
	I0116 03:44:17.621074  507510 api_server.go:279] https://192.168.61.167:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0116 03:44:17.621131  507510 api_server.go:103] status: https://192.168.61.167:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0116 03:44:18.113648  507510 api_server.go:253] Checking apiserver healthz at https://192.168.61.167:8443/healthz ...
	I0116 03:44:18.936452  507510 api_server.go:279] https://192.168.61.167:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0116 03:44:18.936492  507510 api_server.go:103] status: https://192.168.61.167:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0116 03:44:18.936521  507510 api_server.go:253] Checking apiserver healthz at https://192.168.61.167:8443/healthz ...
	I0116 03:44:19.466220  507510 api_server.go:279] https://192.168.61.167:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0116 03:44:19.466259  507510 api_server.go:103] status: https://192.168.61.167:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0116 03:44:19.466278  507510 api_server.go:253] Checking apiserver healthz at https://192.168.61.167:8443/healthz ...
	I0116 03:44:16.750170  507339 pod_ready.go:102] pod "etcd-no-preload-666547" in "kube-system" namespace has status "Ready":"False"
	I0116 03:44:19.438168  507339 pod_ready.go:92] pod "etcd-no-preload-666547" in "kube-system" namespace has status "Ready":"True"
	I0116 03:44:19.438207  507339 pod_ready.go:81] duration metric: took 4.697789344s waiting for pod "etcd-no-preload-666547" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:19.438224  507339 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-666547" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:19.445845  507339 pod_ready.go:92] pod "kube-apiserver-no-preload-666547" in "kube-system" namespace has status "Ready":"True"
	I0116 03:44:19.445875  507339 pod_ready.go:81] duration metric: took 7.641191ms waiting for pod "kube-apiserver-no-preload-666547" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:19.445889  507339 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-666547" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:19.452468  507339 pod_ready.go:92] pod "kube-controller-manager-no-preload-666547" in "kube-system" namespace has status "Ready":"True"
	I0116 03:44:19.452491  507339 pod_ready.go:81] duration metric: took 6.593311ms waiting for pod "kube-controller-manager-no-preload-666547" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:19.452500  507339 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-dcmrn" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:19.459542  507339 pod_ready.go:92] pod "kube-proxy-dcmrn" in "kube-system" namespace has status "Ready":"True"
	I0116 03:44:19.459576  507339 pod_ready.go:81] duration metric: took 7.067817ms waiting for pod "kube-proxy-dcmrn" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:19.459591  507339 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-666547" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:19.966827  507339 pod_ready.go:92] pod "kube-scheduler-no-preload-666547" in "kube-system" namespace has status "Ready":"True"
	I0116 03:44:19.966867  507339 pod_ready.go:81] duration metric: took 507.26823ms waiting for pod "kube-scheduler-no-preload-666547" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:19.966878  507339 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:19.946145  507510 api_server.go:279] https://192.168.61.167:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0116 03:44:19.946209  507510 api_server.go:103] status: https://192.168.61.167:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0116 03:44:19.946230  507510 api_server.go:253] Checking apiserver healthz at https://192.168.61.167:8443/healthz ...
	I0116 03:44:20.259035  507510 api_server.go:279] https://192.168.61.167:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0116 03:44:20.259091  507510 api_server.go:103] status: https://192.168.61.167:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0116 03:44:20.259142  507510 api_server.go:253] Checking apiserver healthz at https://192.168.61.167:8443/healthz ...
	I0116 03:44:20.330196  507510 api_server.go:279] https://192.168.61.167:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0116 03:44:20.330237  507510 api_server.go:103] status: https://192.168.61.167:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0116 03:44:20.613624  507510 api_server.go:253] Checking apiserver healthz at https://192.168.61.167:8443/healthz ...
	I0116 03:44:20.621956  507510 api_server.go:279] https://192.168.61.167:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0116 03:44:20.622008  507510 api_server.go:103] status: https://192.168.61.167:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0116 03:44:21.113536  507510 api_server.go:253] Checking apiserver healthz at https://192.168.61.167:8443/healthz ...
	I0116 03:44:21.125326  507510 api_server.go:279] https://192.168.61.167:8443/healthz returned 200:
	ok
	I0116 03:44:21.137555  507510 api_server.go:141] control plane version: v1.16.0
	I0116 03:44:21.137602  507510 api_server.go:131] duration metric: took 10.524468396s to wait for apiserver health ...
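
The 403, then 500, then 200 progression above is the normal startup sequence for this (v1.16.0) apiserver: anonymous /healthz is rejected until the RBAC bootstrap roles exist, then individual post-start hooks flip from failed to ok until the endpoint returns 200. Once a kubeconfig is available, the same verbose view can be fetched manually:

    # Same verbose health view the poller above is parsing
    # (point kubectl at the cluster being started; its context name is not shown in this excerpt).
    kubectl get --raw='/healthz?verbose'
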
	I0116 03:44:21.137616  507510 cni.go:84] Creating CNI manager for ""
	I0116 03:44:21.137625  507510 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 03:44:21.139682  507510 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0116 03:44:16.445685  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:16.446216  507257 main.go:141] libmachine: (embed-certs-615980) DBG | unable to find current IP address of domain embed-certs-615980 in network mk-embed-certs-615980
	I0116 03:44:16.446246  507257 main.go:141] libmachine: (embed-certs-615980) DBG | I0116 03:44:16.446137  508731 retry.go:31] will retry after 1.400972s: waiting for machine to come up
	I0116 03:44:17.848447  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:17.848964  507257 main.go:141] libmachine: (embed-certs-615980) DBG | unable to find current IP address of domain embed-certs-615980 in network mk-embed-certs-615980
	I0116 03:44:17.848991  507257 main.go:141] libmachine: (embed-certs-615980) DBG | I0116 03:44:17.848916  508731 retry.go:31] will retry after 2.293115324s: waiting for machine to come up
	I0116 03:44:20.145242  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:20.145899  507257 main.go:141] libmachine: (embed-certs-615980) DBG | unable to find current IP address of domain embed-certs-615980 in network mk-embed-certs-615980
	I0116 03:44:20.145933  507257 main.go:141] libmachine: (embed-certs-615980) DBG | I0116 03:44:20.145842  508731 retry.go:31] will retry after 2.158183619s: waiting for machine to come up
	I0116 03:44:18.744370  507889 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.696918616s)
	I0116 03:44:18.744426  507889 crio.go:451] Took 3.697118 seconds to extract the tarball
	I0116 03:44:18.744440  507889 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0116 03:44:18.792685  507889 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 03:44:18.868262  507889 crio.go:496] all images are preloaded for cri-o runtime.
	I0116 03:44:18.868291  507889 cache_images.go:84] Images are preloaded, skipping loading
	I0116 03:44:18.868382  507889 ssh_runner.go:195] Run: crio config
	I0116 03:44:18.954026  507889 cni.go:84] Creating CNI manager for ""
	I0116 03:44:18.954060  507889 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 03:44:18.954085  507889 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0116 03:44:18.954138  507889 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.236 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-434445 NodeName:default-k8s-diff-port-434445 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.236"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.236 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0116 03:44:18.954362  507889 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.236
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-434445"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.236
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.236"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0116 03:44:18.954483  507889 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-434445 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.236
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-434445 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0116 03:44:18.954557  507889 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0116 03:44:18.966046  507889 binaries.go:44] Found k8s binaries, skipping transfer
	I0116 03:44:18.966143  507889 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0116 03:44:18.977441  507889 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (388 bytes)
	I0116 03:44:18.997304  507889 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0116 03:44:19.016597  507889 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2115 bytes)
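
The staged /var/tmp/minikube/kubeadm.yaml.new above is the config reproduced in full earlier in this block. If needed, it can be sanity-checked inside the guest without mutating node state using kubeadm's dry-run mode (preflight warnings may still be printed):

    # Optional dry-run validation of the staged kubeadm config (run inside the guest).
    sudo /var/lib/minikube/binaries/v1.28.4/kubeadm init \
      --config /var/tmp/minikube/kubeadm.yaml.new --dry-run
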
	I0116 03:44:19.035635  507889 ssh_runner.go:195] Run: grep 192.168.50.236	control-plane.minikube.internal$ /etc/hosts
	I0116 03:44:19.039882  507889 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.236	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 03:44:19.053342  507889 certs.go:56] Setting up /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/default-k8s-diff-port-434445 for IP: 192.168.50.236
	I0116 03:44:19.053383  507889 certs.go:190] acquiring lock for shared ca certs: {Name:mkab5aeea3e0ed69f3592dc8f418e07c6075b130 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:44:19.053580  507889 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17965-468241/.minikube/ca.key
	I0116 03:44:19.053655  507889 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17965-468241/.minikube/proxy-client-ca.key
	I0116 03:44:19.053773  507889 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/default-k8s-diff-port-434445/client.key
	I0116 03:44:19.053920  507889 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/default-k8s-diff-port-434445/apiserver.key.4e4dee8d
	I0116 03:44:19.053994  507889 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/default-k8s-diff-port-434445/proxy-client.key
	I0116 03:44:19.054154  507889 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/475478.pem (1338 bytes)
	W0116 03:44:19.054198  507889 certs.go:433] ignoring /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/475478_empty.pem, impossibly tiny 0 bytes
	I0116 03:44:19.054215  507889 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca-key.pem (1679 bytes)
	I0116 03:44:19.054249  507889 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem (1078 bytes)
	I0116 03:44:19.054286  507889 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/cert.pem (1123 bytes)
	I0116 03:44:19.054318  507889 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/key.pem (1679 bytes)
	I0116 03:44:19.054373  507889 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17965-468241/.minikube/files/etc/ssl/certs/4754782.pem (1708 bytes)
	I0116 03:44:19.055259  507889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/default-k8s-diff-port-434445/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0116 03:44:19.086636  507889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/default-k8s-diff-port-434445/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0116 03:44:19.117759  507889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/default-k8s-diff-port-434445/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0116 03:44:19.144530  507889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/default-k8s-diff-port-434445/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0116 03:44:19.170423  507889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0116 03:44:19.198224  507889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0116 03:44:19.223514  507889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0116 03:44:19.250858  507889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0116 03:44:19.276922  507889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/files/etc/ssl/certs/4754782.pem --> /usr/share/ca-certificates/4754782.pem (1708 bytes)
	I0116 03:44:19.302621  507889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0116 03:44:19.330021  507889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/certs/475478.pem --> /usr/share/ca-certificates/475478.pem (1338 bytes)
	I0116 03:44:19.358108  507889 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0116 03:44:19.379126  507889 ssh_runner.go:195] Run: openssl version
	I0116 03:44:19.386675  507889 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0116 03:44:19.398759  507889 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0116 03:44:19.404201  507889 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 16 02:35 /usr/share/ca-certificates/minikubeCA.pem
	I0116 03:44:19.404283  507889 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0116 03:44:19.411067  507889 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0116 03:44:19.422608  507889 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/475478.pem && ln -fs /usr/share/ca-certificates/475478.pem /etc/ssl/certs/475478.pem"
	I0116 03:44:19.434422  507889 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/475478.pem
	I0116 03:44:19.440018  507889 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 16 02:44 /usr/share/ca-certificates/475478.pem
	I0116 03:44:19.440103  507889 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/475478.pem
	I0116 03:44:19.446469  507889 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/475478.pem /etc/ssl/certs/51391683.0"
	I0116 03:44:19.460130  507889 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4754782.pem && ln -fs /usr/share/ca-certificates/4754782.pem /etc/ssl/certs/4754782.pem"
	I0116 03:44:19.473886  507889 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4754782.pem
	I0116 03:44:19.478781  507889 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 16 02:44 /usr/share/ca-certificates/4754782.pem
	I0116 03:44:19.478858  507889 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4754782.pem
	I0116 03:44:19.484826  507889 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4754782.pem /etc/ssl/certs/3ec20f2e.0"
	I0116 03:44:19.495710  507889 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0116 03:44:19.500842  507889 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0116 03:44:19.507646  507889 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0116 03:44:19.515247  507889 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0116 03:44:19.523964  507889 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0116 03:44:19.532379  507889 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0116 03:44:19.540067  507889 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0116 03:44:19.548614  507889 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-434445 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-434445 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.50.236 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 03:44:19.548812  507889 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0116 03:44:19.548900  507889 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0116 03:44:19.595803  507889 cri.go:89] found id: ""
	I0116 03:44:19.595910  507889 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0116 03:44:19.610615  507889 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0116 03:44:19.610647  507889 kubeadm.go:636] restartCluster start
	I0116 03:44:19.610726  507889 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0116 03:44:19.624175  507889 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:19.625683  507889 kubeconfig.go:92] found "default-k8s-diff-port-434445" server: "https://192.168.50.236:8444"
	I0116 03:44:19.628685  507889 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0116 03:44:19.640309  507889 api_server.go:166] Checking apiserver status ...
	I0116 03:44:19.640390  507889 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:19.653938  507889 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:20.141193  507889 api_server.go:166] Checking apiserver status ...
	I0116 03:44:20.141285  507889 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:20.154331  507889 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:20.640562  507889 api_server.go:166] Checking apiserver status ...
	I0116 03:44:20.640691  507889 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:20.657774  507889 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:21.141268  507889 api_server.go:166] Checking apiserver status ...
	I0116 03:44:21.141371  507889 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:21.158792  507889 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:21.141315  507510 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0116 03:44:21.168450  507510 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0116 03:44:21.206907  507510 system_pods.go:43] waiting for kube-system pods to appear ...
	I0116 03:44:21.222998  507510 system_pods.go:59] 7 kube-system pods found
	I0116 03:44:21.223072  507510 system_pods.go:61] "coredns-5644d7b6d9-7q4wc" [003ba660-e3c5-4a98-be67-75e43dc32b37] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0116 03:44:21.223084  507510 system_pods.go:61] "etcd-old-k8s-version-696770" [b029f446-15b1-4720-af3a-b651b778fc0d] Running
	I0116 03:44:21.223094  507510 system_pods.go:61] "kube-apiserver-old-k8s-version-696770" [a9597e33-db8c-48e5-b119-d6d97d8d8e3f] Running
	I0116 03:44:21.223114  507510 system_pods.go:61] "kube-controller-manager-old-k8s-version-696770" [901fd518-04a1-4de0-baa2-08c7d57a587d] Running
	I0116 03:44:21.223123  507510 system_pods.go:61] "kube-proxy-9pfdj" [ac00ed93-abe8-4f53-8e63-fa63589fbf5c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0116 03:44:21.223134  507510 system_pods.go:61] "kube-scheduler-old-k8s-version-696770" [a8d74e76-6c22-4d82-b954-4025dff18279] Running
	I0116 03:44:21.223146  507510 system_pods.go:61] "storage-provisioner" [b04dacf9-8137-4f22-ae36-147d08fd9b60] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0116 03:44:21.223158  507510 system_pods.go:74] duration metric: took 16.220665ms to wait for pod list to return data ...
	I0116 03:44:21.223173  507510 node_conditions.go:102] verifying NodePressure condition ...
	I0116 03:44:21.228670  507510 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 03:44:21.228715  507510 node_conditions.go:123] node cpu capacity is 2
	I0116 03:44:21.228734  507510 node_conditions.go:105] duration metric: took 5.552228ms to run NodePressure ...
	I0116 03:44:21.228760  507510 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:44:21.576565  507510 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0116 03:44:21.581017  507510 retry.go:31] will retry after 323.975879ms: kubelet not initialised
	I0116 03:44:21.914790  507510 retry.go:31] will retry after 258.393503ms: kubelet not initialised
	I0116 03:44:22.180592  507510 retry.go:31] will retry after 582.791922ms: kubelet not initialised
	I0116 03:44:22.769880  507510 retry.go:31] will retry after 961.779974ms: kubelet not initialised
	I0116 03:44:23.739015  507510 retry.go:31] will retry after 686.353156ms: kubelet not initialised
	I0116 03:44:24.431951  507510 retry.go:31] will retry after 2.073440094s: kubelet not initialised
	I0116 03:44:21.976301  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:44:23.977710  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:44:22.305212  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:22.305701  507257 main.go:141] libmachine: (embed-certs-615980) DBG | unable to find current IP address of domain embed-certs-615980 in network mk-embed-certs-615980
	I0116 03:44:22.305732  507257 main.go:141] libmachine: (embed-certs-615980) DBG | I0116 03:44:22.305662  508731 retry.go:31] will retry after 3.080436267s: waiting for machine to come up
	I0116 03:44:25.389414  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:25.389850  507257 main.go:141] libmachine: (embed-certs-615980) DBG | unable to find current IP address of domain embed-certs-615980 in network mk-embed-certs-615980
	I0116 03:44:25.389875  507257 main.go:141] libmachine: (embed-certs-615980) DBG | I0116 03:44:25.389828  508731 retry.go:31] will retry after 2.730339967s: waiting for machine to come up
	I0116 03:44:21.640823  507889 api_server.go:166] Checking apiserver status ...
	I0116 03:44:21.641083  507889 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:21.656391  507889 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:22.141134  507889 api_server.go:166] Checking apiserver status ...
	I0116 03:44:22.141242  507889 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:22.157848  507889 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:22.641247  507889 api_server.go:166] Checking apiserver status ...
	I0116 03:44:22.641371  507889 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:22.654425  507889 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:23.140719  507889 api_server.go:166] Checking apiserver status ...
	I0116 03:44:23.140827  507889 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:23.153823  507889 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:23.641193  507889 api_server.go:166] Checking apiserver status ...
	I0116 03:44:23.641298  507889 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:23.654061  507889 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:24.141196  507889 api_server.go:166] Checking apiserver status ...
	I0116 03:44:24.141290  507889 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:24.161415  507889 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:24.640416  507889 api_server.go:166] Checking apiserver status ...
	I0116 03:44:24.640514  507889 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:24.670258  507889 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:25.140571  507889 api_server.go:166] Checking apiserver status ...
	I0116 03:44:25.140673  507889 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:25.157823  507889 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:25.641188  507889 api_server.go:166] Checking apiserver status ...
	I0116 03:44:25.641284  507889 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:25.655917  507889 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:26.141241  507889 api_server.go:166] Checking apiserver status ...
	I0116 03:44:26.141357  507889 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:26.157447  507889 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:26.511961  507510 retry.go:31] will retry after 4.006598367s: kubelet not initialised
	I0116 03:44:26.473653  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:44:28.474914  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:44:28.122340  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:28.122704  507257 main.go:141] libmachine: (embed-certs-615980) DBG | unable to find current IP address of domain embed-certs-615980 in network mk-embed-certs-615980
	I0116 03:44:28.122735  507257 main.go:141] libmachine: (embed-certs-615980) DBG | I0116 03:44:28.122676  508731 retry.go:31] will retry after 4.170800657s: waiting for machine to come up
	I0116 03:44:26.641408  507889 api_server.go:166] Checking apiserver status ...
	I0116 03:44:26.641510  507889 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:26.654505  507889 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:27.141033  507889 api_server.go:166] Checking apiserver status ...
	I0116 03:44:27.141129  507889 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:27.154208  507889 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:27.640701  507889 api_server.go:166] Checking apiserver status ...
	I0116 03:44:27.640785  507889 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:27.653964  507889 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:28.141330  507889 api_server.go:166] Checking apiserver status ...
	I0116 03:44:28.141406  507889 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:28.153419  507889 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:28.640986  507889 api_server.go:166] Checking apiserver status ...
	I0116 03:44:28.641076  507889 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:28.654357  507889 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:29.141250  507889 api_server.go:166] Checking apiserver status ...
	I0116 03:44:29.141335  507889 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:29.154899  507889 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:29.640619  507889 api_server.go:166] Checking apiserver status ...
	I0116 03:44:29.640717  507889 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:29.654653  507889 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:29.654692  507889 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0116 03:44:29.654701  507889 kubeadm.go:1135] stopping kube-system containers ...
	I0116 03:44:29.654713  507889 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0116 03:44:29.654769  507889 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0116 03:44:29.697617  507889 cri.go:89] found id: ""
	I0116 03:44:29.697719  507889 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0116 03:44:29.719069  507889 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0116 03:44:29.735791  507889 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0116 03:44:29.735872  507889 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0116 03:44:29.748788  507889 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0116 03:44:29.748823  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:44:29.874894  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:44:30.787232  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:44:31.009234  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:44:31.136220  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:44:31.215330  507889 api_server.go:52] waiting for apiserver process to appear ...
	I0116 03:44:31.215416  507889 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:44:30.526372  507510 retry.go:31] will retry after 4.363756335s: kubelet not initialised
	I0116 03:44:32.295936  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:32.296442  507257 main.go:141] libmachine: (embed-certs-615980) Found IP for machine: 192.168.72.159
	I0116 03:44:32.296483  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has current primary IP address 192.168.72.159 and MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:32.296499  507257 main.go:141] libmachine: (embed-certs-615980) Reserving static IP address...
	I0116 03:44:32.297078  507257 main.go:141] libmachine: (embed-certs-615980) DBG | found host DHCP lease matching {name: "embed-certs-615980", mac: "52:54:00:d4:a6:40", ip: "192.168.72.159"} in network mk-embed-certs-615980: {Iface:virbr4 ExpiryTime:2024-01-16 04:44:24 +0000 UTC Type:0 Mac:52:54:00:d4:a6:40 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-615980 Clientid:01:52:54:00:d4:a6:40}
	I0116 03:44:32.297121  507257 main.go:141] libmachine: (embed-certs-615980) Reserved static IP address: 192.168.72.159
	I0116 03:44:32.297140  507257 main.go:141] libmachine: (embed-certs-615980) DBG | skip adding static IP to network mk-embed-certs-615980 - found existing host DHCP lease matching {name: "embed-certs-615980", mac: "52:54:00:d4:a6:40", ip: "192.168.72.159"}
	I0116 03:44:32.297160  507257 main.go:141] libmachine: (embed-certs-615980) DBG | Getting to WaitForSSH function...
	I0116 03:44:32.297179  507257 main.go:141] libmachine: (embed-certs-615980) Waiting for SSH to be available...
	I0116 03:44:32.299440  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:32.299839  507257 main.go:141] libmachine: (embed-certs-615980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:a6:40", ip: ""} in network mk-embed-certs-615980: {Iface:virbr4 ExpiryTime:2024-01-16 04:44:24 +0000 UTC Type:0 Mac:52:54:00:d4:a6:40 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-615980 Clientid:01:52:54:00:d4:a6:40}
	I0116 03:44:32.299870  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined IP address 192.168.72.159 and MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:32.300064  507257 main.go:141] libmachine: (embed-certs-615980) DBG | Using SSH client type: external
	I0116 03:44:32.300098  507257 main.go:141] libmachine: (embed-certs-615980) DBG | Using SSH private key: /home/jenkins/minikube-integration/17965-468241/.minikube/machines/embed-certs-615980/id_rsa (-rw-------)
	I0116 03:44:32.300133  507257 main.go:141] libmachine: (embed-certs-615980) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.159 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17965-468241/.minikube/machines/embed-certs-615980/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0116 03:44:32.300153  507257 main.go:141] libmachine: (embed-certs-615980) DBG | About to run SSH command:
	I0116 03:44:32.300172  507257 main.go:141] libmachine: (embed-certs-615980) DBG | exit 0
	I0116 03:44:32.396718  507257 main.go:141] libmachine: (embed-certs-615980) DBG | SSH cmd err, output: <nil>: 
	I0116 03:44:32.397111  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetConfigRaw
	I0116 03:44:32.397901  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetIP
	I0116 03:44:32.400997  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:32.401502  507257 main.go:141] libmachine: (embed-certs-615980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:a6:40", ip: ""} in network mk-embed-certs-615980: {Iface:virbr4 ExpiryTime:2024-01-16 04:44:24 +0000 UTC Type:0 Mac:52:54:00:d4:a6:40 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-615980 Clientid:01:52:54:00:d4:a6:40}
	I0116 03:44:32.401540  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined IP address 192.168.72.159 and MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:32.402036  507257 profile.go:148] Saving config to /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/embed-certs-615980/config.json ...
	I0116 03:44:32.402259  507257 machine.go:88] provisioning docker machine ...
	I0116 03:44:32.402281  507257 main.go:141] libmachine: (embed-certs-615980) Calling .DriverName
	I0116 03:44:32.402539  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetMachineName
	I0116 03:44:32.402759  507257 buildroot.go:166] provisioning hostname "embed-certs-615980"
	I0116 03:44:32.402786  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetMachineName
	I0116 03:44:32.402966  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHHostname
	I0116 03:44:32.405935  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:32.406344  507257 main.go:141] libmachine: (embed-certs-615980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:a6:40", ip: ""} in network mk-embed-certs-615980: {Iface:virbr4 ExpiryTime:2024-01-16 04:44:24 +0000 UTC Type:0 Mac:52:54:00:d4:a6:40 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-615980 Clientid:01:52:54:00:d4:a6:40}
	I0116 03:44:32.406384  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined IP address 192.168.72.159 and MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:32.406585  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHPort
	I0116 03:44:32.406821  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHKeyPath
	I0116 03:44:32.407054  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHKeyPath
	I0116 03:44:32.407219  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHUsername
	I0116 03:44:32.407399  507257 main.go:141] libmachine: Using SSH client type: native
	I0116 03:44:32.407754  507257 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.72.159 22 <nil> <nil>}
	I0116 03:44:32.407768  507257 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-615980 && echo "embed-certs-615980" | sudo tee /etc/hostname
	I0116 03:44:32.561584  507257 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-615980
	
	I0116 03:44:32.561618  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHHostname
	I0116 03:44:32.564566  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:32.565004  507257 main.go:141] libmachine: (embed-certs-615980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:a6:40", ip: ""} in network mk-embed-certs-615980: {Iface:virbr4 ExpiryTime:2024-01-16 04:44:24 +0000 UTC Type:0 Mac:52:54:00:d4:a6:40 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-615980 Clientid:01:52:54:00:d4:a6:40}
	I0116 03:44:32.565033  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined IP address 192.168.72.159 and MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:32.565232  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHPort
	I0116 03:44:32.565481  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHKeyPath
	I0116 03:44:32.565672  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHKeyPath
	I0116 03:44:32.565843  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHUsername
	I0116 03:44:32.566045  507257 main.go:141] libmachine: Using SSH client type: native
	I0116 03:44:32.566521  507257 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.72.159 22 <nil> <nil>}
	I0116 03:44:32.566549  507257 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-615980' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-615980/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-615980' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0116 03:44:32.718945  507257 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0116 03:44:32.719005  507257 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17965-468241/.minikube CaCertPath:/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17965-468241/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17965-468241/.minikube}
	I0116 03:44:32.719037  507257 buildroot.go:174] setting up certificates
	I0116 03:44:32.719051  507257 provision.go:83] configureAuth start
	I0116 03:44:32.719081  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetMachineName
	I0116 03:44:32.719397  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetIP
	I0116 03:44:32.722474  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:32.722938  507257 main.go:141] libmachine: (embed-certs-615980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:a6:40", ip: ""} in network mk-embed-certs-615980: {Iface:virbr4 ExpiryTime:2024-01-16 04:44:24 +0000 UTC Type:0 Mac:52:54:00:d4:a6:40 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-615980 Clientid:01:52:54:00:d4:a6:40}
	I0116 03:44:32.722972  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined IP address 192.168.72.159 and MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:32.723136  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHHostname
	I0116 03:44:32.725821  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:32.726246  507257 main.go:141] libmachine: (embed-certs-615980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:a6:40", ip: ""} in network mk-embed-certs-615980: {Iface:virbr4 ExpiryTime:2024-01-16 04:44:24 +0000 UTC Type:0 Mac:52:54:00:d4:a6:40 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-615980 Clientid:01:52:54:00:d4:a6:40}
	I0116 03:44:32.726277  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined IP address 192.168.72.159 and MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:32.726448  507257 provision.go:138] copyHostCerts
	I0116 03:44:32.726535  507257 exec_runner.go:144] found /home/jenkins/minikube-integration/17965-468241/.minikube/ca.pem, removing ...
	I0116 03:44:32.726622  507257 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17965-468241/.minikube/ca.pem
	I0116 03:44:32.726769  507257 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17965-468241/.minikube/ca.pem (1078 bytes)
	I0116 03:44:32.726971  507257 exec_runner.go:144] found /home/jenkins/minikube-integration/17965-468241/.minikube/cert.pem, removing ...
	I0116 03:44:32.726983  507257 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17965-468241/.minikube/cert.pem
	I0116 03:44:32.727015  507257 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17965-468241/.minikube/cert.pem (1123 bytes)
	I0116 03:44:32.727099  507257 exec_runner.go:144] found /home/jenkins/minikube-integration/17965-468241/.minikube/key.pem, removing ...
	I0116 03:44:32.727116  507257 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17965-468241/.minikube/key.pem
	I0116 03:44:32.727144  507257 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17965-468241/.minikube/key.pem (1679 bytes)
	I0116 03:44:32.727212  507257 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17965-468241/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca-key.pem org=jenkins.embed-certs-615980 san=[192.168.72.159 192.168.72.159 localhost 127.0.0.1 minikube embed-certs-615980]
	I0116 03:44:32.921694  507257 provision.go:172] copyRemoteCerts
	I0116 03:44:32.921764  507257 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0116 03:44:32.921798  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHHostname
	I0116 03:44:32.924951  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:32.925329  507257 main.go:141] libmachine: (embed-certs-615980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:a6:40", ip: ""} in network mk-embed-certs-615980: {Iface:virbr4 ExpiryTime:2024-01-16 04:44:24 +0000 UTC Type:0 Mac:52:54:00:d4:a6:40 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-615980 Clientid:01:52:54:00:d4:a6:40}
	I0116 03:44:32.925362  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined IP address 192.168.72.159 and MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:32.925534  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHPort
	I0116 03:44:32.925855  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHKeyPath
	I0116 03:44:32.926135  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHUsername
	I0116 03:44:32.926390  507257 sshutil.go:53] new ssh client: &{IP:192.168.72.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/embed-certs-615980/id_rsa Username:docker}
	I0116 03:44:33.025856  507257 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0116 03:44:33.055403  507257 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0116 03:44:33.087908  507257 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0116 03:44:33.116847  507257 provision.go:86] duration metric: configureAuth took 397.777297ms
	I0116 03:44:33.116886  507257 buildroot.go:189] setting minikube options for container-runtime
	I0116 03:44:33.117136  507257 config.go:182] Loaded profile config "embed-certs-615980": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 03:44:33.117267  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHHostname
	I0116 03:44:33.120452  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:33.120915  507257 main.go:141] libmachine: (embed-certs-615980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:a6:40", ip: ""} in network mk-embed-certs-615980: {Iface:virbr4 ExpiryTime:2024-01-16 04:44:24 +0000 UTC Type:0 Mac:52:54:00:d4:a6:40 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-615980 Clientid:01:52:54:00:d4:a6:40}
	I0116 03:44:33.120949  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined IP address 192.168.72.159 and MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:33.121189  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHPort
	I0116 03:44:33.121442  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHKeyPath
	I0116 03:44:33.121636  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHKeyPath
	I0116 03:44:33.121778  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHUsername
	I0116 03:44:33.121966  507257 main.go:141] libmachine: Using SSH client type: native
	I0116 03:44:33.122333  507257 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.72.159 22 <nil> <nil>}
	I0116 03:44:33.122359  507257 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0116 03:44:33.486009  507257 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0116 03:44:33.486147  507257 machine.go:91] provisioned docker machine in 1.083869863s
	I0116 03:44:33.486202  507257 start.go:300] post-start starting for "embed-certs-615980" (driver="kvm2")
	I0116 03:44:33.486239  507257 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0116 03:44:33.486282  507257 main.go:141] libmachine: (embed-certs-615980) Calling .DriverName
	I0116 03:44:33.486719  507257 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0116 03:44:33.486755  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHHostname
	I0116 03:44:33.490226  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:33.490676  507257 main.go:141] libmachine: (embed-certs-615980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:a6:40", ip: ""} in network mk-embed-certs-615980: {Iface:virbr4 ExpiryTime:2024-01-16 04:44:24 +0000 UTC Type:0 Mac:52:54:00:d4:a6:40 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-615980 Clientid:01:52:54:00:d4:a6:40}
	I0116 03:44:33.490743  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined IP address 192.168.72.159 and MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:33.490863  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHPort
	I0116 03:44:33.491117  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHKeyPath
	I0116 03:44:33.491299  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHUsername
	I0116 03:44:33.491478  507257 sshutil.go:53] new ssh client: &{IP:192.168.72.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/embed-certs-615980/id_rsa Username:docker}
	I0116 03:44:33.590039  507257 ssh_runner.go:195] Run: cat /etc/os-release
	I0116 03:44:33.596095  507257 info.go:137] Remote host: Buildroot 2021.02.12
	I0116 03:44:33.596124  507257 filesync.go:126] Scanning /home/jenkins/minikube-integration/17965-468241/.minikube/addons for local assets ...
	I0116 03:44:33.596206  507257 filesync.go:126] Scanning /home/jenkins/minikube-integration/17965-468241/.minikube/files for local assets ...
	I0116 03:44:33.596295  507257 filesync.go:149] local asset: /home/jenkins/minikube-integration/17965-468241/.minikube/files/etc/ssl/certs/4754782.pem -> 4754782.pem in /etc/ssl/certs
	I0116 03:44:33.596437  507257 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0116 03:44:33.609260  507257 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/files/etc/ssl/certs/4754782.pem --> /etc/ssl/certs/4754782.pem (1708 bytes)
	I0116 03:44:33.642578  507257 start.go:303] post-start completed in 156.336318ms
	I0116 03:44:33.642651  507257 fix.go:56] fixHost completed within 22.644969219s
	I0116 03:44:33.642703  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHHostname
	I0116 03:44:33.645616  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:33.645988  507257 main.go:141] libmachine: (embed-certs-615980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:a6:40", ip: ""} in network mk-embed-certs-615980: {Iface:virbr4 ExpiryTime:2024-01-16 04:44:24 +0000 UTC Type:0 Mac:52:54:00:d4:a6:40 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-615980 Clientid:01:52:54:00:d4:a6:40}
	I0116 03:44:33.646017  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined IP address 192.168.72.159 and MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:33.646277  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHPort
	I0116 03:44:33.646514  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHKeyPath
	I0116 03:44:33.646720  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHKeyPath
	I0116 03:44:33.646910  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHUsername
	I0116 03:44:33.647179  507257 main.go:141] libmachine: Using SSH client type: native
	I0116 03:44:33.647655  507257 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.72.159 22 <nil> <nil>}
	I0116 03:44:33.647682  507257 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0116 03:44:33.781805  507257 main.go:141] libmachine: SSH cmd err, output: <nil>: 1705376673.706960834
	
	I0116 03:44:33.781839  507257 fix.go:206] guest clock: 1705376673.706960834
	I0116 03:44:33.781850  507257 fix.go:219] Guest: 2024-01-16 03:44:33.706960834 +0000 UTC Remote: 2024-01-16 03:44:33.642657737 +0000 UTC m=+367.429386706 (delta=64.303097ms)
	I0116 03:44:33.781879  507257 fix.go:190] guest clock delta is within tolerance: 64.303097ms
	I0116 03:44:33.781890  507257 start.go:83] releasing machines lock for "embed-certs-615980", held for 22.784266536s
	I0116 03:44:33.781917  507257 main.go:141] libmachine: (embed-certs-615980) Calling .DriverName
	I0116 03:44:33.782225  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetIP
	I0116 03:44:33.785113  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:33.785495  507257 main.go:141] libmachine: (embed-certs-615980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:a6:40", ip: ""} in network mk-embed-certs-615980: {Iface:virbr4 ExpiryTime:2024-01-16 04:44:24 +0000 UTC Type:0 Mac:52:54:00:d4:a6:40 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-615980 Clientid:01:52:54:00:d4:a6:40}
	I0116 03:44:33.785530  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined IP address 192.168.72.159 and MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:33.785718  507257 main.go:141] libmachine: (embed-certs-615980) Calling .DriverName
	I0116 03:44:33.786427  507257 main.go:141] libmachine: (embed-certs-615980) Calling .DriverName
	I0116 03:44:33.786655  507257 main.go:141] libmachine: (embed-certs-615980) Calling .DriverName
	I0116 03:44:33.786751  507257 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0116 03:44:33.786799  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHHostname
	I0116 03:44:33.786938  507257 ssh_runner.go:195] Run: cat /version.json
	I0116 03:44:33.786967  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHHostname
	I0116 03:44:33.790084  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:33.790288  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:33.790454  507257 main.go:141] libmachine: (embed-certs-615980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:a6:40", ip: ""} in network mk-embed-certs-615980: {Iface:virbr4 ExpiryTime:2024-01-16 04:44:24 +0000 UTC Type:0 Mac:52:54:00:d4:a6:40 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-615980 Clientid:01:52:54:00:d4:a6:40}
	I0116 03:44:33.790485  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined IP address 192.168.72.159 and MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:33.790655  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHPort
	I0116 03:44:33.790787  507257 main.go:141] libmachine: (embed-certs-615980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:a6:40", ip: ""} in network mk-embed-certs-615980: {Iface:virbr4 ExpiryTime:2024-01-16 04:44:24 +0000 UTC Type:0 Mac:52:54:00:d4:a6:40 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-615980 Clientid:01:52:54:00:d4:a6:40}
	I0116 03:44:33.790831  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined IP address 192.168.72.159 and MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:33.790899  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHKeyPath
	I0116 03:44:33.791007  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHPort
	I0116 03:44:33.791091  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHUsername
	I0116 03:44:33.791193  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHKeyPath
	I0116 03:44:33.791269  507257 sshutil.go:53] new ssh client: &{IP:192.168.72.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/embed-certs-615980/id_rsa Username:docker}
	I0116 03:44:33.791363  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHUsername
	I0116 03:44:33.791515  507257 sshutil.go:53] new ssh client: &{IP:192.168.72.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/embed-certs-615980/id_rsa Username:docker}
	I0116 03:44:33.907036  507257 ssh_runner.go:195] Run: systemctl --version
	I0116 03:44:33.913776  507257 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0116 03:44:34.062888  507257 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0116 03:44:34.070435  507257 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0116 03:44:34.070539  507257 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0116 03:44:34.091957  507257 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0116 03:44:34.091993  507257 start.go:475] detecting cgroup driver to use...
	I0116 03:44:34.092099  507257 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0116 03:44:34.108007  507257 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0116 03:44:34.123223  507257 docker.go:217] disabling cri-docker service (if available) ...
	I0116 03:44:34.123314  507257 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0116 03:44:34.141242  507257 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0116 03:44:34.157053  507257 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0116 03:44:34.274186  507257 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0116 03:44:34.427694  507257 docker.go:233] disabling docker service ...
	I0116 03:44:34.427785  507257 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0116 03:44:34.442789  507257 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0116 03:44:34.459761  507257 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0116 03:44:34.592453  507257 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0116 03:44:34.715991  507257 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0116 03:44:34.732175  507257 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0116 03:44:34.751885  507257 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0116 03:44:34.751989  507257 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 03:44:34.763769  507257 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0116 03:44:34.763853  507257 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 03:44:34.774444  507257 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 03:44:34.784975  507257 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 03:44:34.797634  507257 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0116 03:44:34.810962  507257 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0116 03:44:34.822224  507257 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0116 03:44:34.822314  507257 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0116 03:44:34.840500  507257 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0116 03:44:34.852285  507257 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0116 03:44:34.970828  507257 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0116 03:44:35.163097  507257 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0116 03:44:35.163242  507257 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0116 03:44:35.169041  507257 start.go:543] Will wait 60s for crictl version
	I0116 03:44:35.169150  507257 ssh_runner.go:195] Run: which crictl
	I0116 03:44:35.173367  507257 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0116 03:44:35.224951  507257 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
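After restarting CRI-O, start.go waits up to 60s for /var/run/crio/crio.sock to appear before asking crictl for its version. A minimal sketch of that wait-for-socket-path pattern using only the standard library (hypothetical waitForPath helper; the original runs stat over ssh_runner):

    // waitsock.go: hypothetical sketch of polling for a socket path with a timeout.
    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForPath polls until the path exists or the timeout elapses.
    func waitForPath(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if _, err := os.Stat(path); err == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("timed out waiting for %s", path)
    }

    func main() {
        if err := waitForPath("/var/run/crio/crio.sock", 60*time.Second); err != nil {
            fmt.Println(err)
        }
    }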
	I0116 03:44:35.225043  507257 ssh_runner.go:195] Run: crio --version
	I0116 03:44:35.275230  507257 ssh_runner.go:195] Run: crio --version
	I0116 03:44:35.329852  507257 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0116 03:44:30.981714  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:44:33.476735  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:44:35.480715  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:44:35.331327  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetIP
	I0116 03:44:35.334148  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:35.334618  507257 main.go:141] libmachine: (embed-certs-615980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:a6:40", ip: ""} in network mk-embed-certs-615980: {Iface:virbr4 ExpiryTime:2024-01-16 04:44:24 +0000 UTC Type:0 Mac:52:54:00:d4:a6:40 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-615980 Clientid:01:52:54:00:d4:a6:40}
	I0116 03:44:35.334674  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined IP address 192.168.72.159 and MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:44:35.335166  507257 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0116 03:44:35.341389  507257 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
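The bash one-liner above drops any stale host.minikube.internal record from /etc/hosts and appends the gateway IP for this network. A local Go sketch of the same filter-and-append rewrite (hypothetical setHostRecord helper; the real command runs via sudo over ssh and writes through a temp file):

    // hosts.go: hypothetical sketch of replacing a single /etc/hosts record.
    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func setHostRecord(path, ip, name string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            // Drop any existing entry ending in "<tab><name>".
            if !strings.HasSuffix(line, "\t"+name) {
                kept = append(kept, line)
            }
        }
        kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
        return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
        if err := setHostRecord("/etc/hosts", "192.168.72.1", "host.minikube.internal"); err != nil {
            fmt.Println(err)
        }
    }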
	I0116 03:44:35.358757  507257 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0116 03:44:35.358866  507257 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 03:44:35.407869  507257 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0116 03:44:35.407983  507257 ssh_runner.go:195] Run: which lz4
	I0116 03:44:35.412533  507257 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0116 03:44:35.417266  507257 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0116 03:44:35.417303  507257 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0116 03:44:31.715897  507889 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:44:32.215978  507889 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:44:32.716439  507889 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:44:33.215609  507889 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:44:33.715785  507889 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:44:33.738611  507889 api_server.go:72] duration metric: took 2.523281585s to wait for apiserver process to appear ...
	I0116 03:44:33.738642  507889 api_server.go:88] waiting for apiserver healthz status ...
	I0116 03:44:33.738663  507889 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8444/healthz ...
	I0116 03:44:37.601011  507889 api_server.go:279] https://192.168.50.236:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0116 03:44:37.601052  507889 api_server.go:103] status: https://192.168.50.236:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0116 03:44:37.601072  507889 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8444/healthz ...
	I0116 03:44:37.678390  507889 api_server.go:279] https://192.168.50.236:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0116 03:44:37.678428  507889 api_server.go:103] status: https://192.168.50.236:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0116 03:44:37.739725  507889 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8444/healthz ...
	I0116 03:44:37.767384  507889 api_server.go:279] https://192.168.50.236:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0116 03:44:37.767425  507889 api_server.go:103] status: https://192.168.50.236:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0116 03:44:38.238992  507889 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8444/healthz ...
	I0116 03:44:38.253946  507889 api_server.go:279] https://192.168.50.236:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0116 03:44:38.253991  507889 api_server.go:103] status: https://192.168.50.236:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0116 03:44:38.738786  507889 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8444/healthz ...
	I0116 03:44:38.749091  507889 api_server.go:279] https://192.168.50.236:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0116 03:44:38.749135  507889 api_server.go:103] status: https://192.168.50.236:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0116 03:44:39.239814  507889 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8444/healthz ...
	I0116 03:44:39.245859  507889 api_server.go:279] https://192.168.50.236:8444/healthz returned 200:
	ok
	I0116 03:44:39.259198  507889 api_server.go:141] control plane version: v1.28.4
	I0116 03:44:39.259250  507889 api_server.go:131] duration metric: took 5.520598732s to wait for apiserver health ...
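The sequence above is the usual restart pattern for the apiserver: /healthz first answers 403 while anonymous requests are rejected, then 500 while the rbac/bootstrap-roles and bootstrap-controller post-start hooks finish, and finally 200. A minimal sketch of that polling loop (hypothetical waitForHealthz helper; TLS verification is skipped here for brevity, whereas the real client authenticates with the cluster CA):

    // healthz.go: hypothetical sketch of polling an apiserver /healthz until it returns 200.
    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver never became healthy at %s", url)
    }

    func main() {
        if err := waitForHealthz("https://192.168.50.236:8444/healthz", time.Minute); err != nil {
            fmt.Println(err)
        }
    }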
	I0116 03:44:39.259265  507889 cni.go:84] Creating CNI manager for ""
	I0116 03:44:39.259277  507889 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 03:44:39.261389  507889 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0116 03:44:34.897727  507510 retry.go:31] will retry after 6.879493351s: kubelet not initialised
	I0116 03:44:37.975671  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:44:39.979781  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:44:37.524763  507257 crio.go:444] Took 2.112278 seconds to copy over tarball
	I0116 03:44:37.524843  507257 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0116 03:44:40.706515  507257 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.181629969s)
	I0116 03:44:40.706559  507257 crio.go:451] Took 3.181765 seconds to extract the tarball
	I0116 03:44:40.706574  507257 ssh_runner.go:146] rm: /preloaded.tar.lz4
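Extraction of the preloaded image tarball shells out to tar with an lz4 decompressor and xattr preservation, then removes the tarball. A small sketch of that step (hypothetical extractPreload helper mirroring the command shown in the log):

    // preload.go: hypothetical sketch of extracting a .tar.lz4 preload into /var.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func extractPreload(tarball, dest string) error {
        cmd := exec.Command("sudo", "tar",
            "--xattrs", "--xattrs-include", "security.capability",
            "-I", "lz4", "-C", dest, "-xf", tarball)
        out, err := cmd.CombinedOutput()
        if err != nil {
            return fmt.Errorf("tar failed: %v: %s", err, out)
        }
        return nil
    }

    func main() {
        if err := extractPreload("/preloaded.tar.lz4", "/var"); err != nil {
            fmt.Println(err)
        }
    }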
	I0116 03:44:40.751207  507257 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 03:44:40.905548  507257 crio.go:496] all images are preloaded for cri-o runtime.
	I0116 03:44:40.905578  507257 cache_images.go:84] Images are preloaded, skipping loading
	I0116 03:44:40.905659  507257 ssh_runner.go:195] Run: crio config
	I0116 03:44:40.965159  507257 cni.go:84] Creating CNI manager for ""
	I0116 03:44:40.965194  507257 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 03:44:40.965220  507257 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0116 03:44:40.965263  507257 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.159 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-615980 NodeName:embed-certs-615980 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.159"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.159 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0116 03:44:40.965474  507257 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.159
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-615980"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.159
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.159"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0116 03:44:40.965578  507257 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-615980 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.159
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-615980 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
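The drop-in above overrides the kubelet ExecStart with the runtime endpoint, hostname override, and node IP for this profile. A toy sketch that renders a similar [Service] drop-in from a few fields with text/template (hypothetical; the actual file is generated by minikube's bootstrapper and carries more flags):

    // dropin.go: hypothetical sketch of rendering a kubelet systemd drop-in.
    package main

    import (
        "fmt"
        "os"
        "text/template"
    )

    const tmpl = `[Unit]
    Wants={{.Runtime}}.service

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --container-runtime-endpoint={{.Endpoint}} --hostname-override={{.Node}} --node-ip={{.IP}}

    [Install]
    `

    func main() {
        t := template.Must(template.New("dropin").Parse(tmpl))
        err := t.Execute(os.Stdout, map[string]string{
            "Runtime":  "crio",
            "Version":  "v1.28.4",
            "Endpoint": "unix:///var/run/crio/crio.sock",
            "Node":     "embed-certs-615980",
            "IP":       "192.168.72.159",
        })
        if err != nil {
            fmt.Println(err)
        }
    }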
	I0116 03:44:40.965634  507257 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0116 03:44:40.976015  507257 binaries.go:44] Found k8s binaries, skipping transfer
	I0116 03:44:40.976153  507257 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0116 03:44:40.986610  507257 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0116 03:44:41.005297  507257 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0116 03:44:41.026383  507257 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I0116 03:44:41.046554  507257 ssh_runner.go:195] Run: grep 192.168.72.159	control-plane.minikube.internal$ /etc/hosts
	I0116 03:44:41.050940  507257 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.159	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 03:44:41.064516  507257 certs.go:56] Setting up /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/embed-certs-615980 for IP: 192.168.72.159
	I0116 03:44:41.064568  507257 certs.go:190] acquiring lock for shared ca certs: {Name:mkab5aeea3e0ed69f3592dc8f418e07c6075b130 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:44:41.064749  507257 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17965-468241/.minikube/ca.key
	I0116 03:44:41.064813  507257 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17965-468241/.minikube/proxy-client-ca.key
	I0116 03:44:41.064917  507257 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/embed-certs-615980/client.key
	I0116 03:44:41.064989  507257 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/embed-certs-615980/apiserver.key.fc98a751
	I0116 03:44:41.065044  507257 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/embed-certs-615980/proxy-client.key
	I0116 03:44:41.065202  507257 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/475478.pem (1338 bytes)
	W0116 03:44:41.065241  507257 certs.go:433] ignoring /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/475478_empty.pem, impossibly tiny 0 bytes
	I0116 03:44:41.065257  507257 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca-key.pem (1679 bytes)
	I0116 03:44:41.065294  507257 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/ca.pem (1078 bytes)
	I0116 03:44:41.065331  507257 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/cert.pem (1123 bytes)
	I0116 03:44:41.065374  507257 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/certs/home/jenkins/minikube-integration/17965-468241/.minikube/certs/key.pem (1679 bytes)
	I0116 03:44:41.065432  507257 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-468241/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17965-468241/.minikube/files/etc/ssl/certs/4754782.pem (1708 bytes)
	I0116 03:44:41.066147  507257 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/embed-certs-615980/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0116 03:44:41.092714  507257 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/embed-certs-615980/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0116 03:44:41.119109  507257 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/embed-certs-615980/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0116 03:44:41.147059  507257 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/embed-certs-615980/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0116 03:44:41.176357  507257 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0116 03:44:41.202082  507257 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0116 03:44:41.228263  507257 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0116 03:44:41.252892  507257 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0116 03:44:39.263119  507889 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0116 03:44:39.290175  507889 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0116 03:44:39.319009  507889 system_pods.go:43] waiting for kube-system pods to appear ...
	I0116 03:44:39.341195  507889 system_pods.go:59] 9 kube-system pods found
	I0116 03:44:39.341251  507889 system_pods.go:61] "coredns-5dd5756b68-f8shl" [18bddcd6-4305-4856-b590-e16c362768e3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0116 03:44:39.341264  507889 system_pods.go:61] "coredns-5dd5756b68-pmx8n" [9d3c9941-8938-4d4a-b6d8-9b542ca6f1ca] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0116 03:44:39.341280  507889 system_pods.go:61] "etcd-default-k8s-diff-port-434445" [a2327159-d034-4f62-a8f5-14062a41c75d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0116 03:44:39.341293  507889 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-434445" [75c5fc5e-86b1-42cf-bb5c-8a1f8b4e13db] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0116 03:44:39.341310  507889 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-434445" [2568dd4f-124d-4c4f-8651-3b89b2b54983] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0116 03:44:39.341323  507889 system_pods.go:61] "kube-proxy-dcbqg" [eba1f9bf-6aa7-40cd-b57c-745c2d0cc414] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0116 03:44:39.341335  507889 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-434445" [479952a1-8c3d-49b4-bd22-fef988e6b83d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0116 03:44:39.341353  507889 system_pods.go:61] "metrics-server-57f55c9bc5-894n2" [46e4892a-d026-4a9d-88bc-128e92848808] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:44:39.341369  507889 system_pods.go:61] "storage-provisioner" [16fd4585-3d75-40c3-a28d-4134375f4e3d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0116 03:44:39.341391  507889 system_pods.go:74] duration metric: took 22.354405ms to wait for pod list to return data ...
	I0116 03:44:39.341403  507889 node_conditions.go:102] verifying NodePressure condition ...
	I0116 03:44:39.349904  507889 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 03:44:39.349954  507889 node_conditions.go:123] node cpu capacity is 2
	I0116 03:44:39.349972  507889 node_conditions.go:105] duration metric: took 8.557095ms to run NodePressure ...
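node_conditions.go reads the node's reported capacity (ephemeral storage and CPU) before moving on. A small client-go sketch that lists the same capacity figures from the current kubeconfig (hypothetical helper, not the original code):

    // nodecap.go: hypothetical sketch of reading node capacity via client-go.
    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            fmt.Printf("node %s: cpu=%s ephemeral-storage=%s\n",
                n.Name, n.Status.Capacity.Cpu(), n.Status.Capacity.StorageEphemeral())
        }
    }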
	I0116 03:44:39.350000  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:44:39.798882  507889 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0116 03:44:39.816480  507889 kubeadm.go:787] kubelet initialised
	I0116 03:44:39.816514  507889 kubeadm.go:788] duration metric: took 17.598017ms waiting for restarted kubelet to initialise ...
	I0116 03:44:39.816527  507889 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 03:44:39.834946  507889 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-f8shl" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:39.854785  507889 pod_ready.go:97] node "default-k8s-diff-port-434445" hosting pod "coredns-5dd5756b68-f8shl" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-434445" has status "Ready":"False"
	I0116 03:44:39.854832  507889 pod_ready.go:81] duration metric: took 19.846427ms waiting for pod "coredns-5dd5756b68-f8shl" in "kube-system" namespace to be "Ready" ...
	E0116 03:44:39.854846  507889 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-434445" hosting pod "coredns-5dd5756b68-f8shl" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-434445" has status "Ready":"False"
	I0116 03:44:39.854864  507889 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-pmx8n" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:39.888659  507889 pod_ready.go:97] node "default-k8s-diff-port-434445" hosting pod "coredns-5dd5756b68-pmx8n" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-434445" has status "Ready":"False"
	I0116 03:44:39.888703  507889 pod_ready.go:81] duration metric: took 33.827201ms waiting for pod "coredns-5dd5756b68-pmx8n" in "kube-system" namespace to be "Ready" ...
	E0116 03:44:39.888718  507889 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-434445" hosting pod "coredns-5dd5756b68-pmx8n" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-434445" has status "Ready":"False"
	I0116 03:44:39.888728  507889 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-434445" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:39.897638  507889 pod_ready.go:97] node "default-k8s-diff-port-434445" hosting pod "etcd-default-k8s-diff-port-434445" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-434445" has status "Ready":"False"
	I0116 03:44:39.897674  507889 pod_ready.go:81] duration metric: took 8.927103ms waiting for pod "etcd-default-k8s-diff-port-434445" in "kube-system" namespace to be "Ready" ...
	E0116 03:44:39.897693  507889 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-434445" hosting pod "etcd-default-k8s-diff-port-434445" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-434445" has status "Ready":"False"
	I0116 03:44:39.897701  507889 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-434445" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:39.919418  507889 pod_ready.go:97] node "default-k8s-diff-port-434445" hosting pod "kube-apiserver-default-k8s-diff-port-434445" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-434445" has status "Ready":"False"
	I0116 03:44:39.919465  507889 pod_ready.go:81] duration metric: took 21.753159ms waiting for pod "kube-apiserver-default-k8s-diff-port-434445" in "kube-system" namespace to be "Ready" ...
	E0116 03:44:39.919495  507889 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-434445" hosting pod "kube-apiserver-default-k8s-diff-port-434445" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-434445" has status "Ready":"False"
	I0116 03:44:39.919505  507889 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-434445" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:40.203370  507889 pod_ready.go:97] node "default-k8s-diff-port-434445" hosting pod "kube-controller-manager-default-k8s-diff-port-434445" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-434445" has status "Ready":"False"
	I0116 03:44:40.203411  507889 pod_ready.go:81] duration metric: took 283.893646ms waiting for pod "kube-controller-manager-default-k8s-diff-port-434445" in "kube-system" namespace to be "Ready" ...
	E0116 03:44:40.203428  507889 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-434445" hosting pod "kube-controller-manager-default-k8s-diff-port-434445" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-434445" has status "Ready":"False"
	I0116 03:44:40.203440  507889 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-dcbqg" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:41.417889  507889 pod_ready.go:97] node "default-k8s-diff-port-434445" hosting pod "kube-proxy-dcbqg" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-434445" has status "Ready":"False"
	I0116 03:44:41.418011  507889 pod_ready.go:81] duration metric: took 1.214559235s waiting for pod "kube-proxy-dcbqg" in "kube-system" namespace to be "Ready" ...
	E0116 03:44:41.418033  507889 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-434445" hosting pod "kube-proxy-dcbqg" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-434445" has status "Ready":"False"
	I0116 03:44:41.418043  507889 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-434445" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:41.425177  507889 pod_ready.go:97] node "default-k8s-diff-port-434445" hosting pod "kube-scheduler-default-k8s-diff-port-434445" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-434445" has status "Ready":"False"
	I0116 03:44:41.425208  507889 pod_ready.go:81] duration metric: took 7.15251ms waiting for pod "kube-scheduler-default-k8s-diff-port-434445" in "kube-system" namespace to be "Ready" ...
	E0116 03:44:41.425220  507889 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-434445" hosting pod "kube-scheduler-default-k8s-diff-port-434445" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-434445" has status "Ready":"False"
	I0116 03:44:41.425226  507889 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:41.431059  507889 pod_ready.go:97] node "default-k8s-diff-port-434445" hosting pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-434445" has status "Ready":"False"
	I0116 03:44:41.431103  507889 pod_ready.go:81] duration metric: took 5.869165ms waiting for pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace to be "Ready" ...
	E0116 03:44:41.431115  507889 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-434445" hosting pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-434445" has status "Ready":"False"
	I0116 03:44:41.431122  507889 pod_ready.go:38] duration metric: took 1.614582832s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
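pod_ready.go polls each system-critical pod and treats a not-yet-Ready node as a reason to skip that pod and retry, which is why every wait above ends in WaitExtra errors while the node reports "Ready":"False". A minimal client-go sketch of waiting on a single pod's Ready condition (hypothetical helper; the pod name and namespace are taken from the log above purely for illustration):

    // podready.go: hypothetical sketch of polling a pod until its Ready condition is True.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func podIsReady(p *corev1.Pod) bool {
        for _, c := range p.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        deadline := time.Now().Add(4 * time.Minute)
        for time.Now().Before(deadline) {
            pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
                "etcd-default-k8s-diff-port-434445", metav1.GetOptions{})
            if err == nil && podIsReady(pod) {
                fmt.Println("pod is Ready")
                return
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Println("timed out waiting for pod to be Ready")
    }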
	I0116 03:44:41.431139  507889 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0116 03:44:41.445099  507889 ops.go:34] apiserver oom_adj: -16
	I0116 03:44:41.445129  507889 kubeadm.go:640] restartCluster took 21.83447374s
	I0116 03:44:41.445141  507889 kubeadm.go:406] StartCluster complete in 21.896543184s
	I0116 03:44:41.445168  507889 settings.go:142] acquiring lock: {Name:mk3ded99c06d285b174e76622bb0741c8dd2d8fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:44:41.445265  507889 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17965-468241/kubeconfig
	I0116 03:44:41.447590  507889 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17965-468241/kubeconfig: {Name:mkac3c63bbcba9b4fa1bea3480797757d703fb9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:44:41.544520  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0116 03:44:41.544743  507889 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0116 03:44:41.544842  507889 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-434445"
	I0116 03:44:41.544858  507889 config.go:182] Loaded profile config "default-k8s-diff-port-434445": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 03:44:41.544875  507889 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-434445"
	I0116 03:44:41.544891  507889 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-434445"
	W0116 03:44:41.544899  507889 addons.go:243] addon metrics-server should already be in state true
	I0116 03:44:41.544865  507889 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-434445"
	W0116 03:44:41.544915  507889 addons.go:243] addon storage-provisioner should already be in state true
	I0116 03:44:41.544971  507889 host.go:66] Checking if "default-k8s-diff-port-434445" exists ...
	I0116 03:44:41.544973  507889 host.go:66] Checking if "default-k8s-diff-port-434445" exists ...
	I0116 03:44:41.544862  507889 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-434445"
	I0116 03:44:41.545107  507889 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-434445"
	I0116 03:44:41.545473  507889 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:44:41.545479  507889 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:44:41.545505  507889 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:44:41.545673  507889 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:44:41.562983  507889 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44387
	I0116 03:44:41.562984  507889 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35715
	I0116 03:44:41.563677  507889 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:44:41.563684  507889 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:44:41.564352  507889 main.go:141] libmachine: Using API Version  1
	I0116 03:44:41.564382  507889 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:44:41.564540  507889 main.go:141] libmachine: Using API Version  1
	I0116 03:44:41.564569  507889 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:44:41.564753  507889 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:44:41.564937  507889 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:44:41.565113  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetState
	I0116 03:44:41.565350  507889 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:44:41.565418  507889 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:44:41.569050  507889 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-434445"
	W0116 03:44:41.569091  507889 addons.go:243] addon default-storageclass should already be in state true
	I0116 03:44:41.569125  507889 host.go:66] Checking if "default-k8s-diff-port-434445" exists ...
	I0116 03:44:41.569554  507889 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:44:41.569613  507889 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:44:41.584107  507889 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33349
	I0116 03:44:41.584756  507889 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:44:41.585422  507889 main.go:141] libmachine: Using API Version  1
	I0116 03:44:41.585457  507889 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:44:41.585634  507889 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42645
	I0116 03:44:41.585856  507889 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:44:41.586123  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetState
	I0116 03:44:41.586162  507889 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:44:41.586636  507889 main.go:141] libmachine: Using API Version  1
	I0116 03:44:41.586663  507889 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:44:41.587080  507889 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:44:41.587688  507889 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:44:41.587743  507889 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:44:41.588214  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .DriverName
	I0116 03:44:41.606456  507889 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40289
	I0116 03:44:41.644090  507889 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:44:41.819945  507889 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 03:44:41.929214  507889 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:44:41.929680  507889 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:44:42.246642  507889 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0116 03:44:42.246665  507889 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0116 03:44:42.246696  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHHostname
	I0116 03:44:42.247294  507889 main.go:141] libmachine: Using API Version  1
	I0116 03:44:42.247332  507889 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:44:42.247740  507889 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:44:42.247987  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetState
	I0116 03:44:42.250254  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .DriverName
	I0116 03:44:42.250570  507889 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0116 03:44:42.250588  507889 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0116 03:44:42.250609  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHHostname
	I0116 03:44:42.251130  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:42.251863  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ea:d5", ip: ""} in network mk-default-k8s-diff-port-434445: {Iface:virbr1 ExpiryTime:2024-01-16 04:44:01 +0000 UTC Type:0 Mac:52:54:00:78:ea:d5 Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:default-k8s-diff-port-434445 Clientid:01:52:54:00:78:ea:d5}
	I0116 03:44:42.251896  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined IP address 192.168.50.236 and MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:42.252245  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHPort
	I0116 03:44:42.252473  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHKeyPath
	I0116 03:44:42.252680  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHUsername
	I0116 03:44:42.252842  507889 sshutil.go:53] new ssh client: &{IP:192.168.50.236 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/default-k8s-diff-port-434445/id_rsa Username:docker}
	I0116 03:44:42.254224  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:42.254837  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ea:d5", ip: ""} in network mk-default-k8s-diff-port-434445: {Iface:virbr1 ExpiryTime:2024-01-16 04:44:01 +0000 UTC Type:0 Mac:52:54:00:78:ea:d5 Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:default-k8s-diff-port-434445 Clientid:01:52:54:00:78:ea:d5}
	I0116 03:44:42.254872  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined IP address 192.168.50.236 and MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:42.255050  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHPort
	I0116 03:44:42.255240  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHKeyPath
	I0116 03:44:42.255422  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHUsername
	I0116 03:44:42.255585  507889 sshutil.go:53] new ssh client: &{IP:192.168.50.236 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/default-k8s-diff-port-434445/id_rsa Username:docker}
	I0116 03:44:42.264367  507889 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36555
	I0116 03:44:42.264832  507889 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:44:42.265322  507889 main.go:141] libmachine: Using API Version  1
	I0116 03:44:42.265352  507889 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:44:42.265700  507889 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:44:42.266266  507889 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:44:42.266306  507889 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:44:42.281852  507889 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32929
	I0116 03:44:42.282351  507889 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:44:42.282914  507889 main.go:141] libmachine: Using API Version  1
	I0116 03:44:42.282944  507889 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:44:42.283363  507889 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:44:42.283599  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetState
	I0116 03:44:42.285584  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .DriverName
	I0116 03:44:42.395709  507889 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0116 03:44:42.398672  507889 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0116 03:44:42.493544  507889 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0116 03:44:42.531626  507889 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0116 03:44:42.531683  507889 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0116 03:44:42.531717  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHHostname
	I0116 03:44:42.535980  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:42.536575  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:ea:d5", ip: ""} in network mk-default-k8s-diff-port-434445: {Iface:virbr1 ExpiryTime:2024-01-16 04:44:01 +0000 UTC Type:0 Mac:52:54:00:78:ea:d5 Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:default-k8s-diff-port-434445 Clientid:01:52:54:00:78:ea:d5}
	I0116 03:44:42.536604  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | domain default-k8s-diff-port-434445 has defined IP address 192.168.50.236 and MAC address 52:54:00:78:ea:d5 in network mk-default-k8s-diff-port-434445
	I0116 03:44:42.537018  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHPort
	I0116 03:44:42.537286  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHKeyPath
	I0116 03:44:42.537510  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .GetSSHUsername
	I0116 03:44:42.537850  507889 sshutil.go:53] new ssh client: &{IP:192.168.50.236 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/default-k8s-diff-port-434445/id_rsa Username:docker}
	I0116 03:44:42.545910  507889 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.001352094s)
	I0116 03:44:42.545983  507889 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0116 03:44:42.713693  507889 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0116 03:44:42.713718  507889 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0116 03:44:42.752674  507889 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0116 03:44:42.752717  507889 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0116 03:44:42.790178  507889 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0116 03:44:42.790214  507889 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0116 03:44:42.825256  507889 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0116 03:44:43.010741  507889 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-434445" context rescaled to 1 replicas
	I0116 03:44:43.010801  507889 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.236 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0116 03:44:43.014031  507889 out.go:177] * Verifying Kubernetes components...
	I0116 03:44:43.016143  507889 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 03:44:44.415462  507889 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.921726194s)
	I0116 03:44:44.415532  507889 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.921908068s)
	I0116 03:44:44.415547  507889 main.go:141] libmachine: Making call to close driver server
	I0116 03:44:44.415631  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .Close
	I0116 03:44:44.415579  507889 main.go:141] libmachine: Making call to close driver server
	I0116 03:44:44.415854  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .Close
	I0116 03:44:44.416266  507889 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:44:44.416376  507889 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:44:44.416393  507889 main.go:141] libmachine: Making call to close driver server
	I0116 03:44:44.416424  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .Close
	I0116 03:44:44.416310  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | Closing plugin on server side
	I0116 03:44:44.416310  507889 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:44:44.416595  507889 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:44:44.416658  507889 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:44:44.416671  507889 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:44:44.416977  507889 main.go:141] libmachine: Making call to close driver server
	I0116 03:44:44.417014  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .Close
	I0116 03:44:44.416332  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | Closing plugin on server side
	I0116 03:44:44.417305  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | Closing plugin on server side
	I0116 03:44:44.417358  507889 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:44:44.417375  507889 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:44:44.450870  507889 main.go:141] libmachine: Making call to close driver server
	I0116 03:44:44.450908  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .Close
	I0116 03:44:44.451327  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | Closing plugin on server side
	I0116 03:44:44.451367  507889 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:44:44.451378  507889 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:44:44.496654  507889 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.671338305s)
	I0116 03:44:44.496732  507889 main.go:141] libmachine: Making call to close driver server
	I0116 03:44:44.496744  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .Close
	I0116 03:44:44.496678  507889 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.480503621s)
	I0116 03:44:44.496845  507889 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-434445" to be "Ready" ...
	I0116 03:44:44.497092  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | Closing plugin on server side
	I0116 03:44:44.497088  507889 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:44:44.497166  507889 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:44:44.497188  507889 main.go:141] libmachine: Making call to close driver server
	I0116 03:44:44.497198  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) Calling .Close
	I0116 03:44:44.497445  507889 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:44:44.497489  507889 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:44:44.497499  507889 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-434445"
	I0116 03:44:44.497502  507889 main.go:141] libmachine: (default-k8s-diff-port-434445) DBG | Closing plugin on server side
	I0116 03:44:44.500234  507889 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0116 03:44:42.355473  507510 retry.go:31] will retry after 6.423018357s: kubelet not initialised
	I0116 03:44:42.543045  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:44:44.974520  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:44:41.280410  507257 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/files/etc/ssl/certs/4754782.pem --> /usr/share/ca-certificates/4754782.pem (1708 bytes)
	I0116 03:44:41.488388  507257 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0116 03:44:41.515741  507257 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-468241/.minikube/certs/475478.pem --> /usr/share/ca-certificates/475478.pem (1338 bytes)
	I0116 03:44:41.541744  507257 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0116 03:44:41.564056  507257 ssh_runner.go:195] Run: openssl version
	I0116 03:44:41.571197  507257 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0116 03:44:41.586430  507257 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0116 03:44:41.592334  507257 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 16 02:35 /usr/share/ca-certificates/minikubeCA.pem
	I0116 03:44:41.592405  507257 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0116 03:44:41.599013  507257 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0116 03:44:41.612793  507257 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/475478.pem && ln -fs /usr/share/ca-certificates/475478.pem /etc/ssl/certs/475478.pem"
	I0116 03:44:41.624554  507257 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/475478.pem
	I0116 03:44:41.629558  507257 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 16 02:44 /usr/share/ca-certificates/475478.pem
	I0116 03:44:41.629643  507257 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/475478.pem
	I0116 03:44:41.635518  507257 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/475478.pem /etc/ssl/certs/51391683.0"
	I0116 03:44:41.649567  507257 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4754782.pem && ln -fs /usr/share/ca-certificates/4754782.pem /etc/ssl/certs/4754782.pem"
	I0116 03:44:41.662276  507257 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4754782.pem
	I0116 03:44:41.667618  507257 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 16 02:44 /usr/share/ca-certificates/4754782.pem
	I0116 03:44:41.667699  507257 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4754782.pem
	I0116 03:44:41.678158  507257 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4754782.pem /etc/ssl/certs/3ec20f2e.0"
	I0116 03:44:41.692147  507257 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0116 03:44:41.698226  507257 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0116 03:44:41.706563  507257 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0116 03:44:41.713387  507257 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0116 03:44:41.721243  507257 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0116 03:44:41.728346  507257 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0116 03:44:41.735446  507257 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0116 03:44:41.743670  507257 kubeadm.go:404] StartCluster: {Name:embed-certs-615980 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-615980 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.159 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 03:44:41.743786  507257 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0116 03:44:41.743860  507257 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0116 03:44:41.799605  507257 cri.go:89] found id: ""
	I0116 03:44:41.799700  507257 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0116 03:44:41.812356  507257 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0116 03:44:41.812388  507257 kubeadm.go:636] restartCluster start
	I0116 03:44:41.812457  507257 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0116 03:44:41.823906  507257 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:41.825131  507257 kubeconfig.go:92] found "embed-certs-615980" server: "https://192.168.72.159:8443"
	I0116 03:44:41.827484  507257 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0116 03:44:41.838289  507257 api_server.go:166] Checking apiserver status ...
	I0116 03:44:41.838386  507257 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:41.852927  507257 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:42.338430  507257 api_server.go:166] Checking apiserver status ...
	I0116 03:44:42.338548  507257 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:42.353029  507257 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:42.838419  507257 api_server.go:166] Checking apiserver status ...
	I0116 03:44:42.838526  507257 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:42.854254  507257 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:43.338802  507257 api_server.go:166] Checking apiserver status ...
	I0116 03:44:43.338934  507257 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:43.356427  507257 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:43.839009  507257 api_server.go:166] Checking apiserver status ...
	I0116 03:44:43.839103  507257 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:43.853265  507257 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:44.338711  507257 api_server.go:166] Checking apiserver status ...
	I0116 03:44:44.338803  507257 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:44.353364  507257 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:44.838956  507257 api_server.go:166] Checking apiserver status ...
	I0116 03:44:44.839070  507257 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:44.851711  507257 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:45.339282  507257 api_server.go:166] Checking apiserver status ...
	I0116 03:44:45.339397  507257 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:45.354275  507257 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:45.838803  507257 api_server.go:166] Checking apiserver status ...
	I0116 03:44:45.838899  507257 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:45.853557  507257 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:44.501958  507889 addons.go:505] enable addons completed in 2.957229306s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0116 03:44:46.502807  507889 node_ready.go:58] node "default-k8s-diff-port-434445" has status "Ready":"False"
	I0116 03:44:48.786485  507510 retry.go:31] will retry after 18.441149821s: kubelet not initialised
	I0116 03:44:46.975660  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:44:48.981964  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:44:46.339198  507257 api_server.go:166] Checking apiserver status ...
	I0116 03:44:46.339328  507257 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:46.356092  507257 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:46.839356  507257 api_server.go:166] Checking apiserver status ...
	I0116 03:44:46.839461  507257 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:46.857070  507257 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:47.338405  507257 api_server.go:166] Checking apiserver status ...
	I0116 03:44:47.338546  507257 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:47.354976  507257 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:47.839369  507257 api_server.go:166] Checking apiserver status ...
	I0116 03:44:47.839468  507257 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:47.854465  507257 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:48.339102  507257 api_server.go:166] Checking apiserver status ...
	I0116 03:44:48.339217  507257 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:48.352361  507257 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:48.838853  507257 api_server.go:166] Checking apiserver status ...
	I0116 03:44:48.838968  507257 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:48.853271  507257 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:49.338643  507257 api_server.go:166] Checking apiserver status ...
	I0116 03:44:49.338751  507257 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:49.353674  507257 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:49.839214  507257 api_server.go:166] Checking apiserver status ...
	I0116 03:44:49.839309  507257 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:49.852699  507257 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:50.339060  507257 api_server.go:166] Checking apiserver status ...
	I0116 03:44:50.339186  507257 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:50.353143  507257 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:50.838646  507257 api_server.go:166] Checking apiserver status ...
	I0116 03:44:50.838782  507257 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:50.852767  507257 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:48.005726  507889 node_ready.go:49] node "default-k8s-diff-port-434445" has status "Ready":"True"
	I0116 03:44:48.005760  507889 node_ready.go:38] duration metric: took 3.508890685s waiting for node "default-k8s-diff-port-434445" to be "Ready" ...
	I0116 03:44:48.005775  507889 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 03:44:48.015385  507889 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-f8shl" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:48.027358  507889 pod_ready.go:92] pod "coredns-5dd5756b68-f8shl" in "kube-system" namespace has status "Ready":"True"
	I0116 03:44:48.027383  507889 pod_ready.go:81] duration metric: took 11.966322ms waiting for pod "coredns-5dd5756b68-f8shl" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:48.027397  507889 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-pmx8n" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:48.034156  507889 pod_ready.go:92] pod "coredns-5dd5756b68-pmx8n" in "kube-system" namespace has status "Ready":"True"
	I0116 03:44:48.034179  507889 pod_ready.go:81] duration metric: took 6.775784ms waiting for pod "coredns-5dd5756b68-pmx8n" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:48.034188  507889 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-434445" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:48.039933  507889 pod_ready.go:92] pod "etcd-default-k8s-diff-port-434445" in "kube-system" namespace has status "Ready":"True"
	I0116 03:44:48.039954  507889 pod_ready.go:81] duration metric: took 5.758946ms waiting for pod "etcd-default-k8s-diff-port-434445" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:48.039964  507889 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-434445" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:48.045351  507889 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-434445" in "kube-system" namespace has status "Ready":"True"
	I0116 03:44:48.045376  507889 pod_ready.go:81] duration metric: took 5.405684ms waiting for pod "kube-apiserver-default-k8s-diff-port-434445" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:48.045386  507889 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-434445" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:48.413479  507889 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-434445" in "kube-system" namespace has status "Ready":"True"
	I0116 03:44:48.413508  507889 pod_ready.go:81] duration metric: took 368.114361ms waiting for pod "kube-controller-manager-default-k8s-diff-port-434445" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:48.413522  507889 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-dcbqg" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:48.808095  507889 pod_ready.go:92] pod "kube-proxy-dcbqg" in "kube-system" namespace has status "Ready":"True"
	I0116 03:44:48.808132  507889 pod_ready.go:81] duration metric: took 394.600854ms waiting for pod "kube-proxy-dcbqg" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:48.808147  507889 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-434445" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:50.817248  507889 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-434445" in "kube-system" namespace has status "Ready":"False"
	I0116 03:44:51.474904  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:44:53.475529  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:44:55.475807  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:44:51.339105  507257 api_server.go:166] Checking apiserver status ...
	I0116 03:44:51.339225  507257 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:51.352821  507257 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:51.838856  507257 api_server.go:166] Checking apiserver status ...
	I0116 03:44:51.838985  507257 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0116 03:44:51.852211  507257 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:44:51.852258  507257 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0116 03:44:51.852271  507257 kubeadm.go:1135] stopping kube-system containers ...
	I0116 03:44:51.852289  507257 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0116 03:44:51.852360  507257 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0116 03:44:51.897049  507257 cri.go:89] found id: ""
	I0116 03:44:51.897139  507257 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0116 03:44:51.915124  507257 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0116 03:44:51.926221  507257 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0116 03:44:51.926311  507257 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0116 03:44:51.938314  507257 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0116 03:44:51.938358  507257 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:44:52.077173  507257 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:44:52.733999  507257 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:44:52.971172  507257 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:44:53.063705  507257 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:44:53.200256  507257 api_server.go:52] waiting for apiserver process to appear ...
	I0116 03:44:53.200364  507257 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:44:53.701337  507257 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:44:54.201266  507257 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:44:54.700485  507257 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:44:55.200720  507257 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:44:55.701348  507257 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:44:55.725792  507257 api_server.go:72] duration metric: took 2.52553608s to wait for apiserver process to appear ...
	I0116 03:44:55.725826  507257 api_server.go:88] waiting for apiserver healthz status ...
	I0116 03:44:55.725851  507257 api_server.go:253] Checking apiserver healthz at https://192.168.72.159:8443/healthz ...
	I0116 03:44:52.317689  507889 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-434445" in "kube-system" namespace has status "Ready":"True"
	I0116 03:44:52.317718  507889 pod_ready.go:81] duration metric: took 3.509561404s waiting for pod "kube-scheduler-default-k8s-diff-port-434445" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:52.317731  507889 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:54.326412  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:44:56.327634  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:44:57.974017  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:44:59.977499  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:44:59.850423  507257 api_server.go:279] https://192.168.72.159:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0116 03:44:59.850456  507257 api_server.go:103] status: https://192.168.72.159:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0116 03:44:59.850471  507257 api_server.go:253] Checking apiserver healthz at https://192.168.72.159:8443/healthz ...
	I0116 03:44:59.998251  507257 api_server.go:279] https://192.168.72.159:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0116 03:44:59.998310  507257 api_server.go:103] status: https://192.168.72.159:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0116 03:45:00.226594  507257 api_server.go:253] Checking apiserver healthz at https://192.168.72.159:8443/healthz ...
	I0116 03:45:00.233826  507257 api_server.go:279] https://192.168.72.159:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0116 03:45:00.233876  507257 api_server.go:103] status: https://192.168.72.159:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0116 03:45:00.726919  507257 api_server.go:253] Checking apiserver healthz at https://192.168.72.159:8443/healthz ...
	I0116 03:45:00.732711  507257 api_server.go:279] https://192.168.72.159:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0116 03:45:00.732748  507257 api_server.go:103] status: https://192.168.72.159:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0116 03:45:01.226693  507257 api_server.go:253] Checking apiserver healthz at https://192.168.72.159:8443/healthz ...
	I0116 03:45:01.232420  507257 api_server.go:279] https://192.168.72.159:8443/healthz returned 200:
	ok
	I0116 03:45:01.242029  507257 api_server.go:141] control plane version: v1.28.4
	I0116 03:45:01.242078  507257 api_server.go:131] duration metric: took 5.516243097s to wait for apiserver health ...
	I0116 03:45:01.242092  507257 cni.go:84] Creating CNI manager for ""
	I0116 03:45:01.242101  507257 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 03:45:01.244395  507257 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0116 03:45:01.246155  507257 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0116 03:44:58.827760  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:01.327190  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:02.475858  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:04.974991  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:01.270205  507257 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0116 03:45:01.350402  507257 system_pods.go:43] waiting for kube-system pods to appear ...
	I0116 03:45:01.384475  507257 system_pods.go:59] 8 kube-system pods found
	I0116 03:45:01.384536  507257 system_pods.go:61] "coredns-5dd5756b68-ddjkl" [fe342d2a-7d12-4b37-be29-c0d77b920964] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0116 03:45:01.384549  507257 system_pods.go:61] "etcd-embed-certs-615980" [7b7af2e1-b3bb-4c47-862b-838167453939] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0116 03:45:01.384562  507257 system_pods.go:61] "kube-apiserver-embed-certs-615980" [bb883c31-8391-467f-9b4a-affb05a56d49] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0116 03:45:01.384571  507257 system_pods.go:61] "kube-controller-manager-embed-certs-615980" [74f7c5e3-818c-4e15-b693-d4f81308bf9d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0116 03:45:01.384584  507257 system_pods.go:61] "kube-proxy-6jpr7" [e62c9202-8b4f-4fe7-8aa4-b931afd4b028] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0116 03:45:01.384602  507257 system_pods.go:61] "kube-scheduler-embed-certs-615980" [f03d5c9c-af6a-437b-92bb-7c5a46259bbd] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0116 03:45:01.384618  507257 system_pods.go:61] "metrics-server-57f55c9bc5-48gnw" [1fcb32b6-f985-428d-8f02-1198d704d8c7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:45:01.384632  507257 system_pods.go:61] "storage-provisioner" [6264adaa-89e8-4f0d-9394-d7325338a2f5] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0116 03:45:01.384642  507257 system_pods.go:74] duration metric: took 34.114711ms to wait for pod list to return data ...
	I0116 03:45:01.384656  507257 node_conditions.go:102] verifying NodePressure condition ...
	I0116 03:45:01.392555  507257 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 03:45:01.392597  507257 node_conditions.go:123] node cpu capacity is 2
	I0116 03:45:01.392614  507257 node_conditions.go:105] duration metric: took 7.946538ms to run NodePressure ...
	I0116 03:45:01.392644  507257 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:45:01.788178  507257 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0116 03:45:01.795913  507257 kubeadm.go:787] kubelet initialised
	I0116 03:45:01.795945  507257 kubeadm.go:788] duration metric: took 7.737644ms waiting for restarted kubelet to initialise ...
	I0116 03:45:01.795955  507257 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 03:45:01.806433  507257 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-ddjkl" in "kube-system" namespace to be "Ready" ...
	I0116 03:45:03.815645  507257 pod_ready.go:102] pod "coredns-5dd5756b68-ddjkl" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:05.821193  507257 pod_ready.go:92] pod "coredns-5dd5756b68-ddjkl" in "kube-system" namespace has status "Ready":"True"
	I0116 03:45:05.821231  507257 pod_ready.go:81] duration metric: took 4.014760393s waiting for pod "coredns-5dd5756b68-ddjkl" in "kube-system" namespace to be "Ready" ...
	I0116 03:45:05.821245  507257 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-615980" in "kube-system" namespace to be "Ready" ...
	I0116 03:45:03.825695  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:05.826742  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:07.234109  507510 kubeadm.go:787] kubelet initialised
	I0116 03:45:07.234137  507510 kubeadm.go:788] duration metric: took 45.657540747s waiting for restarted kubelet to initialise ...
	I0116 03:45:07.234145  507510 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 03:45:07.239858  507510 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-7q4wc" in "kube-system" namespace to be "Ready" ...
	I0116 03:45:07.247210  507510 pod_ready.go:92] pod "coredns-5644d7b6d9-7q4wc" in "kube-system" namespace has status "Ready":"True"
	I0116 03:45:07.247237  507510 pod_ready.go:81] duration metric: took 7.336988ms waiting for pod "coredns-5644d7b6d9-7q4wc" in "kube-system" namespace to be "Ready" ...
	I0116 03:45:07.247249  507510 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-q2gxq" in "kube-system" namespace to be "Ready" ...
	I0116 03:45:07.252865  507510 pod_ready.go:92] pod "coredns-5644d7b6d9-q2gxq" in "kube-system" namespace has status "Ready":"True"
	I0116 03:45:07.252900  507510 pod_ready.go:81] duration metric: took 5.642204ms waiting for pod "coredns-5644d7b6d9-q2gxq" in "kube-system" namespace to be "Ready" ...
	I0116 03:45:07.252925  507510 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-696770" in "kube-system" namespace to be "Ready" ...
	I0116 03:45:07.259169  507510 pod_ready.go:92] pod "etcd-old-k8s-version-696770" in "kube-system" namespace has status "Ready":"True"
	I0116 03:45:07.259193  507510 pod_ready.go:81] duration metric: took 6.260142ms waiting for pod "etcd-old-k8s-version-696770" in "kube-system" namespace to be "Ready" ...
	I0116 03:45:07.259202  507510 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-696770" in "kube-system" namespace to be "Ready" ...
	I0116 03:45:07.264591  507510 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-696770" in "kube-system" namespace has status "Ready":"True"
	I0116 03:45:07.264622  507510 pod_ready.go:81] duration metric: took 5.411866ms waiting for pod "kube-apiserver-old-k8s-version-696770" in "kube-system" namespace to be "Ready" ...
	I0116 03:45:07.264635  507510 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-696770" in "kube-system" namespace to be "Ready" ...
	I0116 03:45:07.632057  507510 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-696770" in "kube-system" namespace has status "Ready":"True"
	I0116 03:45:07.632093  507510 pod_ready.go:81] duration metric: took 367.447202ms waiting for pod "kube-controller-manager-old-k8s-version-696770" in "kube-system" namespace to be "Ready" ...
	I0116 03:45:07.632110  507510 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-9pfdj" in "kube-system" namespace to be "Ready" ...
	I0116 03:45:08.033002  507510 pod_ready.go:92] pod "kube-proxy-9pfdj" in "kube-system" namespace has status "Ready":"True"
	I0116 03:45:08.033028  507510 pod_ready.go:81] duration metric: took 400.910907ms waiting for pod "kube-proxy-9pfdj" in "kube-system" namespace to be "Ready" ...
	I0116 03:45:08.033038  507510 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-696770" in "kube-system" namespace to be "Ready" ...
	I0116 03:45:08.433134  507510 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-696770" in "kube-system" namespace has status "Ready":"True"
	I0116 03:45:08.433165  507510 pod_ready.go:81] duration metric: took 400.1203ms waiting for pod "kube-scheduler-old-k8s-version-696770" in "kube-system" namespace to be "Ready" ...
	I0116 03:45:08.433180  507510 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace to be "Ready" ...
	I0116 03:45:07.485372  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:09.979593  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:07.830703  507257 pod_ready.go:102] pod "etcd-embed-certs-615980" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:10.328466  507257 pod_ready.go:102] pod "etcd-embed-certs-615980" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:08.325925  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:10.825155  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:10.442598  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:12.941713  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:12.478975  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:14.480154  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:12.329199  507257 pod_ready.go:102] pod "etcd-embed-certs-615980" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:13.830177  507257 pod_ready.go:92] pod "etcd-embed-certs-615980" in "kube-system" namespace has status "Ready":"True"
	I0116 03:45:13.830207  507257 pod_ready.go:81] duration metric: took 8.008954008s waiting for pod "etcd-embed-certs-615980" in "kube-system" namespace to be "Ready" ...
	I0116 03:45:13.830217  507257 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-615980" in "kube-system" namespace to be "Ready" ...
	I0116 03:45:13.837420  507257 pod_ready.go:92] pod "kube-apiserver-embed-certs-615980" in "kube-system" namespace has status "Ready":"True"
	I0116 03:45:13.837448  507257 pod_ready.go:81] duration metric: took 7.22323ms waiting for pod "kube-apiserver-embed-certs-615980" in "kube-system" namespace to be "Ready" ...
	I0116 03:45:13.837461  507257 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-615980" in "kube-system" namespace to be "Ready" ...
	I0116 03:45:13.845996  507257 pod_ready.go:92] pod "kube-controller-manager-embed-certs-615980" in "kube-system" namespace has status "Ready":"True"
	I0116 03:45:13.846029  507257 pod_ready.go:81] duration metric: took 8.558317ms waiting for pod "kube-controller-manager-embed-certs-615980" in "kube-system" namespace to be "Ready" ...
	I0116 03:45:13.846040  507257 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-6jpr7" in "kube-system" namespace to be "Ready" ...
	I0116 03:45:13.852645  507257 pod_ready.go:92] pod "kube-proxy-6jpr7" in "kube-system" namespace has status "Ready":"True"
	I0116 03:45:13.852674  507257 pod_ready.go:81] duration metric: took 6.627181ms waiting for pod "kube-proxy-6jpr7" in "kube-system" namespace to be "Ready" ...
	I0116 03:45:13.852683  507257 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-615980" in "kube-system" namespace to be "Ready" ...
	I0116 03:45:13.858818  507257 pod_ready.go:92] pod "kube-scheduler-embed-certs-615980" in "kube-system" namespace has status "Ready":"True"
	I0116 03:45:13.858844  507257 pod_ready.go:81] duration metric: took 6.154319ms waiting for pod "kube-scheduler-embed-certs-615980" in "kube-system" namespace to be "Ready" ...
	I0116 03:45:13.858853  507257 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace to be "Ready" ...
	I0116 03:45:15.867133  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:12.826463  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:14.826507  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:14.942079  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:17.442566  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:16.976095  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:19.477899  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:17.868381  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:20.367064  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:17.326184  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:19.328194  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:19.942113  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:21.942853  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:24.441140  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:21.975337  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:24.474400  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:22.368008  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:24.866716  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:21.825428  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:23.825828  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:25.829356  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:26.441756  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:28.443869  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:26.475939  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:28.476308  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:26.866760  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:29.367575  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:28.326756  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:30.825813  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:30.942631  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:33.440480  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:30.975870  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:33.475828  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:31.866401  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:33.867719  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:33.325388  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:35.325485  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:35.939804  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:37.940883  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:35.974504  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:37.975857  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:39.977413  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:36.367513  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:38.865702  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:40.866834  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:37.325804  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:39.326635  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:40.440287  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:42.440838  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:44.441037  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:42.475940  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:44.981122  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:42.867673  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:45.368285  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:41.825982  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:43.826700  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:45.828002  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:46.443286  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:48.941625  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:47.474621  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:49.475149  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:47.867135  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:49.867865  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:48.326035  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:50.327538  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:50.943718  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:53.443986  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:51.977212  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:54.477161  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:52.368444  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:54.375089  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:52.826163  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:55.327160  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:55.940561  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:57.942988  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:56.975470  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:58.975829  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:56.867648  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:59.367479  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:57.826140  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:45:59.826286  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:00.440963  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:02.941202  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:00.979308  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:03.474099  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:05.478535  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:01.868806  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:04.368227  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:01.826702  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:04.325060  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:06.326882  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:05.441837  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:07.444944  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:07.975344  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:09.975486  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:06.868137  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:09.367752  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:08.329967  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:10.826182  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:09.940745  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:11.942989  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:14.441331  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:11.977171  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:13.977835  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:11.866817  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:13.867951  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:13.327232  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:15.826862  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:16.442525  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:18.442754  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:16.475367  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:18.475903  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:16.367830  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:18.368100  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:20.866302  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:18.326376  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:20.827236  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:20.940998  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:22.941332  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:20.980371  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:23.476451  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:22.868945  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:25.366857  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:23.326576  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:25.826000  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:25.442029  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:27.941061  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:25.974860  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:27.975178  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:29.978092  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:27.370097  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:29.869827  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:28.326735  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:30.826672  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:30.442579  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:32.941784  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:32.475984  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:34.973934  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:31.870772  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:34.367380  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:32.827910  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:34.828185  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:35.440418  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:37.441206  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:39.441254  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:36.974076  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:38.975169  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:36.867231  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:39.366005  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:37.327553  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:39.826218  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:41.941046  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:43.941530  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:40.976023  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:43.478194  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:41.367293  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:43.867097  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:45.867843  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:42.325426  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:44.325723  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:46.326155  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:46.441175  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:48.940677  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:45.974937  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:47.975141  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:50.474687  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:47.868006  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:49.868890  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:48.326634  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:50.326914  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:50.941220  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:53.440868  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:52.475138  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:54.475546  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:52.365917  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:54.366514  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:52.826279  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:55.324177  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:55.441130  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:57.943093  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:56.976380  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:59.478090  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:56.368894  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:58.868051  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:57.326296  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:46:59.326416  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:01.327894  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:00.440504  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:02.441176  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:04.442171  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:01.975498  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:03.978490  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:01.369736  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:03.871663  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:03.825943  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:05.828215  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:06.443721  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:08.940212  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:06.475354  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:08.975707  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:06.366468  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:08.366998  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:10.368019  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:08.326243  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:10.824873  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:10.942042  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:13.440495  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:11.475551  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:13.475904  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:12.867030  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:14.872409  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:12.826040  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:15.325658  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:15.941844  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:18.440574  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:15.975125  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:17.977326  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:20.474897  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:17.367390  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:19.369090  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:17.325860  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:19.829310  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:20.940407  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:22.941824  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:22.475218  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:24.477773  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:21.866953  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:23.867055  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:22.326660  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:24.327689  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:25.441214  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:27.442253  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:26.975120  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:29.477805  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:26.367295  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:28.867376  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:26.826666  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:29.327606  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:29.940650  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:31.941021  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:34.443144  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:31.978544  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:34.475301  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:31.367770  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:33.867084  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:35.870968  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:31.826565  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:34.326677  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:36.941363  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:38.942121  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:36.974797  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:38.975027  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:38.368025  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:40.866714  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:36.828347  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:39.327130  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:41.441555  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:43.442806  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:40.977172  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:43.476163  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:43.367966  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:45.867460  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:41.826087  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:43.826389  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:46.326497  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:45.941267  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:48.443875  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:45.974452  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:47.977610  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:50.475536  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:48.367053  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:50.368023  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:48.824924  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:50.825835  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:50.941125  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:52.941644  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:52.975726  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:55.476453  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:52.866871  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:55.367951  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:52.826166  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:54.826434  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:55.442084  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:57.442829  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:57.974382  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:59.974448  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:57.867742  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:00.366490  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:57.325608  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:59.825525  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:47:59.939515  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:01.941648  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:03.942290  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:01.975159  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:03.977002  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:02.366764  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:04.366818  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:01.831740  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:04.326341  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:06.440494  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:08.940336  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:06.475364  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:08.482783  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:06.367160  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:08.867294  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:06.825331  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:08.826594  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:11.324828  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:10.942696  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:13.441805  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:10.974798  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:12.975009  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:14.976154  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:11.366189  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:13.369852  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:15.867536  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:13.327353  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:15.825738  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:15.941304  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:17.942206  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:17.474204  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:19.475630  507339 pod_ready.go:102] pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:19.974269  507339 pod_ready.go:81] duration metric: took 4m0.007375913s waiting for pod "metrics-server-57f55c9bc5-78vfj" in "kube-system" namespace to be "Ready" ...
	E0116 03:48:19.974299  507339 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0116 03:48:19.974310  507339 pod_ready.go:38] duration metric: took 4m6.26032663s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 03:48:19.974365  507339 api_server.go:52] waiting for apiserver process to appear ...
	I0116 03:48:19.974431  507339 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0116 03:48:19.974529  507339 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0116 03:48:20.042853  507339 cri.go:89] found id: "de79f87bc28449077936386f603cc5df933f63455bc96cf6ff7e7c6e4bd32bc4"
	I0116 03:48:20.042886  507339 cri.go:89] found id: ""
	I0116 03:48:20.042896  507339 logs.go:284] 1 containers: [de79f87bc28449077936386f603cc5df933f63455bc96cf6ff7e7c6e4bd32bc4]
	I0116 03:48:20.042961  507339 ssh_runner.go:195] Run: which crictl
	I0116 03:48:20.049795  507339 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0116 03:48:20.049884  507339 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0116 03:48:20.092507  507339 cri.go:89] found id: "01aaf51cd40b9137f2395404bb15b7372927f81dc8a4e884600dc8cba9bbeb8e"
	I0116 03:48:20.092541  507339 cri.go:89] found id: ""
	I0116 03:48:20.092551  507339 logs.go:284] 1 containers: [01aaf51cd40b9137f2395404bb15b7372927f81dc8a4e884600dc8cba9bbeb8e]
	I0116 03:48:20.092619  507339 ssh_runner.go:195] Run: which crictl
	I0116 03:48:20.097091  507339 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0116 03:48:20.097176  507339 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0116 03:48:20.139182  507339 cri.go:89] found id: "c13ef036a1014a6bbc5c179a0889fb6ff615988713589da58240b7c637adf687"
	I0116 03:48:20.139218  507339 cri.go:89] found id: ""
	I0116 03:48:20.139229  507339 logs.go:284] 1 containers: [c13ef036a1014a6bbc5c179a0889fb6ff615988713589da58240b7c637adf687]
	I0116 03:48:20.139297  507339 ssh_runner.go:195] Run: which crictl
	I0116 03:48:20.145129  507339 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0116 03:48:20.145210  507339 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0116 03:48:20.191055  507339 cri.go:89] found id: "33381edd7dded1b5ed158738429b4a7f1f03a6171f2af0832bb5a9864c950725"
	I0116 03:48:20.191090  507339 cri.go:89] found id: ""
	I0116 03:48:20.191098  507339 logs.go:284] 1 containers: [33381edd7dded1b5ed158738429b4a7f1f03a6171f2af0832bb5a9864c950725]
	I0116 03:48:20.191161  507339 ssh_runner.go:195] Run: which crictl
	I0116 03:48:20.195688  507339 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0116 03:48:20.195765  507339 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0116 03:48:20.242718  507339 cri.go:89] found id: "eba2964f029ac61d9cfb59dc548ae05c832f9b23713f51924291fbfc5de985ed"
	I0116 03:48:20.242746  507339 cri.go:89] found id: ""
	I0116 03:48:20.242754  507339 logs.go:284] 1 containers: [eba2964f029ac61d9cfb59dc548ae05c832f9b23713f51924291fbfc5de985ed]
	I0116 03:48:20.242819  507339 ssh_runner.go:195] Run: which crictl
	I0116 03:48:20.247312  507339 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0116 03:48:20.247399  507339 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0116 03:48:20.287981  507339 cri.go:89] found id: "802d4c55aa04370ff079f09dfe4670f5dac1142ca6db880afa17be75f1d20a76"
	I0116 03:48:20.288009  507339 cri.go:89] found id: ""
	I0116 03:48:20.288018  507339 logs.go:284] 1 containers: [802d4c55aa04370ff079f09dfe4670f5dac1142ca6db880afa17be75f1d20a76]
	I0116 03:48:20.288097  507339 ssh_runner.go:195] Run: which crictl
	I0116 03:48:20.292370  507339 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0116 03:48:20.292449  507339 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0116 03:48:20.335778  507339 cri.go:89] found id: ""
	I0116 03:48:20.335816  507339 logs.go:284] 0 containers: []
	W0116 03:48:20.335828  507339 logs.go:286] No container was found matching "kindnet"
	I0116 03:48:20.335838  507339 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0116 03:48:20.335906  507339 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0116 03:48:20.381698  507339 cri.go:89] found id: "b7164c1b7732cf6d54642cb9ffc8d9f16696adc0e428c3fa16cf914420d73e1d"
	I0116 03:48:20.381722  507339 cri.go:89] found id: "59754e94eb3cf3fecfedb9077fbad2ecc2618c351fa9bfa3faed0d780fdd9ecb"
	I0116 03:48:20.381727  507339 cri.go:89] found id: ""
	I0116 03:48:20.381734  507339 logs.go:284] 2 containers: [b7164c1b7732cf6d54642cb9ffc8d9f16696adc0e428c3fa16cf914420d73e1d 59754e94eb3cf3fecfedb9077fbad2ecc2618c351fa9bfa3faed0d780fdd9ecb]
	I0116 03:48:20.381790  507339 ssh_runner.go:195] Run: which crictl
	I0116 03:48:20.386880  507339 ssh_runner.go:195] Run: which crictl
	I0116 03:48:20.391292  507339 logs.go:123] Gathering logs for describe nodes ...
	I0116 03:48:20.391324  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0116 03:48:20.528154  507339 logs.go:123] Gathering logs for storage-provisioner [b7164c1b7732cf6d54642cb9ffc8d9f16696adc0e428c3fa16cf914420d73e1d] ...
	I0116 03:48:20.528197  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b7164c1b7732cf6d54642cb9ffc8d9f16696adc0e428c3fa16cf914420d73e1d"
	I0116 03:48:20.586645  507339 logs.go:123] Gathering logs for CRI-O ...
	I0116 03:48:20.586680  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0116 03:48:18.367415  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:20.867678  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:18.325849  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:20.326141  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:20.442138  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:22.442180  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:21.096109  507339 logs.go:123] Gathering logs for kubelet ...
	I0116 03:48:21.096155  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0116 03:48:21.154531  507339 logs.go:123] Gathering logs for coredns [c13ef036a1014a6bbc5c179a0889fb6ff615988713589da58240b7c637adf687] ...
	I0116 03:48:21.154577  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c13ef036a1014a6bbc5c179a0889fb6ff615988713589da58240b7c637adf687"
	I0116 03:48:21.203708  507339 logs.go:123] Gathering logs for dmesg ...
	I0116 03:48:21.203760  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0116 03:48:21.219320  507339 logs.go:123] Gathering logs for etcd [01aaf51cd40b9137f2395404bb15b7372927f81dc8a4e884600dc8cba9bbeb8e] ...
	I0116 03:48:21.219362  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 01aaf51cd40b9137f2395404bb15b7372927f81dc8a4e884600dc8cba9bbeb8e"
	I0116 03:48:21.271759  507339 logs.go:123] Gathering logs for kube-proxy [eba2964f029ac61d9cfb59dc548ae05c832f9b23713f51924291fbfc5de985ed] ...
	I0116 03:48:21.271812  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eba2964f029ac61d9cfb59dc548ae05c832f9b23713f51924291fbfc5de985ed"
	I0116 03:48:21.316786  507339 logs.go:123] Gathering logs for kube-controller-manager [802d4c55aa04370ff079f09dfe4670f5dac1142ca6db880afa17be75f1d20a76] ...
	I0116 03:48:21.316825  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 802d4c55aa04370ff079f09dfe4670f5dac1142ca6db880afa17be75f1d20a76"
	I0116 03:48:21.383743  507339 logs.go:123] Gathering logs for storage-provisioner [59754e94eb3cf3fecfedb9077fbad2ecc2618c351fa9bfa3faed0d780fdd9ecb] ...
	I0116 03:48:21.383783  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 59754e94eb3cf3fecfedb9077fbad2ecc2618c351fa9bfa3faed0d780fdd9ecb"
	I0116 03:48:21.422893  507339 logs.go:123] Gathering logs for kube-apiserver [de79f87bc28449077936386f603cc5df933f63455bc96cf6ff7e7c6e4bd32bc4] ...
	I0116 03:48:21.422926  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de79f87bc28449077936386f603cc5df933f63455bc96cf6ff7e7c6e4bd32bc4"
	I0116 03:48:21.473295  507339 logs.go:123] Gathering logs for kube-scheduler [33381edd7dded1b5ed158738429b4a7f1f03a6171f2af0832bb5a9864c950725] ...
	I0116 03:48:21.473332  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33381edd7dded1b5ed158738429b4a7f1f03a6171f2af0832bb5a9864c950725"
	I0116 03:48:21.527066  507339 logs.go:123] Gathering logs for container status ...
	I0116 03:48:21.527110  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0116 03:48:24.085743  507339 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:48:24.105359  507339 api_server.go:72] duration metric: took 4m17.107229414s to wait for apiserver process to appear ...
	I0116 03:48:24.105395  507339 api_server.go:88] waiting for apiserver healthz status ...
	I0116 03:48:24.105450  507339 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0116 03:48:24.105567  507339 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0116 03:48:24.154626  507339 cri.go:89] found id: "de79f87bc28449077936386f603cc5df933f63455bc96cf6ff7e7c6e4bd32bc4"
	I0116 03:48:24.154659  507339 cri.go:89] found id: ""
	I0116 03:48:24.154668  507339 logs.go:284] 1 containers: [de79f87bc28449077936386f603cc5df933f63455bc96cf6ff7e7c6e4bd32bc4]
	I0116 03:48:24.154720  507339 ssh_runner.go:195] Run: which crictl
	I0116 03:48:24.159657  507339 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0116 03:48:24.159735  507339 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0116 03:48:24.202635  507339 cri.go:89] found id: "01aaf51cd40b9137f2395404bb15b7372927f81dc8a4e884600dc8cba9bbeb8e"
	I0116 03:48:24.202663  507339 cri.go:89] found id: ""
	I0116 03:48:24.202671  507339 logs.go:284] 1 containers: [01aaf51cd40b9137f2395404bb15b7372927f81dc8a4e884600dc8cba9bbeb8e]
	I0116 03:48:24.202725  507339 ssh_runner.go:195] Run: which crictl
	I0116 03:48:24.207503  507339 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0116 03:48:24.207578  507339 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0116 03:48:24.253893  507339 cri.go:89] found id: "c13ef036a1014a6bbc5c179a0889fb6ff615988713589da58240b7c637adf687"
	I0116 03:48:24.253934  507339 cri.go:89] found id: ""
	I0116 03:48:24.253945  507339 logs.go:284] 1 containers: [c13ef036a1014a6bbc5c179a0889fb6ff615988713589da58240b7c637adf687]
	I0116 03:48:24.254016  507339 ssh_runner.go:195] Run: which crictl
	I0116 03:48:24.258649  507339 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0116 03:48:24.258733  507339 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0116 03:48:24.306636  507339 cri.go:89] found id: "33381edd7dded1b5ed158738429b4a7f1f03a6171f2af0832bb5a9864c950725"
	I0116 03:48:24.306662  507339 cri.go:89] found id: ""
	I0116 03:48:24.306670  507339 logs.go:284] 1 containers: [33381edd7dded1b5ed158738429b4a7f1f03a6171f2af0832bb5a9864c950725]
	I0116 03:48:24.306721  507339 ssh_runner.go:195] Run: which crictl
	I0116 03:48:24.311270  507339 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0116 03:48:24.311357  507339 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0116 03:48:24.354635  507339 cri.go:89] found id: "eba2964f029ac61d9cfb59dc548ae05c832f9b23713f51924291fbfc5de985ed"
	I0116 03:48:24.354671  507339 cri.go:89] found id: ""
	I0116 03:48:24.354683  507339 logs.go:284] 1 containers: [eba2964f029ac61d9cfb59dc548ae05c832f9b23713f51924291fbfc5de985ed]
	I0116 03:48:24.354756  507339 ssh_runner.go:195] Run: which crictl
	I0116 03:48:24.359806  507339 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0116 03:48:24.359889  507339 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0116 03:48:24.418188  507339 cri.go:89] found id: "802d4c55aa04370ff079f09dfe4670f5dac1142ca6db880afa17be75f1d20a76"
	I0116 03:48:24.418239  507339 cri.go:89] found id: ""
	I0116 03:48:24.418251  507339 logs.go:284] 1 containers: [802d4c55aa04370ff079f09dfe4670f5dac1142ca6db880afa17be75f1d20a76]
	I0116 03:48:24.418330  507339 ssh_runner.go:195] Run: which crictl
	I0116 03:48:24.422943  507339 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0116 03:48:24.423030  507339 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0116 03:48:24.467349  507339 cri.go:89] found id: ""
	I0116 03:48:24.467383  507339 logs.go:284] 0 containers: []
	W0116 03:48:24.467394  507339 logs.go:286] No container was found matching "kindnet"
	I0116 03:48:24.467403  507339 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0116 03:48:24.467466  507339 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0116 03:48:24.517490  507339 cri.go:89] found id: "b7164c1b7732cf6d54642cb9ffc8d9f16696adc0e428c3fa16cf914420d73e1d"
	I0116 03:48:24.517525  507339 cri.go:89] found id: "59754e94eb3cf3fecfedb9077fbad2ecc2618c351fa9bfa3faed0d780fdd9ecb"
	I0116 03:48:24.517539  507339 cri.go:89] found id: ""
	I0116 03:48:24.517548  507339 logs.go:284] 2 containers: [b7164c1b7732cf6d54642cb9ffc8d9f16696adc0e428c3fa16cf914420d73e1d 59754e94eb3cf3fecfedb9077fbad2ecc2618c351fa9bfa3faed0d780fdd9ecb]
	I0116 03:48:24.517619  507339 ssh_runner.go:195] Run: which crictl
	I0116 03:48:24.521952  507339 ssh_runner.go:195] Run: which crictl
	I0116 03:48:24.526246  507339 logs.go:123] Gathering logs for kube-apiserver [de79f87bc28449077936386f603cc5df933f63455bc96cf6ff7e7c6e4bd32bc4] ...
	I0116 03:48:24.526277  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de79f87bc28449077936386f603cc5df933f63455bc96cf6ff7e7c6e4bd32bc4"
	I0116 03:48:24.583067  507339 logs.go:123] Gathering logs for kube-proxy [eba2964f029ac61d9cfb59dc548ae05c832f9b23713f51924291fbfc5de985ed] ...
	I0116 03:48:24.583108  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eba2964f029ac61d9cfb59dc548ae05c832f9b23713f51924291fbfc5de985ed"
	I0116 03:48:24.631278  507339 logs.go:123] Gathering logs for CRI-O ...
	I0116 03:48:24.631312  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0116 03:48:25.099279  507339 logs.go:123] Gathering logs for describe nodes ...
	I0116 03:48:25.099330  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0116 03:48:25.241388  507339 logs.go:123] Gathering logs for etcd [01aaf51cd40b9137f2395404bb15b7372927f81dc8a4e884600dc8cba9bbeb8e] ...
	I0116 03:48:25.241433  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 01aaf51cd40b9137f2395404bb15b7372927f81dc8a4e884600dc8cba9bbeb8e"
	I0116 03:48:25.298748  507339 logs.go:123] Gathering logs for storage-provisioner [59754e94eb3cf3fecfedb9077fbad2ecc2618c351fa9bfa3faed0d780fdd9ecb] ...
	I0116 03:48:25.298787  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 59754e94eb3cf3fecfedb9077fbad2ecc2618c351fa9bfa3faed0d780fdd9ecb"
	I0116 03:48:25.338169  507339 logs.go:123] Gathering logs for kubelet ...
	I0116 03:48:25.338204  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0116 03:48:25.396275  507339 logs.go:123] Gathering logs for coredns [c13ef036a1014a6bbc5c179a0889fb6ff615988713589da58240b7c637adf687] ...
	I0116 03:48:25.396320  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c13ef036a1014a6bbc5c179a0889fb6ff615988713589da58240b7c637adf687"
	I0116 03:48:25.448028  507339 logs.go:123] Gathering logs for storage-provisioner [b7164c1b7732cf6d54642cb9ffc8d9f16696adc0e428c3fa16cf914420d73e1d] ...
	I0116 03:48:25.448087  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b7164c1b7732cf6d54642cb9ffc8d9f16696adc0e428c3fa16cf914420d73e1d"
	I0116 03:48:25.492640  507339 logs.go:123] Gathering logs for container status ...
	I0116 03:48:25.492673  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0116 03:48:25.541478  507339 logs.go:123] Gathering logs for dmesg ...
	I0116 03:48:25.541572  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0116 03:48:25.557537  507339 logs.go:123] Gathering logs for kube-scheduler [33381edd7dded1b5ed158738429b4a7f1f03a6171f2af0832bb5a9864c950725] ...
	I0116 03:48:25.557569  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33381edd7dded1b5ed158738429b4a7f1f03a6171f2af0832bb5a9864c950725"
	I0116 03:48:25.599921  507339 logs.go:123] Gathering logs for kube-controller-manager [802d4c55aa04370ff079f09dfe4670f5dac1142ca6db880afa17be75f1d20a76] ...
	I0116 03:48:25.599956  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 802d4c55aa04370ff079f09dfe4670f5dac1142ca6db880afa17be75f1d20a76"
	I0116 03:48:23.368308  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:25.368495  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:22.825098  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:24.827094  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:24.942708  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:27.441008  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:29.452010  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:28.158281  507339 api_server.go:253] Checking apiserver healthz at https://192.168.39.103:8443/healthz ...
	I0116 03:48:28.165500  507339 api_server.go:279] https://192.168.39.103:8443/healthz returned 200:
	ok
	I0116 03:48:28.166907  507339 api_server.go:141] control plane version: v1.29.0-rc.2
	I0116 03:48:28.166933  507339 api_server.go:131] duration metric: took 4.061531357s to wait for apiserver health ...
	I0116 03:48:28.166943  507339 system_pods.go:43] waiting for kube-system pods to appear ...
	I0116 03:48:28.166996  507339 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0116 03:48:28.167056  507339 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0116 03:48:28.209247  507339 cri.go:89] found id: "de79f87bc28449077936386f603cc5df933f63455bc96cf6ff7e7c6e4bd32bc4"
	I0116 03:48:28.209282  507339 cri.go:89] found id: ""
	I0116 03:48:28.209295  507339 logs.go:284] 1 containers: [de79f87bc28449077936386f603cc5df933f63455bc96cf6ff7e7c6e4bd32bc4]
	I0116 03:48:28.209361  507339 ssh_runner.go:195] Run: which crictl
	I0116 03:48:28.214044  507339 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0116 03:48:28.214126  507339 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0116 03:48:28.263791  507339 cri.go:89] found id: "01aaf51cd40b9137f2395404bb15b7372927f81dc8a4e884600dc8cba9bbeb8e"
	I0116 03:48:28.263817  507339 cri.go:89] found id: ""
	I0116 03:48:28.263825  507339 logs.go:284] 1 containers: [01aaf51cd40b9137f2395404bb15b7372927f81dc8a4e884600dc8cba9bbeb8e]
	I0116 03:48:28.263889  507339 ssh_runner.go:195] Run: which crictl
	I0116 03:48:28.268803  507339 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0116 03:48:28.268893  507339 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0116 03:48:28.311035  507339 cri.go:89] found id: "c13ef036a1014a6bbc5c179a0889fb6ff615988713589da58240b7c637adf687"
	I0116 03:48:28.311062  507339 cri.go:89] found id: ""
	I0116 03:48:28.311070  507339 logs.go:284] 1 containers: [c13ef036a1014a6bbc5c179a0889fb6ff615988713589da58240b7c637adf687]
	I0116 03:48:28.311132  507339 ssh_runner.go:195] Run: which crictl
	I0116 03:48:28.315791  507339 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0116 03:48:28.315871  507339 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0116 03:48:28.366917  507339 cri.go:89] found id: "33381edd7dded1b5ed158738429b4a7f1f03a6171f2af0832bb5a9864c950725"
	I0116 03:48:28.366947  507339 cri.go:89] found id: ""
	I0116 03:48:28.366957  507339 logs.go:284] 1 containers: [33381edd7dded1b5ed158738429b4a7f1f03a6171f2af0832bb5a9864c950725]
	I0116 03:48:28.367028  507339 ssh_runner.go:195] Run: which crictl
	I0116 03:48:28.372648  507339 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0116 03:48:28.372723  507339 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0116 03:48:28.415530  507339 cri.go:89] found id: "eba2964f029ac61d9cfb59dc548ae05c832f9b23713f51924291fbfc5de985ed"
	I0116 03:48:28.415566  507339 cri.go:89] found id: ""
	I0116 03:48:28.415577  507339 logs.go:284] 1 containers: [eba2964f029ac61d9cfb59dc548ae05c832f9b23713f51924291fbfc5de985ed]
	I0116 03:48:28.415669  507339 ssh_runner.go:195] Run: which crictl
	I0116 03:48:28.420784  507339 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0116 03:48:28.420865  507339 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0116 03:48:28.474238  507339 cri.go:89] found id: "802d4c55aa04370ff079f09dfe4670f5dac1142ca6db880afa17be75f1d20a76"
	I0116 03:48:28.474262  507339 cri.go:89] found id: ""
	I0116 03:48:28.474270  507339 logs.go:284] 1 containers: [802d4c55aa04370ff079f09dfe4670f5dac1142ca6db880afa17be75f1d20a76]
	I0116 03:48:28.474335  507339 ssh_runner.go:195] Run: which crictl
	I0116 03:48:28.479547  507339 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0116 03:48:28.479637  507339 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0116 03:48:28.526403  507339 cri.go:89] found id: ""
	I0116 03:48:28.526436  507339 logs.go:284] 0 containers: []
	W0116 03:48:28.526455  507339 logs.go:286] No container was found matching "kindnet"
	I0116 03:48:28.526466  507339 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0116 03:48:28.526535  507339 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0116 03:48:28.572958  507339 cri.go:89] found id: "b7164c1b7732cf6d54642cb9ffc8d9f16696adc0e428c3fa16cf914420d73e1d"
	I0116 03:48:28.572988  507339 cri.go:89] found id: "59754e94eb3cf3fecfedb9077fbad2ecc2618c351fa9bfa3faed0d780fdd9ecb"
	I0116 03:48:28.572994  507339 cri.go:89] found id: ""
	I0116 03:48:28.573002  507339 logs.go:284] 2 containers: [b7164c1b7732cf6d54642cb9ffc8d9f16696adc0e428c3fa16cf914420d73e1d 59754e94eb3cf3fecfedb9077fbad2ecc2618c351fa9bfa3faed0d780fdd9ecb]
	I0116 03:48:28.573064  507339 ssh_runner.go:195] Run: which crictl
	I0116 03:48:28.579388  507339 ssh_runner.go:195] Run: which crictl
	I0116 03:48:28.585318  507339 logs.go:123] Gathering logs for kube-apiserver [de79f87bc28449077936386f603cc5df933f63455bc96cf6ff7e7c6e4bd32bc4] ...
	I0116 03:48:28.585355  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de79f87bc28449077936386f603cc5df933f63455bc96cf6ff7e7c6e4bd32bc4"
	I0116 03:48:28.640376  507339 logs.go:123] Gathering logs for kube-controller-manager [802d4c55aa04370ff079f09dfe4670f5dac1142ca6db880afa17be75f1d20a76] ...
	I0116 03:48:28.640419  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 802d4c55aa04370ff079f09dfe4670f5dac1142ca6db880afa17be75f1d20a76"
	I0116 03:48:28.701292  507339 logs.go:123] Gathering logs for storage-provisioner [59754e94eb3cf3fecfedb9077fbad2ecc2618c351fa9bfa3faed0d780fdd9ecb] ...
	I0116 03:48:28.701332  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 59754e94eb3cf3fecfedb9077fbad2ecc2618c351fa9bfa3faed0d780fdd9ecb"
	I0116 03:48:28.744571  507339 logs.go:123] Gathering logs for container status ...
	I0116 03:48:28.744605  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0116 03:48:28.794905  507339 logs.go:123] Gathering logs for kubelet ...
	I0116 03:48:28.794942  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0116 03:48:28.847687  507339 logs.go:123] Gathering logs for dmesg ...
	I0116 03:48:28.847736  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0116 03:48:28.861641  507339 logs.go:123] Gathering logs for describe nodes ...
	I0116 03:48:28.861690  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0116 03:48:29.036673  507339 logs.go:123] Gathering logs for kube-proxy [eba2964f029ac61d9cfb59dc548ae05c832f9b23713f51924291fbfc5de985ed] ...
	I0116 03:48:29.036709  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eba2964f029ac61d9cfb59dc548ae05c832f9b23713f51924291fbfc5de985ed"
	I0116 03:48:29.084792  507339 logs.go:123] Gathering logs for CRI-O ...
	I0116 03:48:29.084823  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0116 03:48:29.449656  507339 logs.go:123] Gathering logs for etcd [01aaf51cd40b9137f2395404bb15b7372927f81dc8a4e884600dc8cba9bbeb8e] ...
	I0116 03:48:29.449707  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 01aaf51cd40b9137f2395404bb15b7372927f81dc8a4e884600dc8cba9bbeb8e"
	I0116 03:48:29.502412  507339 logs.go:123] Gathering logs for storage-provisioner [b7164c1b7732cf6d54642cb9ffc8d9f16696adc0e428c3fa16cf914420d73e1d] ...
	I0116 03:48:29.502460  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b7164c1b7732cf6d54642cb9ffc8d9f16696adc0e428c3fa16cf914420d73e1d"
	I0116 03:48:29.546471  507339 logs.go:123] Gathering logs for coredns [c13ef036a1014a6bbc5c179a0889fb6ff615988713589da58240b7c637adf687] ...
	I0116 03:48:29.546520  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c13ef036a1014a6bbc5c179a0889fb6ff615988713589da58240b7c637adf687"
	I0116 03:48:29.594282  507339 logs.go:123] Gathering logs for kube-scheduler [33381edd7dded1b5ed158738429b4a7f1f03a6171f2af0832bb5a9864c950725] ...
	I0116 03:48:29.594329  507339 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33381edd7dded1b5ed158738429b4a7f1f03a6171f2af0832bb5a9864c950725"
	I0116 03:48:27.867485  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:29.868504  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:27.324987  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:29.325330  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:31.329373  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:32.146165  507339 system_pods.go:59] 8 kube-system pods found
	I0116 03:48:32.146209  507339 system_pods.go:61] "coredns-76f75df574-lr95b" [15dc0b11-f7ec-4729-bbfa-79b9649fbad6] Running
	I0116 03:48:32.146218  507339 system_pods.go:61] "etcd-no-preload-666547" [f98dcc57-5869-4a35-b443-2e6c9beddd88] Running
	I0116 03:48:32.146225  507339 system_pods.go:61] "kube-apiserver-no-preload-666547" [3aceae9d-224a-4e8c-a02d-ad4199d8d558] Running
	I0116 03:48:32.146232  507339 system_pods.go:61] "kube-controller-manager-no-preload-666547" [af39c89c-3b41-4d6f-b075-16de04a3ecc0] Running
	I0116 03:48:32.146238  507339 system_pods.go:61] "kube-proxy-dcmrn" [1e91c96f-cbc5-424d-a09e-06e34bf7a2e2] Running
	I0116 03:48:32.146244  507339 system_pods.go:61] "kube-scheduler-no-preload-666547" [0f2aa713-aebc-4858-a348-32169021235e] Running
	I0116 03:48:32.146253  507339 system_pods.go:61] "metrics-server-57f55c9bc5-78vfj" [dbd2d3b2-ec0f-4253-8549-7c4299522c37] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:48:32.146261  507339 system_pods.go:61] "storage-provisioner" [f4e1ba45-217d-41d5-b583-2f60044879bc] Running
	I0116 03:48:32.146272  507339 system_pods.go:74] duration metric: took 3.979321091s to wait for pod list to return data ...
	I0116 03:48:32.146286  507339 default_sa.go:34] waiting for default service account to be created ...
	I0116 03:48:32.149674  507339 default_sa.go:45] found service account: "default"
	I0116 03:48:32.149702  507339 default_sa.go:55] duration metric: took 3.408362ms for default service account to be created ...
	I0116 03:48:32.149710  507339 system_pods.go:116] waiting for k8s-apps to be running ...
	I0116 03:48:32.160459  507339 system_pods.go:86] 8 kube-system pods found
	I0116 03:48:32.160495  507339 system_pods.go:89] "coredns-76f75df574-lr95b" [15dc0b11-f7ec-4729-bbfa-79b9649fbad6] Running
	I0116 03:48:32.160503  507339 system_pods.go:89] "etcd-no-preload-666547" [f98dcc57-5869-4a35-b443-2e6c9beddd88] Running
	I0116 03:48:32.160510  507339 system_pods.go:89] "kube-apiserver-no-preload-666547" [3aceae9d-224a-4e8c-a02d-ad4199d8d558] Running
	I0116 03:48:32.160518  507339 system_pods.go:89] "kube-controller-manager-no-preload-666547" [af39c89c-3b41-4d6f-b075-16de04a3ecc0] Running
	I0116 03:48:32.160524  507339 system_pods.go:89] "kube-proxy-dcmrn" [1e91c96f-cbc5-424d-a09e-06e34bf7a2e2] Running
	I0116 03:48:32.160529  507339 system_pods.go:89] "kube-scheduler-no-preload-666547" [0f2aa713-aebc-4858-a348-32169021235e] Running
	I0116 03:48:32.160540  507339 system_pods.go:89] "metrics-server-57f55c9bc5-78vfj" [dbd2d3b2-ec0f-4253-8549-7c4299522c37] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:48:32.160548  507339 system_pods.go:89] "storage-provisioner" [f4e1ba45-217d-41d5-b583-2f60044879bc] Running
	I0116 03:48:32.160560  507339 system_pods.go:126] duration metric: took 10.843124ms to wait for k8s-apps to be running ...
	I0116 03:48:32.160569  507339 system_svc.go:44] waiting for kubelet service to be running ....
	I0116 03:48:32.160629  507339 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 03:48:32.179349  507339 system_svc.go:56] duration metric: took 18.767357ms WaitForService to wait for kubelet.
	I0116 03:48:32.179391  507339 kubeadm.go:581] duration metric: took 4m25.181271548s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0116 03:48:32.179426  507339 node_conditions.go:102] verifying NodePressure condition ...
	I0116 03:48:32.185135  507339 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 03:48:32.185165  507339 node_conditions.go:123] node cpu capacity is 2
	I0116 03:48:32.185198  507339 node_conditions.go:105] duration metric: took 5.766084ms to run NodePressure ...
	I0116 03:48:32.185219  507339 start.go:228] waiting for startup goroutines ...
	I0116 03:48:32.185228  507339 start.go:233] waiting for cluster config update ...
	I0116 03:48:32.185269  507339 start.go:242] writing updated cluster config ...
	I0116 03:48:32.185860  507339 ssh_runner.go:195] Run: rm -f paused
	I0116 03:48:32.243812  507339 start.go:600] kubectl: 1.29.0, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0116 03:48:32.246056  507339 out.go:177] * Done! kubectl is now configured to use "no-preload-666547" cluster and "default" namespace by default
	I0116 03:48:31.940664  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:33.941163  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:31.868778  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:34.367292  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:33.825761  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:35.829060  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:36.440459  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:38.440778  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:36.367672  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:38.867024  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:40.867193  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:38.325077  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:40.326947  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:40.440990  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:42.942197  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:43.365931  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:45.367057  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:42.826200  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:44.827292  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:45.441601  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:47.443035  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:47.367959  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:49.867083  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:47.326224  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:49.326339  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:49.940592  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:51.942424  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:54.440478  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:51.868254  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:54.368867  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:51.825317  507889 pod_ready.go:102] pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:52.325756  507889 pod_ready.go:81] duration metric: took 4m0.008011182s waiting for pod "metrics-server-57f55c9bc5-894n2" in "kube-system" namespace to be "Ready" ...
	E0116 03:48:52.325782  507889 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0116 03:48:52.325790  507889 pod_ready.go:38] duration metric: took 4m4.320002841s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 03:48:52.325804  507889 api_server.go:52] waiting for apiserver process to appear ...
	I0116 03:48:52.325855  507889 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0116 03:48:52.325905  507889 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0116 03:48:52.394600  507889 cri.go:89] found id: "f9861ff0fbab72660648bfcb49b82810bada99f73a55cb0aa359ba114825b8f3"
	I0116 03:48:52.394624  507889 cri.go:89] found id: ""
	I0116 03:48:52.394632  507889 logs.go:284] 1 containers: [f9861ff0fbab72660648bfcb49b82810bada99f73a55cb0aa359ba114825b8f3]
	I0116 03:48:52.394716  507889 ssh_runner.go:195] Run: which crictl
	I0116 03:48:52.400137  507889 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0116 03:48:52.400232  507889 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0116 03:48:52.444453  507889 cri.go:89] found id: "e2758ac4468b10765b35629ffc54228655bc18f79a654cc03e91929ebdc3cf90"
	I0116 03:48:52.444485  507889 cri.go:89] found id: ""
	I0116 03:48:52.444495  507889 logs.go:284] 1 containers: [e2758ac4468b10765b35629ffc54228655bc18f79a654cc03e91929ebdc3cf90]
	I0116 03:48:52.444557  507889 ssh_runner.go:195] Run: which crictl
	I0116 03:48:52.449850  507889 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0116 03:48:52.450002  507889 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0116 03:48:52.499160  507889 cri.go:89] found id: "a07ae23e6e9e3c589264d0be7095896549a4b71fa2d0b06805a3b1b3bea16f6a"
	I0116 03:48:52.499204  507889 cri.go:89] found id: ""
	I0116 03:48:52.499216  507889 logs.go:284] 1 containers: [a07ae23e6e9e3c589264d0be7095896549a4b71fa2d0b06805a3b1b3bea16f6a]
	I0116 03:48:52.499286  507889 ssh_runner.go:195] Run: which crictl
	I0116 03:48:52.504257  507889 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0116 03:48:52.504357  507889 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0116 03:48:52.563747  507889 cri.go:89] found id: "e60387e0e2800d67c99eb3566f31ad675c0db6c22379fc28ee9a447af5f5023c"
	I0116 03:48:52.563782  507889 cri.go:89] found id: ""
	I0116 03:48:52.563790  507889 logs.go:284] 1 containers: [e60387e0e2800d67c99eb3566f31ad675c0db6c22379fc28ee9a447af5f5023c]
	I0116 03:48:52.563860  507889 ssh_runner.go:195] Run: which crictl
	I0116 03:48:52.568676  507889 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0116 03:48:52.568771  507889 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0116 03:48:52.617090  507889 cri.go:89] found id: "44f71a7069827d97c7326cbd36638ec36ff39cbbe0a1e4e61e7c010a38d2e123"
	I0116 03:48:52.617136  507889 cri.go:89] found id: ""
	I0116 03:48:52.617149  507889 logs.go:284] 1 containers: [44f71a7069827d97c7326cbd36638ec36ff39cbbe0a1e4e61e7c010a38d2e123]
	I0116 03:48:52.617222  507889 ssh_runner.go:195] Run: which crictl
	I0116 03:48:52.622121  507889 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0116 03:48:52.622224  507889 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0116 03:48:52.685004  507889 cri.go:89] found id: "1438a3832328a03b94c3d408a2e332c0fb0adab915a9d4f26994e84c0509fac9"
	I0116 03:48:52.685033  507889 cri.go:89] found id: ""
	I0116 03:48:52.685043  507889 logs.go:284] 1 containers: [1438a3832328a03b94c3d408a2e332c0fb0adab915a9d4f26994e84c0509fac9]
	I0116 03:48:52.685113  507889 ssh_runner.go:195] Run: which crictl
	I0116 03:48:52.689837  507889 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0116 03:48:52.689913  507889 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0116 03:48:52.730008  507889 cri.go:89] found id: ""
	I0116 03:48:52.730034  507889 logs.go:284] 0 containers: []
	W0116 03:48:52.730044  507889 logs.go:286] No container was found matching "kindnet"
	I0116 03:48:52.730051  507889 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0116 03:48:52.730120  507889 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0116 03:48:52.780523  507889 cri.go:89] found id: "33ba3a03d878abc86a194defd715bb61a0066e49063e3823002bd3ec421da138"
	I0116 03:48:52.780554  507889 cri.go:89] found id: "a4b27881ef90cea325e8f306fac88a76052eec15a693c291288664b8f4ebcc2a"
	I0116 03:48:52.780562  507889 cri.go:89] found id: ""
	I0116 03:48:52.780571  507889 logs.go:284] 2 containers: [33ba3a03d878abc86a194defd715bb61a0066e49063e3823002bd3ec421da138 a4b27881ef90cea325e8f306fac88a76052eec15a693c291288664b8f4ebcc2a]
	I0116 03:48:52.780641  507889 ssh_runner.go:195] Run: which crictl
	I0116 03:48:52.787305  507889 ssh_runner.go:195] Run: which crictl
	I0116 03:48:52.791352  507889 logs.go:123] Gathering logs for kubelet ...
	I0116 03:48:52.791383  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0116 03:48:52.859099  507889 logs.go:123] Gathering logs for etcd [e2758ac4468b10765b35629ffc54228655bc18f79a654cc03e91929ebdc3cf90] ...
	I0116 03:48:52.859152  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e2758ac4468b10765b35629ffc54228655bc18f79a654cc03e91929ebdc3cf90"
	I0116 03:48:52.912806  507889 logs.go:123] Gathering logs for coredns [a07ae23e6e9e3c589264d0be7095896549a4b71fa2d0b06805a3b1b3bea16f6a] ...
	I0116 03:48:52.912852  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a07ae23e6e9e3c589264d0be7095896549a4b71fa2d0b06805a3b1b3bea16f6a"
	I0116 03:48:52.960880  507889 logs.go:123] Gathering logs for kube-controller-manager [1438a3832328a03b94c3d408a2e332c0fb0adab915a9d4f26994e84c0509fac9] ...
	I0116 03:48:52.960919  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1438a3832328a03b94c3d408a2e332c0fb0adab915a9d4f26994e84c0509fac9"
	I0116 03:48:53.023064  507889 logs.go:123] Gathering logs for CRI-O ...
	I0116 03:48:53.023110  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0116 03:48:53.524890  507889 logs.go:123] Gathering logs for kube-apiserver [f9861ff0fbab72660648bfcb49b82810bada99f73a55cb0aa359ba114825b8f3] ...
	I0116 03:48:53.524934  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f9861ff0fbab72660648bfcb49b82810bada99f73a55cb0aa359ba114825b8f3"
	I0116 03:48:53.587550  507889 logs.go:123] Gathering logs for kube-scheduler [e60387e0e2800d67c99eb3566f31ad675c0db6c22379fc28ee9a447af5f5023c] ...
	I0116 03:48:53.587594  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e60387e0e2800d67c99eb3566f31ad675c0db6c22379fc28ee9a447af5f5023c"
	I0116 03:48:53.627986  507889 logs.go:123] Gathering logs for kube-proxy [44f71a7069827d97c7326cbd36638ec36ff39cbbe0a1e4e61e7c010a38d2e123] ...
	I0116 03:48:53.628029  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44f71a7069827d97c7326cbd36638ec36ff39cbbe0a1e4e61e7c010a38d2e123"
	I0116 03:48:53.671704  507889 logs.go:123] Gathering logs for dmesg ...
	I0116 03:48:53.671739  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0116 03:48:53.686333  507889 logs.go:123] Gathering logs for describe nodes ...
	I0116 03:48:53.686370  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0116 03:48:53.855391  507889 logs.go:123] Gathering logs for storage-provisioner [33ba3a03d878abc86a194defd715bb61a0066e49063e3823002bd3ec421da138] ...
	I0116 03:48:53.855435  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33ba3a03d878abc86a194defd715bb61a0066e49063e3823002bd3ec421da138"
	I0116 03:48:53.906028  507889 logs.go:123] Gathering logs for storage-provisioner [a4b27881ef90cea325e8f306fac88a76052eec15a693c291288664b8f4ebcc2a] ...
	I0116 03:48:53.906064  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a4b27881ef90cea325e8f306fac88a76052eec15a693c291288664b8f4ebcc2a"
	I0116 03:48:53.945386  507889 logs.go:123] Gathering logs for container status ...
	I0116 03:48:53.945419  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0116 03:48:56.498685  507889 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:48:56.516768  507889 api_server.go:72] duration metric: took 4m13.505914609s to wait for apiserver process to appear ...
	I0116 03:48:56.516797  507889 api_server.go:88] waiting for apiserver healthz status ...
	I0116 03:48:56.516836  507889 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0116 03:48:56.516907  507889 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0116 03:48:56.563236  507889 cri.go:89] found id: "f9861ff0fbab72660648bfcb49b82810bada99f73a55cb0aa359ba114825b8f3"
	I0116 03:48:56.563272  507889 cri.go:89] found id: ""
	I0116 03:48:56.563283  507889 logs.go:284] 1 containers: [f9861ff0fbab72660648bfcb49b82810bada99f73a55cb0aa359ba114825b8f3]
	I0116 03:48:56.563356  507889 ssh_runner.go:195] Run: which crictl
	I0116 03:48:56.568012  507889 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0116 03:48:56.568188  507889 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0116 03:48:56.443226  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:58.940353  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:56.868597  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:59.366906  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:48:56.613095  507889 cri.go:89] found id: "e2758ac4468b10765b35629ffc54228655bc18f79a654cc03e91929ebdc3cf90"
	I0116 03:48:56.613120  507889 cri.go:89] found id: ""
	I0116 03:48:56.613129  507889 logs.go:284] 1 containers: [e2758ac4468b10765b35629ffc54228655bc18f79a654cc03e91929ebdc3cf90]
	I0116 03:48:56.613190  507889 ssh_runner.go:195] Run: which crictl
	I0116 03:48:56.618736  507889 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0116 03:48:56.618827  507889 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0116 03:48:56.672773  507889 cri.go:89] found id: "a07ae23e6e9e3c589264d0be7095896549a4b71fa2d0b06805a3b1b3bea16f6a"
	I0116 03:48:56.672796  507889 cri.go:89] found id: ""
	I0116 03:48:56.672805  507889 logs.go:284] 1 containers: [a07ae23e6e9e3c589264d0be7095896549a4b71fa2d0b06805a3b1b3bea16f6a]
	I0116 03:48:56.672855  507889 ssh_runner.go:195] Run: which crictl
	I0116 03:48:56.679218  507889 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0116 03:48:56.679293  507889 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0116 03:48:56.724517  507889 cri.go:89] found id: "e60387e0e2800d67c99eb3566f31ad675c0db6c22379fc28ee9a447af5f5023c"
	I0116 03:48:56.724547  507889 cri.go:89] found id: ""
	I0116 03:48:56.724555  507889 logs.go:284] 1 containers: [e60387e0e2800d67c99eb3566f31ad675c0db6c22379fc28ee9a447af5f5023c]
	I0116 03:48:56.724622  507889 ssh_runner.go:195] Run: which crictl
	I0116 03:48:56.730061  507889 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0116 03:48:56.730146  507889 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0116 03:48:56.775380  507889 cri.go:89] found id: "44f71a7069827d97c7326cbd36638ec36ff39cbbe0a1e4e61e7c010a38d2e123"
	I0116 03:48:56.775413  507889 cri.go:89] found id: ""
	I0116 03:48:56.775423  507889 logs.go:284] 1 containers: [44f71a7069827d97c7326cbd36638ec36ff39cbbe0a1e4e61e7c010a38d2e123]
	I0116 03:48:56.775494  507889 ssh_runner.go:195] Run: which crictl
	I0116 03:48:56.781085  507889 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0116 03:48:56.781183  507889 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0116 03:48:56.830030  507889 cri.go:89] found id: "1438a3832328a03b94c3d408a2e332c0fb0adab915a9d4f26994e84c0509fac9"
	I0116 03:48:56.830067  507889 cri.go:89] found id: ""
	I0116 03:48:56.830076  507889 logs.go:284] 1 containers: [1438a3832328a03b94c3d408a2e332c0fb0adab915a9d4f26994e84c0509fac9]
	I0116 03:48:56.830163  507889 ssh_runner.go:195] Run: which crictl
	I0116 03:48:56.834956  507889 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0116 03:48:56.835035  507889 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0116 03:48:56.882972  507889 cri.go:89] found id: ""
	I0116 03:48:56.883001  507889 logs.go:284] 0 containers: []
	W0116 03:48:56.883013  507889 logs.go:286] No container was found matching "kindnet"
	I0116 03:48:56.883022  507889 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0116 03:48:56.883095  507889 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0116 03:48:56.925520  507889 cri.go:89] found id: "33ba3a03d878abc86a194defd715bb61a0066e49063e3823002bd3ec421da138"
	I0116 03:48:56.925553  507889 cri.go:89] found id: "a4b27881ef90cea325e8f306fac88a76052eec15a693c291288664b8f4ebcc2a"
	I0116 03:48:56.925560  507889 cri.go:89] found id: ""
	I0116 03:48:56.925574  507889 logs.go:284] 2 containers: [33ba3a03d878abc86a194defd715bb61a0066e49063e3823002bd3ec421da138 a4b27881ef90cea325e8f306fac88a76052eec15a693c291288664b8f4ebcc2a]
	I0116 03:48:56.925656  507889 ssh_runner.go:195] Run: which crictl
	I0116 03:48:56.931331  507889 ssh_runner.go:195] Run: which crictl
	I0116 03:48:56.936492  507889 logs.go:123] Gathering logs for storage-provisioner [a4b27881ef90cea325e8f306fac88a76052eec15a693c291288664b8f4ebcc2a] ...
	I0116 03:48:56.936527  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a4b27881ef90cea325e8f306fac88a76052eec15a693c291288664b8f4ebcc2a"
	I0116 03:48:56.981819  507889 logs.go:123] Gathering logs for kubelet ...
	I0116 03:48:56.981851  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0116 03:48:57.045678  507889 logs.go:123] Gathering logs for dmesg ...
	I0116 03:48:57.045723  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0116 03:48:57.060832  507889 logs.go:123] Gathering logs for etcd [e2758ac4468b10765b35629ffc54228655bc18f79a654cc03e91929ebdc3cf90] ...
	I0116 03:48:57.060872  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e2758ac4468b10765b35629ffc54228655bc18f79a654cc03e91929ebdc3cf90"
	I0116 03:48:57.123644  507889 logs.go:123] Gathering logs for kube-scheduler [e60387e0e2800d67c99eb3566f31ad675c0db6c22379fc28ee9a447af5f5023c] ...
	I0116 03:48:57.123695  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e60387e0e2800d67c99eb3566f31ad675c0db6c22379fc28ee9a447af5f5023c"
	I0116 03:48:57.170173  507889 logs.go:123] Gathering logs for kube-proxy [44f71a7069827d97c7326cbd36638ec36ff39cbbe0a1e4e61e7c010a38d2e123] ...
	I0116 03:48:57.170216  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44f71a7069827d97c7326cbd36638ec36ff39cbbe0a1e4e61e7c010a38d2e123"
	I0116 03:48:57.215434  507889 logs.go:123] Gathering logs for describe nodes ...
	I0116 03:48:57.215470  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0116 03:48:57.370036  507889 logs.go:123] Gathering logs for kube-controller-manager [1438a3832328a03b94c3d408a2e332c0fb0adab915a9d4f26994e84c0509fac9] ...
	I0116 03:48:57.370081  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1438a3832328a03b94c3d408a2e332c0fb0adab915a9d4f26994e84c0509fac9"
	I0116 03:48:57.432988  507889 logs.go:123] Gathering logs for container status ...
	I0116 03:48:57.433048  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0116 03:48:57.485239  507889 logs.go:123] Gathering logs for kube-apiserver [f9861ff0fbab72660648bfcb49b82810bada99f73a55cb0aa359ba114825b8f3] ...
	I0116 03:48:57.485284  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f9861ff0fbab72660648bfcb49b82810bada99f73a55cb0aa359ba114825b8f3"
	I0116 03:48:57.547192  507889 logs.go:123] Gathering logs for coredns [a07ae23e6e9e3c589264d0be7095896549a4b71fa2d0b06805a3b1b3bea16f6a] ...
	I0116 03:48:57.547237  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a07ae23e6e9e3c589264d0be7095896549a4b71fa2d0b06805a3b1b3bea16f6a"
	I0116 03:48:57.598025  507889 logs.go:123] Gathering logs for storage-provisioner [33ba3a03d878abc86a194defd715bb61a0066e49063e3823002bd3ec421da138] ...
	I0116 03:48:57.598085  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33ba3a03d878abc86a194defd715bb61a0066e49063e3823002bd3ec421da138"
	I0116 03:48:57.644234  507889 logs.go:123] Gathering logs for CRI-O ...
	I0116 03:48:57.644271  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0116 03:49:00.562219  507889 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8444/healthz ...
	I0116 03:49:00.568196  507889 api_server.go:279] https://192.168.50.236:8444/healthz returned 200:
	ok
	I0116 03:49:00.571612  507889 api_server.go:141] control plane version: v1.28.4
	I0116 03:49:00.571655  507889 api_server.go:131] duration metric: took 4.0548511s to wait for apiserver health ...
	I0116 03:49:00.571668  507889 system_pods.go:43] waiting for kube-system pods to appear ...
	I0116 03:49:00.571701  507889 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0116 03:49:00.571774  507889 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0116 03:49:00.623308  507889 cri.go:89] found id: "f9861ff0fbab72660648bfcb49b82810bada99f73a55cb0aa359ba114825b8f3"
	I0116 03:49:00.623344  507889 cri.go:89] found id: ""
	I0116 03:49:00.623355  507889 logs.go:284] 1 containers: [f9861ff0fbab72660648bfcb49b82810bada99f73a55cb0aa359ba114825b8f3]
	I0116 03:49:00.623418  507889 ssh_runner.go:195] Run: which crictl
	I0116 03:49:00.630287  507889 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0116 03:49:00.630381  507889 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0116 03:49:00.673225  507889 cri.go:89] found id: "e2758ac4468b10765b35629ffc54228655bc18f79a654cc03e91929ebdc3cf90"
	I0116 03:49:00.673265  507889 cri.go:89] found id: ""
	I0116 03:49:00.673276  507889 logs.go:284] 1 containers: [e2758ac4468b10765b35629ffc54228655bc18f79a654cc03e91929ebdc3cf90]
	I0116 03:49:00.673334  507889 ssh_runner.go:195] Run: which crictl
	I0116 03:49:00.678677  507889 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0116 03:49:00.678768  507889 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0116 03:49:00.723055  507889 cri.go:89] found id: "a07ae23e6e9e3c589264d0be7095896549a4b71fa2d0b06805a3b1b3bea16f6a"
	I0116 03:49:00.723081  507889 cri.go:89] found id: ""
	I0116 03:49:00.723089  507889 logs.go:284] 1 containers: [a07ae23e6e9e3c589264d0be7095896549a4b71fa2d0b06805a3b1b3bea16f6a]
	I0116 03:49:00.723148  507889 ssh_runner.go:195] Run: which crictl
	I0116 03:49:00.727931  507889 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0116 03:49:00.728053  507889 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0116 03:49:00.777602  507889 cri.go:89] found id: "e60387e0e2800d67c99eb3566f31ad675c0db6c22379fc28ee9a447af5f5023c"
	I0116 03:49:00.777639  507889 cri.go:89] found id: ""
	I0116 03:49:00.777651  507889 logs.go:284] 1 containers: [e60387e0e2800d67c99eb3566f31ad675c0db6c22379fc28ee9a447af5f5023c]
	I0116 03:49:00.777723  507889 ssh_runner.go:195] Run: which crictl
	I0116 03:49:00.787121  507889 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0116 03:49:00.787206  507889 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0116 03:49:00.835268  507889 cri.go:89] found id: "44f71a7069827d97c7326cbd36638ec36ff39cbbe0a1e4e61e7c010a38d2e123"
	I0116 03:49:00.835300  507889 cri.go:89] found id: ""
	I0116 03:49:00.835310  507889 logs.go:284] 1 containers: [44f71a7069827d97c7326cbd36638ec36ff39cbbe0a1e4e61e7c010a38d2e123]
	I0116 03:49:00.835378  507889 ssh_runner.go:195] Run: which crictl
	I0116 03:49:00.842204  507889 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0116 03:49:00.842299  507889 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0116 03:49:00.889511  507889 cri.go:89] found id: "1438a3832328a03b94c3d408a2e332c0fb0adab915a9d4f26994e84c0509fac9"
	I0116 03:49:00.889541  507889 cri.go:89] found id: ""
	I0116 03:49:00.889551  507889 logs.go:284] 1 containers: [1438a3832328a03b94c3d408a2e332c0fb0adab915a9d4f26994e84c0509fac9]
	I0116 03:49:00.889620  507889 ssh_runner.go:195] Run: which crictl
	I0116 03:49:00.894964  507889 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0116 03:49:00.895059  507889 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0116 03:49:00.937187  507889 cri.go:89] found id: ""
	I0116 03:49:00.937221  507889 logs.go:284] 0 containers: []
	W0116 03:49:00.937237  507889 logs.go:286] No container was found matching "kindnet"
	I0116 03:49:00.937246  507889 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0116 03:49:00.937313  507889 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0116 03:49:00.977711  507889 cri.go:89] found id: "33ba3a03d878abc86a194defd715bb61a0066e49063e3823002bd3ec421da138"
	I0116 03:49:00.977740  507889 cri.go:89] found id: "a4b27881ef90cea325e8f306fac88a76052eec15a693c291288664b8f4ebcc2a"
	I0116 03:49:00.977748  507889 cri.go:89] found id: ""
	I0116 03:49:00.977756  507889 logs.go:284] 2 containers: [33ba3a03d878abc86a194defd715bb61a0066e49063e3823002bd3ec421da138 a4b27881ef90cea325e8f306fac88a76052eec15a693c291288664b8f4ebcc2a]
	I0116 03:49:00.977834  507889 ssh_runner.go:195] Run: which crictl
	I0116 03:49:00.982886  507889 ssh_runner.go:195] Run: which crictl
	I0116 03:49:00.988008  507889 logs.go:123] Gathering logs for describe nodes ...
	I0116 03:49:00.988061  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0116 03:49:01.115755  507889 logs.go:123] Gathering logs for dmesg ...
	I0116 03:49:01.115791  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0116 03:49:01.131706  507889 logs.go:123] Gathering logs for etcd [e2758ac4468b10765b35629ffc54228655bc18f79a654cc03e91929ebdc3cf90] ...
	I0116 03:49:01.131748  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e2758ac4468b10765b35629ffc54228655bc18f79a654cc03e91929ebdc3cf90"
	I0116 03:49:01.186279  507889 logs.go:123] Gathering logs for kube-scheduler [e60387e0e2800d67c99eb3566f31ad675c0db6c22379fc28ee9a447af5f5023c] ...
	I0116 03:49:01.186324  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e60387e0e2800d67c99eb3566f31ad675c0db6c22379fc28ee9a447af5f5023c"
	I0116 03:49:01.231057  507889 logs.go:123] Gathering logs for kube-controller-manager [1438a3832328a03b94c3d408a2e332c0fb0adab915a9d4f26994e84c0509fac9] ...
	I0116 03:49:01.231100  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1438a3832328a03b94c3d408a2e332c0fb0adab915a9d4f26994e84c0509fac9"
	I0116 03:49:01.307541  507889 logs.go:123] Gathering logs for storage-provisioner [33ba3a03d878abc86a194defd715bb61a0066e49063e3823002bd3ec421da138] ...
	I0116 03:49:01.307586  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33ba3a03d878abc86a194defd715bb61a0066e49063e3823002bd3ec421da138"
	I0116 03:49:01.356517  507889 logs.go:123] Gathering logs for storage-provisioner [a4b27881ef90cea325e8f306fac88a76052eec15a693c291288664b8f4ebcc2a] ...
	I0116 03:49:01.356563  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a4b27881ef90cea325e8f306fac88a76052eec15a693c291288664b8f4ebcc2a"
	I0116 03:49:01.409790  507889 logs.go:123] Gathering logs for kube-apiserver [f9861ff0fbab72660648bfcb49b82810bada99f73a55cb0aa359ba114825b8f3] ...
	I0116 03:49:01.409846  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f9861ff0fbab72660648bfcb49b82810bada99f73a55cb0aa359ba114825b8f3"
	I0116 03:49:01.462029  507889 logs.go:123] Gathering logs for CRI-O ...
	I0116 03:49:01.462077  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0116 03:49:00.942100  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:49:02.942316  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:49:01.838933  507889 logs.go:123] Gathering logs for coredns [a07ae23e6e9e3c589264d0be7095896549a4b71fa2d0b06805a3b1b3bea16f6a] ...
	I0116 03:49:01.838999  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a07ae23e6e9e3c589264d0be7095896549a4b71fa2d0b06805a3b1b3bea16f6a"
	I0116 03:49:01.884022  507889 logs.go:123] Gathering logs for kube-proxy [44f71a7069827d97c7326cbd36638ec36ff39cbbe0a1e4e61e7c010a38d2e123] ...
	I0116 03:49:01.884075  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44f71a7069827d97c7326cbd36638ec36ff39cbbe0a1e4e61e7c010a38d2e123"
	I0116 03:49:01.930032  507889 logs.go:123] Gathering logs for container status ...
	I0116 03:49:01.930090  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0116 03:49:01.998827  507889 logs.go:123] Gathering logs for kubelet ...
	I0116 03:49:01.998863  507889 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0116 03:49:04.573529  507889 system_pods.go:59] 8 kube-system pods found
	I0116 03:49:04.573571  507889 system_pods.go:61] "coredns-5dd5756b68-pmx8n" [9d3c9941-8938-4d4a-b6d8-9b542ca6f1ca] Running
	I0116 03:49:04.573579  507889 system_pods.go:61] "etcd-default-k8s-diff-port-434445" [a2327159-d034-4f62-a8f5-14062a41c75d] Running
	I0116 03:49:04.573587  507889 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-434445" [75c5fc5e-86b1-42cf-bb5c-8a1f8b4e13db] Running
	I0116 03:49:04.573594  507889 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-434445" [2568dd4f-124d-4c4f-8651-3b89b2b54983] Running
	I0116 03:49:04.573600  507889 system_pods.go:61] "kube-proxy-dcbqg" [eba1f9bf-6aa7-40cd-b57c-745c2d0cc414] Running
	I0116 03:49:04.573607  507889 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-434445" [479952a1-8c3d-49b4-bd22-fef988e6b83d] Running
	I0116 03:49:04.573617  507889 system_pods.go:61] "metrics-server-57f55c9bc5-894n2" [46e4892a-d026-4a9d-88bc-128e92848808] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:49:04.573626  507889 system_pods.go:61] "storage-provisioner" [16fd4585-3d75-40c3-a28d-4134375f4e3d] Running
	I0116 03:49:04.573638  507889 system_pods.go:74] duration metric: took 4.001961367s to wait for pod list to return data ...
	I0116 03:49:04.573657  507889 default_sa.go:34] waiting for default service account to be created ...
	I0116 03:49:04.577012  507889 default_sa.go:45] found service account: "default"
	I0116 03:49:04.577041  507889 default_sa.go:55] duration metric: took 3.376395ms for default service account to be created ...
	I0116 03:49:04.577051  507889 system_pods.go:116] waiting for k8s-apps to be running ...
	I0116 03:49:04.583833  507889 system_pods.go:86] 8 kube-system pods found
	I0116 03:49:04.583880  507889 system_pods.go:89] "coredns-5dd5756b68-pmx8n" [9d3c9941-8938-4d4a-b6d8-9b542ca6f1ca] Running
	I0116 03:49:04.583890  507889 system_pods.go:89] "etcd-default-k8s-diff-port-434445" [a2327159-d034-4f62-a8f5-14062a41c75d] Running
	I0116 03:49:04.583898  507889 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-434445" [75c5fc5e-86b1-42cf-bb5c-8a1f8b4e13db] Running
	I0116 03:49:04.583905  507889 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-434445" [2568dd4f-124d-4c4f-8651-3b89b2b54983] Running
	I0116 03:49:04.583911  507889 system_pods.go:89] "kube-proxy-dcbqg" [eba1f9bf-6aa7-40cd-b57c-745c2d0cc414] Running
	I0116 03:49:04.583918  507889 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-434445" [479952a1-8c3d-49b4-bd22-fef988e6b83d] Running
	I0116 03:49:04.583928  507889 system_pods.go:89] "metrics-server-57f55c9bc5-894n2" [46e4892a-d026-4a9d-88bc-128e92848808] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:49:04.583936  507889 system_pods.go:89] "storage-provisioner" [16fd4585-3d75-40c3-a28d-4134375f4e3d] Running
	I0116 03:49:04.583950  507889 system_pods.go:126] duration metric: took 6.89136ms to wait for k8s-apps to be running ...
	I0116 03:49:04.583964  507889 system_svc.go:44] waiting for kubelet service to be running ....
	I0116 03:49:04.584016  507889 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 03:49:04.600209  507889 system_svc.go:56] duration metric: took 16.229333ms WaitForService to wait for kubelet.
	I0116 03:49:04.600252  507889 kubeadm.go:581] duration metric: took 4m21.589410808s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0116 03:49:04.600285  507889 node_conditions.go:102] verifying NodePressure condition ...
	I0116 03:49:04.603774  507889 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 03:49:04.603803  507889 node_conditions.go:123] node cpu capacity is 2
	I0116 03:49:04.603815  507889 node_conditions.go:105] duration metric: took 3.52526ms to run NodePressure ...
	I0116 03:49:04.603829  507889 start.go:228] waiting for startup goroutines ...
	I0116 03:49:04.603836  507889 start.go:233] waiting for cluster config update ...
	I0116 03:49:04.603849  507889 start.go:242] writing updated cluster config ...
	I0116 03:49:04.604185  507889 ssh_runner.go:195] Run: rm -f paused
	I0116 03:49:04.658922  507889 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I0116 03:49:04.661265  507889 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-434445" cluster and "default" namespace by default
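
The block above shows minikube's readiness gates for the default-k8s-diff-port-434445 profile: it lists the kube-system pods, waits for the default service account, confirms the kubelet unit is active, and checks node conditions before printing Done. A rough manual equivalent of those checks, assuming the kubeconfig context carries the profile name as minikube configures it, would be:

    # Spot-check the same things minikube waited on (illustrative sketch, not the test harness itself)
    kubectl --context default-k8s-diff-port-434445 -n kube-system get pods
    kubectl --context default-k8s-diff-port-434445 get serviceaccount default
    kubectl --context default-k8s-diff-port-434445 wait --for=condition=Ready node --all --timeout=120s
    minikube -p default-k8s-diff-port-434445 ssh "sudo systemctl is-active kubelet"
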
	I0116 03:49:01.367935  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:49:03.867391  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:49:05.867519  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:49:05.440602  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:49:07.441041  507510 pod_ready.go:102] pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace has status "Ready":"False"
	I0116 03:49:08.434235  507510 pod_ready.go:81] duration metric: took 4m0.001038173s waiting for pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace to be "Ready" ...
	E0116 03:49:08.434278  507510 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-74d5856cc6-4x5l7" in "kube-system" namespace to be "Ready" (will not retry!)
	I0116 03:49:08.434304  507510 pod_ready.go:38] duration metric: took 4m1.20014772s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 03:49:08.434338  507510 kubeadm.go:640] restartCluster took 5m11.767236835s
	W0116 03:49:08.434423  507510 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0116 03:49:08.434463  507510 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0116 03:49:07.868307  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:49:10.367347  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:49:15.339252  507510 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (6.904753674s)
	I0116 03:49:15.339341  507510 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 03:49:15.355684  507510 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0116 03:49:15.371377  507510 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0116 03:49:15.393609  507510 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0116 03:49:15.393674  507510 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0116 03:49:15.478382  507510 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0116 03:49:15.478464  507510 kubeadm.go:322] [preflight] Running pre-flight checks
	I0116 03:49:15.663487  507510 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0116 03:49:15.663663  507510 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0116 03:49:15.663803  507510 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0116 03:49:15.940677  507510 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0116 03:49:15.940857  507510 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0116 03:49:15.949553  507510 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0116 03:49:16.075111  507510 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0116 03:49:12.867512  507257 pod_ready.go:102] pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace has status "Ready":"False"
	I0116 03:49:13.859320  507257 pod_ready.go:81] duration metric: took 4m0.000451049s waiting for pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace to be "Ready" ...
	E0116 03:49:13.859353  507257 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-48gnw" in "kube-system" namespace to be "Ready" (will not retry!)
	I0116 03:49:13.859375  507257 pod_ready.go:38] duration metric: took 4m12.063407854s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 03:49:13.859418  507257 kubeadm.go:640] restartCluster took 4m32.047022773s
	W0116 03:49:13.859484  507257 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0116 03:49:13.859513  507257 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
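
Both restart attempts above give up because a metrics-server pod never reaches Ready within 4m0s, which is what triggers the kubeadm reset. If one wanted to dig into such a pod by hand, a sketch along these lines would apply; the k8s-app=metrics-server label is assumed from the stock minikube addon manifests, and the current kubeconfig context is assumed to point at the affected profile:

    # Illustrative only: inspect a metrics-server pod stuck in ContainersNotReady
    kubectl -n kube-system get pods -l k8s-app=metrics-server -o wide
    kubectl -n kube-system describe pods -l k8s-app=metrics-server   # readiness probe and image pull events
    kubectl -n kube-system logs -l k8s-app=metrics-server --tail=50
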
	I0116 03:49:16.077099  507510 out.go:204]   - Generating certificates and keys ...
	I0116 03:49:16.077224  507510 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0116 03:49:16.077305  507510 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0116 03:49:16.077410  507510 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0116 03:49:16.077504  507510 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0116 03:49:16.077617  507510 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0116 03:49:16.077745  507510 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0116 03:49:16.078085  507510 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0116 03:49:16.078639  507510 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0116 03:49:16.079112  507510 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0116 03:49:16.079719  507510 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0116 03:49:16.079935  507510 kubeadm.go:322] [certs] Using the existing "sa" key
	I0116 03:49:16.080015  507510 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0116 03:49:16.246902  507510 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0116 03:49:16.332722  507510 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0116 03:49:16.534277  507510 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0116 03:49:16.908642  507510 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0116 03:49:16.909711  507510 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0116 03:49:16.911960  507510 out.go:204]   - Booting up control plane ...
	I0116 03:49:16.912103  507510 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0116 03:49:16.923200  507510 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0116 03:49:16.924797  507510 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0116 03:49:16.926738  507510 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0116 03:49:16.937544  507510 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0116 03:49:27.943253  507510 kubeadm.go:322] [apiclient] All control plane components are healthy after 11.005405 seconds
	I0116 03:49:27.943474  507510 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0116 03:49:27.970644  507510 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I0116 03:49:28.500660  507510 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0116 03:49:28.500847  507510 kubeadm.go:322] [mark-control-plane] Marking the node old-k8s-version-696770 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0116 03:49:29.015036  507510 kubeadm.go:322] [bootstrap-token] Using token: nr2yh0.22ni19zxk2s7hw9l
	I0116 03:49:28.504409  507257 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (14.644866985s)
	I0116 03:49:28.504498  507257 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 03:49:28.519788  507257 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0116 03:49:28.531667  507257 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0116 03:49:28.543058  507257 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0116 03:49:28.543113  507257 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0116 03:49:28.603369  507257 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0116 03:49:28.603521  507257 kubeadm.go:322] [preflight] Running pre-flight checks
	I0116 03:49:28.784258  507257 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0116 03:49:28.784384  507257 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0116 03:49:28.784491  507257 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0116 03:49:29.068390  507257 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0116 03:49:29.017077  507510 out.go:204]   - Configuring RBAC rules ...
	I0116 03:49:29.017276  507510 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0116 03:49:29.044200  507510 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0116 03:49:29.049807  507510 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0116 03:49:29.054441  507510 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0116 03:49:29.057939  507510 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0116 03:49:29.142810  507510 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0116 03:49:29.439580  507510 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0116 03:49:29.441665  507510 kubeadm.go:322] 
	I0116 03:49:29.441736  507510 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0116 03:49:29.441741  507510 kubeadm.go:322] 
	I0116 03:49:29.441863  507510 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0116 03:49:29.441898  507510 kubeadm.go:322] 
	I0116 03:49:29.441932  507510 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0116 03:49:29.441999  507510 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0116 03:49:29.442057  507510 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0116 03:49:29.442099  507510 kubeadm.go:322] 
	I0116 03:49:29.442200  507510 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0116 03:49:29.442306  507510 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0116 03:49:29.442414  507510 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0116 03:49:29.442429  507510 kubeadm.go:322] 
	I0116 03:49:29.442566  507510 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities 
	I0116 03:49:29.442689  507510 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0116 03:49:29.442701  507510 kubeadm.go:322] 
	I0116 03:49:29.442813  507510 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token nr2yh0.22ni19zxk2s7hw9l \
	I0116 03:49:29.442967  507510 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:ab75761e1119a1709a216b682cd7b1dcce0641115df5f236b54b16e4f66aa044 \
	I0116 03:49:29.443008  507510 kubeadm.go:322]     --control-plane 	  
	I0116 03:49:29.443024  507510 kubeadm.go:322] 
	I0116 03:49:29.443147  507510 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0116 03:49:29.443159  507510 kubeadm.go:322] 
	I0116 03:49:29.443285  507510 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token nr2yh0.22ni19zxk2s7hw9l \
	I0116 03:49:29.443414  507510 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:ab75761e1119a1709a216b682cd7b1dcce0641115df5f236b54b16e4f66aa044 
	I0116 03:49:29.444142  507510 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0116 03:49:29.444278  507510 cni.go:84] Creating CNI manager for ""
	I0116 03:49:29.444302  507510 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 03:49:29.446569  507510 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0116 03:49:29.447957  507510 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0116 03:49:29.457418  507510 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0116 03:49:29.478015  507510 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0116 03:49:29.478130  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:29.478135  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=6e8fa5f64d0e7272be43ff25ed3826261f0a2578 minikube.k8s.io/name=old-k8s-version-696770 minikube.k8s.io/updated_at=2024_01_16T03_49_29_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
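
The bridge CNI step above only records that a 457-byte /etc/cni/net.d/1-k8s.conflist was copied to the node; the file contents are not in the log. Such a conflist typically chains the bridge and portmap plugins with host-local IPAM, but to see the actual file one could ssh into the profile's VM (commands below are an illustrative sketch, not part of the test run):

    # Illustrative only: look at the bridge CNI config minikube copied to old-k8s-version-696770
    minikube -p old-k8s-version-696770 ssh "ls -l /etc/cni/net.d"
    minikube -p old-k8s-version-696770 ssh "sudo cat /etc/cni/net.d/1-k8s.conflist"
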
	I0116 03:49:29.070681  507257 out.go:204]   - Generating certificates and keys ...
	I0116 03:49:29.070805  507257 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0116 03:49:29.070882  507257 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0116 03:49:29.071007  507257 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0116 03:49:29.071108  507257 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0116 03:49:29.071243  507257 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0116 03:49:29.071320  507257 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0116 03:49:29.071422  507257 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0116 03:49:29.071497  507257 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0116 03:49:29.071928  507257 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0116 03:49:29.074454  507257 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0116 03:49:29.076202  507257 kubeadm.go:322] [certs] Using the existing "sa" key
	I0116 03:49:29.076435  507257 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0116 03:49:29.360527  507257 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0116 03:49:29.779361  507257 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0116 03:49:29.976749  507257 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0116 03:49:30.075605  507257 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0116 03:49:30.076375  507257 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0116 03:49:30.079235  507257 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0116 03:49:30.081497  507257 out.go:204]   - Booting up control plane ...
	I0116 03:49:30.081645  507257 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0116 03:49:30.082340  507257 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0116 03:49:30.083349  507257 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0116 03:49:30.103660  507257 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0116 03:49:30.104863  507257 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0116 03:49:30.104924  507257 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0116 03:49:30.229980  507257 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
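
At this point both profiles are waiting for the kubelet to boot the control plane from static pod manifests. A small sketch of how one could watch that from outside the harness, assuming the usual kubeadm manifest path and pod labels (tier=control-plane) and the crio runtime's crictl, with the profile's kubeconfig context already in place:

    # Illustrative only: watch the static control-plane pods come up on embed-certs-615980
    minikube -p embed-certs-615980 ssh "sudo ls /etc/kubernetes/manifests"
    minikube -p embed-certs-615980 ssh "sudo crictl ps --name kube-apiserver"
    kubectl --context embed-certs-615980 -n kube-system get pods -l tier=control-plane
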
	I0116 03:49:29.724417  507510 ops.go:34] apiserver oom_adj: -16
	I0116 03:49:29.724549  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:30.224988  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:30.725451  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:31.225287  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:31.724689  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:32.224984  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:32.724769  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:33.225547  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:33.724874  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:34.225301  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:34.725134  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:35.224977  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:35.724998  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:36.225495  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:36.725043  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:37.224700  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:37.725397  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:38.225311  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:38.725308  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:39.224885  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:38.732431  507257 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.502537 seconds
	I0116 03:49:38.732591  507257 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0116 03:49:38.766319  507257 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0116 03:49:39.312926  507257 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0116 03:49:39.313225  507257 kubeadm.go:322] [mark-control-plane] Marking the node embed-certs-615980 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0116 03:49:39.836927  507257 kubeadm.go:322] [bootstrap-token] Using token: 8bzdm1.4lwyoxck7xjn6vqr
	I0116 03:49:39.838931  507257 out.go:204]   - Configuring RBAC rules ...
	I0116 03:49:39.839093  507257 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0116 03:49:39.850909  507257 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0116 03:49:39.873417  507257 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0116 03:49:39.879093  507257 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0116 03:49:39.883914  507257 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0116 03:49:39.889130  507257 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0116 03:49:39.910444  507257 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0116 03:49:40.235572  507257 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0116 03:49:40.334951  507257 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0116 03:49:40.335000  507257 kubeadm.go:322] 
	I0116 03:49:40.335092  507257 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0116 03:49:40.335103  507257 kubeadm.go:322] 
	I0116 03:49:40.335212  507257 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0116 03:49:40.335222  507257 kubeadm.go:322] 
	I0116 03:49:40.335266  507257 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0116 03:49:40.335353  507257 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0116 03:49:40.335421  507257 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0116 03:49:40.335430  507257 kubeadm.go:322] 
	I0116 03:49:40.335504  507257 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0116 03:49:40.335513  507257 kubeadm.go:322] 
	I0116 03:49:40.335598  507257 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0116 03:49:40.335618  507257 kubeadm.go:322] 
	I0116 03:49:40.335690  507257 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0116 03:49:40.335793  507257 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0116 03:49:40.335891  507257 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0116 03:49:40.335904  507257 kubeadm.go:322] 
	I0116 03:49:40.336008  507257 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0116 03:49:40.336128  507257 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0116 03:49:40.336143  507257 kubeadm.go:322] 
	I0116 03:49:40.336262  507257 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 8bzdm1.4lwyoxck7xjn6vqr \
	I0116 03:49:40.336427  507257 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:ab75761e1119a1709a216b682cd7b1dcce0641115df5f236b54b16e4f66aa044 \
	I0116 03:49:40.336456  507257 kubeadm.go:322] 	--control-plane 
	I0116 03:49:40.336463  507257 kubeadm.go:322] 
	I0116 03:49:40.336594  507257 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0116 03:49:40.336611  507257 kubeadm.go:322] 
	I0116 03:49:40.336744  507257 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 8bzdm1.4lwyoxck7xjn6vqr \
	I0116 03:49:40.336876  507257 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:ab75761e1119a1709a216b682cd7b1dcce0641115df5f236b54b16e4f66aa044 
	I0116 03:49:40.337377  507257 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0116 03:49:40.337421  507257 cni.go:84] Creating CNI manager for ""
	I0116 03:49:40.337432  507257 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 03:49:40.340415  507257 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0116 03:49:40.341952  507257 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0116 03:49:40.376620  507257 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0116 03:49:40.459091  507257 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0116 03:49:40.459177  507257 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:40.459233  507257 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=6e8fa5f64d0e7272be43ff25ed3826261f0a2578 minikube.k8s.io/name=embed-certs-615980 minikube.k8s.io/updated_at=2024_01_16T03_49_40_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:40.524693  507257 ops.go:34] apiserver oom_adj: -16
	I0116 03:49:40.917890  507257 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:39.725272  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:40.225380  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:40.725272  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:41.225258  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:41.725525  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:42.225270  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:42.725463  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:43.224674  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:43.724904  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:44.224946  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:44.725197  507510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:44.843354  507510 kubeadm.go:1088] duration metric: took 15.365308355s to wait for elevateKubeSystemPrivileges.
	I0116 03:49:44.843465  507510 kubeadm.go:406] StartCluster complete in 5m48.250275121s
	I0116 03:49:44.843545  507510 settings.go:142] acquiring lock: {Name:mk3ded99c06d285b174e76622bb0741c8dd2d8fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:49:44.843708  507510 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17965-468241/kubeconfig
	I0116 03:49:44.846444  507510 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17965-468241/kubeconfig: {Name:mkac3c63bbcba9b4fa1bea3480797757d703fb9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:49:44.846814  507510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0116 03:49:44.846959  507510 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0116 03:49:44.847043  507510 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-696770"
	I0116 03:49:44.847067  507510 addons.go:234] Setting addon storage-provisioner=true in "old-k8s-version-696770"
	I0116 03:49:44.847065  507510 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-696770"
	W0116 03:49:44.847076  507510 addons.go:243] addon storage-provisioner should already be in state true
	I0116 03:49:44.847079  507510 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-696770"
	I0116 03:49:44.847099  507510 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-696770"
	I0116 03:49:44.847108  507510 addons.go:234] Setting addon metrics-server=true in "old-k8s-version-696770"
	W0116 03:49:44.847130  507510 addons.go:243] addon metrics-server should already be in state true
	I0116 03:49:44.847152  507510 host.go:66] Checking if "old-k8s-version-696770" exists ...
	I0116 03:49:44.847087  507510 config.go:182] Loaded profile config "old-k8s-version-696770": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0116 03:49:44.847178  507510 host.go:66] Checking if "old-k8s-version-696770" exists ...
	I0116 03:49:44.847548  507510 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:49:44.847568  507510 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:49:44.847579  507510 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:49:44.847594  507510 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:49:44.847605  507510 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:49:44.847632  507510 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:49:44.865585  507510 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36045
	I0116 03:49:44.865597  507510 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45289
	I0116 03:49:44.865592  507510 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39579
	I0116 03:49:44.866119  507510 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:49:44.866200  507510 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:49:44.866352  507510 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:49:44.867018  507510 main.go:141] libmachine: Using API Version  1
	I0116 03:49:44.867040  507510 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:49:44.867043  507510 main.go:141] libmachine: Using API Version  1
	I0116 03:49:44.867051  507510 main.go:141] libmachine: Using API Version  1
	I0116 03:49:44.867071  507510 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:49:44.867091  507510 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:49:44.867481  507510 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:49:44.867557  507510 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:49:44.867711  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetState
	I0116 03:49:44.867929  507510 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:49:44.868184  507510 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:49:44.868215  507510 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:49:44.868486  507510 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:49:44.868519  507510 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:49:44.872747  507510 addons.go:234] Setting addon default-storageclass=true in "old-k8s-version-696770"
	W0116 03:49:44.872781  507510 addons.go:243] addon default-storageclass should already be in state true
	I0116 03:49:44.872816  507510 host.go:66] Checking if "old-k8s-version-696770" exists ...
	I0116 03:49:44.873264  507510 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:49:44.873308  507510 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:49:44.888049  507510 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45943
	I0116 03:49:44.890481  507510 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37259
	I0116 03:49:44.890990  507510 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:49:44.891285  507510 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:49:44.891567  507510 main.go:141] libmachine: Using API Version  1
	I0116 03:49:44.891582  507510 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:49:44.891846  507510 main.go:141] libmachine: Using API Version  1
	I0116 03:49:44.891865  507510 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:49:44.892307  507510 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:49:44.892510  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetState
	I0116 03:49:44.892575  507510 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:49:44.892760  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetState
	I0116 03:49:44.894812  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .DriverName
	I0116 03:49:44.895060  507510 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37701
	I0116 03:49:44.896571  507510 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0116 03:49:44.895272  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .DriverName
	I0116 03:49:44.895678  507510 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:49:44.898051  507510 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0116 03:49:44.898074  507510 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0116 03:49:44.899552  507510 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 03:49:44.897299  507510 main.go:141] libmachine: Using API Version  1
	I0116 03:49:44.898096  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHHostname
	I0116 03:49:44.901091  507510 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:49:44.901216  507510 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0116 03:49:44.901234  507510 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0116 03:49:44.901256  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHHostname
	I0116 03:49:44.902226  507510 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:49:44.902866  507510 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:49:44.902908  507510 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:49:44.905915  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:49:44.906022  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:49:44.906456  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:20:1a", ip: ""} in network mk-old-k8s-version-696770: {Iface:virbr3 ExpiryTime:2024-01-16 04:43:38 +0000 UTC Type:0 Mac:52:54:00:37:20:1a Iaid: IPaddr:192.168.61.167 Prefix:24 Hostname:old-k8s-version-696770 Clientid:01:52:54:00:37:20:1a}
	I0116 03:49:44.906482  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined IP address 192.168.61.167 and MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:49:44.906775  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHPort
	I0116 03:49:44.906851  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:20:1a", ip: ""} in network mk-old-k8s-version-696770: {Iface:virbr3 ExpiryTime:2024-01-16 04:43:38 +0000 UTC Type:0 Mac:52:54:00:37:20:1a Iaid: IPaddr:192.168.61.167 Prefix:24 Hostname:old-k8s-version-696770 Clientid:01:52:54:00:37:20:1a}
	I0116 03:49:44.906892  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHPort
	I0116 03:49:44.906941  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined IP address 192.168.61.167 and MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:49:44.907116  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHKeyPath
	I0116 03:49:44.907254  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHKeyPath
	I0116 03:49:44.907324  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHUsername
	I0116 03:49:44.907416  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHUsername
	I0116 03:49:44.907471  507510 sshutil.go:53] new ssh client: &{IP:192.168.61.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/old-k8s-version-696770/id_rsa Username:docker}
	I0116 03:49:44.908078  507510 sshutil.go:53] new ssh client: &{IP:192.168.61.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/old-k8s-version-696770/id_rsa Username:docker}
	I0116 03:49:44.925689  507510 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38987
	I0116 03:49:44.926190  507510 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:49:44.926847  507510 main.go:141] libmachine: Using API Version  1
	I0116 03:49:44.926870  507510 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:49:44.927322  507510 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:49:44.927545  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetState
	I0116 03:49:44.929553  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .DriverName
	I0116 03:49:44.930008  507510 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0116 03:49:44.930027  507510 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0116 03:49:44.930049  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHHostname
	I0116 03:49:44.933353  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:49:44.933768  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:20:1a", ip: ""} in network mk-old-k8s-version-696770: {Iface:virbr3 ExpiryTime:2024-01-16 04:43:38 +0000 UTC Type:0 Mac:52:54:00:37:20:1a Iaid: IPaddr:192.168.61.167 Prefix:24 Hostname:old-k8s-version-696770 Clientid:01:52:54:00:37:20:1a}
	I0116 03:49:44.933799  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | domain old-k8s-version-696770 has defined IP address 192.168.61.167 and MAC address 52:54:00:37:20:1a in network mk-old-k8s-version-696770
	I0116 03:49:44.933975  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHPort
	I0116 03:49:44.934184  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHKeyPath
	I0116 03:49:44.934277  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .GetSSHUsername
	I0116 03:49:44.934374  507510 sshutil.go:53] new ssh client: &{IP:192.168.61.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/old-k8s-version-696770/id_rsa Username:docker}
	I0116 03:49:45.044743  507510 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0116 03:49:45.073179  507510 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0116 03:49:45.073426  507510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0116 03:49:45.095360  507510 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0116 03:49:45.095383  507510 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0116 03:49:45.162632  507510 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0116 03:49:45.162661  507510 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0116 03:49:45.252628  507510 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0116 03:49:45.252665  507510 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0116 03:49:45.325535  507510 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0116 03:49:45.533499  507510 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-696770" context rescaled to 1 replicas
	I0116 03:49:45.533553  507510 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.167 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0116 03:49:45.536655  507510 out.go:177] * Verifying Kubernetes components...
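
The addon phase above applies storage-provisioner, the default storageclass, and the metrics-server manifests, and injects a host.minikube.internal record into the CoreDNS ConfigMap. A quick hand check of those results might look like this; the object names (metrics-server deployment, storage-provisioner pod, standard storageclass) are assumed from the stock addon manifests rather than taken from this log:

    # Illustrative only: verify the addons enabled on old-k8s-version-696770
    kubectl --context old-k8s-version-696770 -n kube-system get deploy metrics-server
    kubectl --context old-k8s-version-696770 -n kube-system get pod storage-provisioner
    kubectl --context old-k8s-version-696770 get storageclass standard
    kubectl --context old-k8s-version-696770 -n kube-system get configmap coredns -o yaml | grep -n "host.minikube.internal"
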
	I0116 03:49:41.418664  507257 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:41.918459  507257 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:42.418296  507257 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:42.918119  507257 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:43.418565  507257 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:43.918746  507257 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:44.418812  507257 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:44.918603  507257 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:45.418865  507257 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:45.918104  507257 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:45.538565  507510 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 03:49:46.390448  507510 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.3456663s)
	I0116 03:49:46.390513  507510 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.31729292s)
	I0116 03:49:46.390536  507510 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.317072847s)
	I0116 03:49:46.390556  507510 main.go:141] libmachine: Making call to close driver server
	I0116 03:49:46.390520  507510 main.go:141] libmachine: Making call to close driver server
	I0116 03:49:46.390573  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .Close
	I0116 03:49:46.390595  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .Close
	I0116 03:49:46.390559  507510 start.go:929] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I0116 03:49:46.391000  507510 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:49:46.391023  507510 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:49:46.391035  507510 main.go:141] libmachine: Making call to close driver server
	I0116 03:49:46.391040  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | Closing plugin on server side
	I0116 03:49:46.391006  507510 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:49:46.391059  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | Closing plugin on server side
	I0116 03:49:46.391062  507510 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:49:46.391044  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .Close
	I0116 03:49:46.391075  507510 main.go:141] libmachine: Making call to close driver server
	I0116 03:49:46.391083  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .Close
	I0116 03:49:46.391314  507510 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:49:46.391332  507510 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:49:46.391594  507510 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:49:46.391625  507510 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:49:46.465666  507510 main.go:141] libmachine: Making call to close driver server
	I0116 03:49:46.465688  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .Close
	I0116 03:49:46.466107  507510 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:49:46.466127  507510 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:49:46.597926  507510 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.05930194s)
	I0116 03:49:46.597988  507510 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-696770" to be "Ready" ...
	I0116 03:49:46.597925  507510 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.272324444s)
	I0116 03:49:46.598099  507510 main.go:141] libmachine: Making call to close driver server
	I0116 03:49:46.598123  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .Close
	I0116 03:49:46.598503  507510 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:49:46.598527  507510 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:49:46.598531  507510 main.go:141] libmachine: (old-k8s-version-696770) DBG | Closing plugin on server side
	I0116 03:49:46.598539  507510 main.go:141] libmachine: Making call to close driver server
	I0116 03:49:46.598549  507510 main.go:141] libmachine: (old-k8s-version-696770) Calling .Close
	I0116 03:49:46.598884  507510 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:49:46.598903  507510 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:49:46.598917  507510 addons.go:470] Verifying addon metrics-server=true in "old-k8s-version-696770"
	I0116 03:49:46.600845  507510 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0116 03:49:46.602484  507510 addons.go:505] enable addons completed in 1.755527621s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0116 03:49:46.612929  507510 node_ready.go:49] node "old-k8s-version-696770" has status "Ready":"True"
	I0116 03:49:46.612962  507510 node_ready.go:38] duration metric: took 14.959317ms waiting for node "old-k8s-version-696770" to be "Ready" ...
	I0116 03:49:46.612975  507510 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 03:49:46.616466  507510 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-h85tj" in "kube-system" namespace to be "Ready" ...
	I0116 03:49:48.628130  507510 pod_ready.go:102] pod "coredns-5644d7b6d9-h85tj" in "kube-system" namespace has status "Ready":"False"
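
The pod_ready wait above polls each system-critical component by label for up to 6m0s. A rough kubectl analogue of that loop, using the same selectors listed in the log, is sketched below; note kubectl wait exits non-zero if a selector matches nothing, so this is an illustration rather than a drop-in replacement for the harness:

    # Illustrative only: wait for the same system-critical pods minikube polls above
    for sel in k8s-app=kube-dns component=etcd component=kube-apiserver \
               component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler; do
      kubectl -n kube-system wait --for=condition=Ready pod -l "$sel" --timeout=360s
    done
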
	I0116 03:49:46.418268  507257 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:46.917976  507257 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:47.418645  507257 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:47.917927  507257 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:48.417920  507257 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:48.917939  507257 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:49.418387  507257 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:49.918203  507257 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:50.417930  507257 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:50.918518  507257 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:51.418036  507257 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:51.917981  507257 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:52.418293  507257 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:49:52.635961  507257 kubeadm.go:1088] duration metric: took 12.176857981s to wait for elevateKubeSystemPrivileges.
	I0116 03:49:52.636014  507257 kubeadm.go:406] StartCluster complete in 5m10.892359223s
	I0116 03:49:52.636054  507257 settings.go:142] acquiring lock: {Name:mk3ded99c06d285b174e76622bb0741c8dd2d8fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:49:52.636186  507257 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17965-468241/kubeconfig
	I0116 03:49:52.638885  507257 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17965-468241/kubeconfig: {Name:mkac3c63bbcba9b4fa1bea3480797757d703fb9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:49:52.639229  507257 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0116 03:49:52.639345  507257 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0116 03:49:52.639439  507257 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-615980"
	I0116 03:49:52.639461  507257 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-615980"
	I0116 03:49:52.639458  507257 addons.go:69] Setting default-storageclass=true in profile "embed-certs-615980"
	W0116 03:49:52.639469  507257 addons.go:243] addon storage-provisioner should already be in state true
	I0116 03:49:52.639482  507257 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-615980"
	I0116 03:49:52.639504  507257 config.go:182] Loaded profile config "embed-certs-615980": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 03:49:52.639541  507257 host.go:66] Checking if "embed-certs-615980" exists ...
	I0116 03:49:52.639562  507257 addons.go:69] Setting metrics-server=true in profile "embed-certs-615980"
	I0116 03:49:52.639579  507257 addons.go:234] Setting addon metrics-server=true in "embed-certs-615980"
	W0116 03:49:52.639591  507257 addons.go:243] addon metrics-server should already be in state true
	I0116 03:49:52.639639  507257 host.go:66] Checking if "embed-certs-615980" exists ...
	I0116 03:49:52.639965  507257 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:49:52.639984  507257 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:49:52.640007  507257 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:49:52.640023  507257 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:49:52.640084  507257 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:49:52.640118  507257 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:49:52.660468  507257 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36595
	I0116 03:49:52.660653  507257 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44367
	I0116 03:49:52.661058  507257 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:49:52.661184  507257 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:49:52.661685  507257 main.go:141] libmachine: Using API Version  1
	I0116 03:49:52.661709  507257 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:49:52.661768  507257 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40717
	I0116 03:49:52.661855  507257 main.go:141] libmachine: Using API Version  1
	I0116 03:49:52.661871  507257 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:49:52.662141  507257 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:49:52.662207  507257 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:49:52.662425  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetState
	I0116 03:49:52.662480  507257 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:49:52.662858  507257 main.go:141] libmachine: Using API Version  1
	I0116 03:49:52.662875  507257 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:49:52.663301  507257 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:49:52.663337  507257 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:49:52.663413  507257 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:49:52.663956  507257 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:49:52.663985  507257 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:49:52.666163  507257 addons.go:234] Setting addon default-storageclass=true in "embed-certs-615980"
	W0116 03:49:52.666190  507257 addons.go:243] addon default-storageclass should already be in state true
	I0116 03:49:52.666224  507257 host.go:66] Checking if "embed-certs-615980" exists ...
	I0116 03:49:52.666630  507257 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:49:52.666672  507257 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:49:52.682228  507257 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37799
	I0116 03:49:52.682743  507257 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:49:52.683402  507257 main.go:141] libmachine: Using API Version  1
	I0116 03:49:52.683425  507257 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:49:52.683719  507257 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36773
	I0116 03:49:52.683893  507257 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:49:52.684125  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetState
	I0116 03:49:52.684589  507257 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:49:52.685108  507257 main.go:141] libmachine: Using API Version  1
	I0116 03:49:52.685128  507257 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:49:52.685607  507257 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:49:52.685627  507257 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42767
	I0116 03:49:52.686073  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetState
	I0116 03:49:52.686329  507257 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:49:52.686781  507257 main.go:141] libmachine: Using API Version  1
	I0116 03:49:52.686804  507257 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:49:52.687167  507257 main.go:141] libmachine: (embed-certs-615980) Calling .DriverName
	I0116 03:49:52.687213  507257 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:49:52.689840  507257 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 03:49:52.687751  507257 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 03:49:52.689319  507257 main.go:141] libmachine: (embed-certs-615980) Calling .DriverName
	I0116 03:49:52.691584  507257 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0116 03:49:52.691595  507257 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0116 03:49:52.691610  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHHostname
	I0116 03:49:52.691655  507257 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 03:49:52.693170  507257 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0116 03:49:52.694465  507257 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0116 03:49:52.694478  507257 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0116 03:49:52.694495  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHHostname
	I0116 03:49:52.705398  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:49:52.705440  507257 main.go:141] libmachine: (embed-certs-615980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:a6:40", ip: ""} in network mk-embed-certs-615980: {Iface:virbr4 ExpiryTime:2024-01-16 04:44:24 +0000 UTC Type:0 Mac:52:54:00:d4:a6:40 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-615980 Clientid:01:52:54:00:d4:a6:40}
	I0116 03:49:52.705440  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:49:52.705469  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHPort
	I0116 03:49:52.705475  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined IP address 192.168.72.159 and MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:49:52.705501  507257 main.go:141] libmachine: (embed-certs-615980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:a6:40", ip: ""} in network mk-embed-certs-615980: {Iface:virbr4 ExpiryTime:2024-01-16 04:44:24 +0000 UTC Type:0 Mac:52:54:00:d4:a6:40 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-615980 Clientid:01:52:54:00:d4:a6:40}
	I0116 03:49:52.705516  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined IP address 192.168.72.159 and MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:49:52.705403  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHPort
	I0116 03:49:52.705751  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHKeyPath
	I0116 03:49:52.705813  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHKeyPath
	I0116 03:49:52.705956  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHUsername
	I0116 03:49:52.706078  507257 sshutil.go:53] new ssh client: &{IP:192.168.72.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/embed-certs-615980/id_rsa Username:docker}
	I0116 03:49:52.706839  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHUsername
	I0116 03:49:52.707045  507257 sshutil.go:53] new ssh client: &{IP:192.168.72.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/embed-certs-615980/id_rsa Username:docker}
	I0116 03:49:52.713247  507257 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33775
	I0116 03:49:52.714047  507257 main.go:141] libmachine: () Calling .GetVersion
	I0116 03:49:52.714725  507257 main.go:141] libmachine: Using API Version  1
	I0116 03:49:52.714742  507257 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 03:49:52.715212  507257 main.go:141] libmachine: () Calling .GetMachineName
	I0116 03:49:52.715442  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetState
	I0116 03:49:52.717568  507257 main.go:141] libmachine: (embed-certs-615980) Calling .DriverName
	I0116 03:49:52.717813  507257 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0116 03:49:52.717824  507257 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0116 03:49:52.717839  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHHostname
	I0116 03:49:52.720720  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:49:52.721189  507257 main.go:141] libmachine: (embed-certs-615980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:a6:40", ip: ""} in network mk-embed-certs-615980: {Iface:virbr4 ExpiryTime:2024-01-16 04:44:24 +0000 UTC Type:0 Mac:52:54:00:d4:a6:40 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-615980 Clientid:01:52:54:00:d4:a6:40}
	I0116 03:49:52.721205  507257 main.go:141] libmachine: (embed-certs-615980) DBG | domain embed-certs-615980 has defined IP address 192.168.72.159 and MAC address 52:54:00:d4:a6:40 in network mk-embed-certs-615980
	I0116 03:49:52.721414  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHPort
	I0116 03:49:52.721573  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHKeyPath
	I0116 03:49:52.721724  507257 main.go:141] libmachine: (embed-certs-615980) Calling .GetSSHUsername
	I0116 03:49:52.721814  507257 sshutil.go:53] new ssh client: &{IP:192.168.72.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/embed-certs-615980/id_rsa Username:docker}
	I0116 03:49:52.899474  507257 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0116 03:49:52.971597  507257 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0116 03:49:52.971623  507257 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0116 03:49:52.971955  507257 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0116 03:49:53.029724  507257 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0116 03:49:53.051410  507257 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0116 03:49:53.051439  507257 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0116 03:49:53.121058  507257 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0116 03:49:53.121088  507257 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0116 03:49:53.179049  507257 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-615980" context rescaled to 1 replicas
	I0116 03:49:53.179098  507257 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.159 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0116 03:49:53.181191  507257 out.go:177] * Verifying Kubernetes components...
	I0116 03:49:50.633148  507510 pod_ready.go:92] pod "coredns-5644d7b6d9-h85tj" in "kube-system" namespace has status "Ready":"True"
	I0116 03:49:50.633179  507510 pod_ready.go:81] duration metric: took 4.016682348s waiting for pod "coredns-5644d7b6d9-h85tj" in "kube-system" namespace to be "Ready" ...
	I0116 03:49:50.633194  507510 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rc8xt" in "kube-system" namespace to be "Ready" ...
	I0116 03:49:50.648707  507510 pod_ready.go:92] pod "kube-proxy-rc8xt" in "kube-system" namespace has status "Ready":"True"
	I0116 03:49:50.648737  507510 pod_ready.go:81] duration metric: took 15.535257ms waiting for pod "kube-proxy-rc8xt" in "kube-system" namespace to be "Ready" ...
	I0116 03:49:50.648752  507510 pod_ready.go:38] duration metric: took 4.035762868s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 03:49:50.648770  507510 api_server.go:52] waiting for apiserver process to appear ...
	I0116 03:49:50.648842  507510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:49:50.665917  507510 api_server.go:72] duration metric: took 5.1323051s to wait for apiserver process to appear ...
	I0116 03:49:50.665954  507510 api_server.go:88] waiting for apiserver healthz status ...
	I0116 03:49:50.665982  507510 api_server.go:253] Checking apiserver healthz at https://192.168.61.167:8443/healthz ...
	I0116 03:49:50.672790  507510 api_server.go:279] https://192.168.61.167:8443/healthz returned 200:
	ok
	I0116 03:49:50.674024  507510 api_server.go:141] control plane version: v1.16.0
	I0116 03:49:50.674059  507510 api_server.go:131] duration metric: took 8.096153ms to wait for apiserver health ...
	I0116 03:49:50.674071  507510 system_pods.go:43] waiting for kube-system pods to appear ...
	I0116 03:49:50.677835  507510 system_pods.go:59] 4 kube-system pods found
	I0116 03:49:50.677871  507510 system_pods.go:61] "coredns-5644d7b6d9-h85tj" [6ad3270c-e6c5-4ced-800f-f6e7960097ac] Running
	I0116 03:49:50.677878  507510 system_pods.go:61] "kube-proxy-rc8xt" [433f07f2-79e8-48f6-945a-af3dc0060920] Running
	I0116 03:49:50.677887  507510 system_pods.go:61] "metrics-server-74d5856cc6-stvzf" [92ed6941-1071-4757-9279-144187442f64] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:49:50.677894  507510 system_pods.go:61] "storage-provisioner" [c45ac226-3063-4d53-8a3a-dccca6e8cade] Running
	I0116 03:49:50.677905  507510 system_pods.go:74] duration metric: took 3.826308ms to wait for pod list to return data ...
	I0116 03:49:50.677914  507510 default_sa.go:34] waiting for default service account to be created ...
	I0116 03:49:50.680932  507510 default_sa.go:45] found service account: "default"
	I0116 03:49:50.680964  507510 default_sa.go:55] duration metric: took 3.041693ms for default service account to be created ...
	I0116 03:49:50.680975  507510 system_pods.go:116] waiting for k8s-apps to be running ...
	I0116 03:49:50.684730  507510 system_pods.go:86] 4 kube-system pods found
	I0116 03:49:50.684759  507510 system_pods.go:89] "coredns-5644d7b6d9-h85tj" [6ad3270c-e6c5-4ced-800f-f6e7960097ac] Running
	I0116 03:49:50.684767  507510 system_pods.go:89] "kube-proxy-rc8xt" [433f07f2-79e8-48f6-945a-af3dc0060920] Running
	I0116 03:49:50.684778  507510 system_pods.go:89] "metrics-server-74d5856cc6-stvzf" [92ed6941-1071-4757-9279-144187442f64] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:49:50.684785  507510 system_pods.go:89] "storage-provisioner" [c45ac226-3063-4d53-8a3a-dccca6e8cade] Running
	I0116 03:49:50.684811  507510 retry.go:31] will retry after 238.551043ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0116 03:49:50.928725  507510 system_pods.go:86] 4 kube-system pods found
	I0116 03:49:50.928761  507510 system_pods.go:89] "coredns-5644d7b6d9-h85tj" [6ad3270c-e6c5-4ced-800f-f6e7960097ac] Running
	I0116 03:49:50.928768  507510 system_pods.go:89] "kube-proxy-rc8xt" [433f07f2-79e8-48f6-945a-af3dc0060920] Running
	I0116 03:49:50.928779  507510 system_pods.go:89] "metrics-server-74d5856cc6-stvzf" [92ed6941-1071-4757-9279-144187442f64] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:49:50.928786  507510 system_pods.go:89] "storage-provisioner" [c45ac226-3063-4d53-8a3a-dccca6e8cade] Running
	I0116 03:49:50.928816  507510 retry.go:31] will retry after 246.771125ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0116 03:49:51.180688  507510 system_pods.go:86] 4 kube-system pods found
	I0116 03:49:51.180727  507510 system_pods.go:89] "coredns-5644d7b6d9-h85tj" [6ad3270c-e6c5-4ced-800f-f6e7960097ac] Running
	I0116 03:49:51.180736  507510 system_pods.go:89] "kube-proxy-rc8xt" [433f07f2-79e8-48f6-945a-af3dc0060920] Running
	I0116 03:49:51.180747  507510 system_pods.go:89] "metrics-server-74d5856cc6-stvzf" [92ed6941-1071-4757-9279-144187442f64] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:49:51.180755  507510 system_pods.go:89] "storage-provisioner" [c45ac226-3063-4d53-8a3a-dccca6e8cade] Running
	I0116 03:49:51.180780  507510 retry.go:31] will retry after 439.966453ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0116 03:49:51.625927  507510 system_pods.go:86] 4 kube-system pods found
	I0116 03:49:51.625958  507510 system_pods.go:89] "coredns-5644d7b6d9-h85tj" [6ad3270c-e6c5-4ced-800f-f6e7960097ac] Running
	I0116 03:49:51.625964  507510 system_pods.go:89] "kube-proxy-rc8xt" [433f07f2-79e8-48f6-945a-af3dc0060920] Running
	I0116 03:49:51.625970  507510 system_pods.go:89] "metrics-server-74d5856cc6-stvzf" [92ed6941-1071-4757-9279-144187442f64] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:49:51.625975  507510 system_pods.go:89] "storage-provisioner" [c45ac226-3063-4d53-8a3a-dccca6e8cade] Running
	I0116 03:49:51.626001  507510 retry.go:31] will retry after 403.213781ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0116 03:49:52.035928  507510 system_pods.go:86] 4 kube-system pods found
	I0116 03:49:52.035994  507510 system_pods.go:89] "coredns-5644d7b6d9-h85tj" [6ad3270c-e6c5-4ced-800f-f6e7960097ac] Running
	I0116 03:49:52.036003  507510 system_pods.go:89] "kube-proxy-rc8xt" [433f07f2-79e8-48f6-945a-af3dc0060920] Running
	I0116 03:49:52.036014  507510 system_pods.go:89] "metrics-server-74d5856cc6-stvzf" [92ed6941-1071-4757-9279-144187442f64] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:49:52.036022  507510 system_pods.go:89] "storage-provisioner" [c45ac226-3063-4d53-8a3a-dccca6e8cade] Running
	I0116 03:49:52.036064  507510 retry.go:31] will retry after 501.701933ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0116 03:49:52.543834  507510 system_pods.go:86] 4 kube-system pods found
	I0116 03:49:52.543874  507510 system_pods.go:89] "coredns-5644d7b6d9-h85tj" [6ad3270c-e6c5-4ced-800f-f6e7960097ac] Running
	I0116 03:49:52.543883  507510 system_pods.go:89] "kube-proxy-rc8xt" [433f07f2-79e8-48f6-945a-af3dc0060920] Running
	I0116 03:49:52.543894  507510 system_pods.go:89] "metrics-server-74d5856cc6-stvzf" [92ed6941-1071-4757-9279-144187442f64] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:49:52.543904  507510 system_pods.go:89] "storage-provisioner" [c45ac226-3063-4d53-8a3a-dccca6e8cade] Running
	I0116 03:49:52.543929  507510 retry.go:31] will retry after 898.357774ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0116 03:49:53.447323  507510 system_pods.go:86] 4 kube-system pods found
	I0116 03:49:53.447356  507510 system_pods.go:89] "coredns-5644d7b6d9-h85tj" [6ad3270c-e6c5-4ced-800f-f6e7960097ac] Running
	I0116 03:49:53.447364  507510 system_pods.go:89] "kube-proxy-rc8xt" [433f07f2-79e8-48f6-945a-af3dc0060920] Running
	I0116 03:49:53.447373  507510 system_pods.go:89] "metrics-server-74d5856cc6-stvzf" [92ed6941-1071-4757-9279-144187442f64] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:49:53.447382  507510 system_pods.go:89] "storage-provisioner" [c45ac226-3063-4d53-8a3a-dccca6e8cade] Running
	I0116 03:49:53.447405  507510 retry.go:31] will retry after 928.816907ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0116 03:49:54.382017  507510 system_pods.go:86] 4 kube-system pods found
	I0116 03:49:54.382046  507510 system_pods.go:89] "coredns-5644d7b6d9-h85tj" [6ad3270c-e6c5-4ced-800f-f6e7960097ac] Running
	I0116 03:49:54.382052  507510 system_pods.go:89] "kube-proxy-rc8xt" [433f07f2-79e8-48f6-945a-af3dc0060920] Running
	I0116 03:49:54.382058  507510 system_pods.go:89] "metrics-server-74d5856cc6-stvzf" [92ed6941-1071-4757-9279-144187442f64] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:49:54.382065  507510 system_pods.go:89] "storage-provisioner" [c45ac226-3063-4d53-8a3a-dccca6e8cade] Running
	I0116 03:49:54.382085  507510 retry.go:31] will retry after 935.220919ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0116 03:49:53.183129  507257 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 03:49:53.296441  507257 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0116 03:49:55.162183  507257 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.262649875s)
	I0116 03:49:55.162237  507257 start.go:929] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I0116 03:49:55.516930  507257 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.544937669s)
	I0116 03:49:55.516988  507257 main.go:141] libmachine: Making call to close driver server
	I0116 03:49:55.517002  507257 main.go:141] libmachine: (embed-certs-615980) Calling .Close
	I0116 03:49:55.517046  507257 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.487276988s)
	I0116 03:49:55.517101  507257 main.go:141] libmachine: Making call to close driver server
	I0116 03:49:55.517108  507257 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.333941337s)
	I0116 03:49:55.517114  507257 main.go:141] libmachine: (embed-certs-615980) Calling .Close
	I0116 03:49:55.517135  507257 node_ready.go:35] waiting up to 6m0s for node "embed-certs-615980" to be "Ready" ...
	I0116 03:49:55.517496  507257 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:49:55.517496  507257 main.go:141] libmachine: (embed-certs-615980) DBG | Closing plugin on server side
	I0116 03:49:55.517512  507257 main.go:141] libmachine: (embed-certs-615980) DBG | Closing plugin on server side
	I0116 03:49:55.517520  507257 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:49:55.517535  507257 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:49:55.517546  507257 main.go:141] libmachine: Making call to close driver server
	I0116 03:49:55.517548  507257 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:49:55.517559  507257 main.go:141] libmachine: (embed-certs-615980) Calling .Close
	I0116 03:49:55.517566  507257 main.go:141] libmachine: Making call to close driver server
	I0116 03:49:55.517577  507257 main.go:141] libmachine: (embed-certs-615980) Calling .Close
	I0116 03:49:55.517902  507257 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:49:55.517916  507257 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:49:55.517920  507257 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:49:55.517926  507257 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:49:55.537242  507257 node_ready.go:49] node "embed-certs-615980" has status "Ready":"True"
	I0116 03:49:55.537273  507257 node_ready.go:38] duration metric: took 20.119969ms waiting for node "embed-certs-615980" to be "Ready" ...
	I0116 03:49:55.537282  507257 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 03:49:55.567823  507257 main.go:141] libmachine: Making call to close driver server
	I0116 03:49:55.567859  507257 main.go:141] libmachine: (embed-certs-615980) Calling .Close
	I0116 03:49:55.568264  507257 main.go:141] libmachine: (embed-certs-615980) DBG | Closing plugin on server side
	I0116 03:49:55.568301  507257 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:49:55.568324  507257 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:49:55.571667  507257 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-hxsvz" in "kube-system" namespace to be "Ready" ...
	I0116 03:49:55.962821  507257 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.666330022s)
	I0116 03:49:55.962896  507257 main.go:141] libmachine: Making call to close driver server
	I0116 03:49:55.962915  507257 main.go:141] libmachine: (embed-certs-615980) Calling .Close
	I0116 03:49:55.963282  507257 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:49:55.963304  507257 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:49:55.963317  507257 main.go:141] libmachine: Making call to close driver server
	I0116 03:49:55.963328  507257 main.go:141] libmachine: (embed-certs-615980) Calling .Close
	I0116 03:49:55.964155  507257 main.go:141] libmachine: (embed-certs-615980) DBG | Closing plugin on server side
	I0116 03:49:55.964178  507257 main.go:141] libmachine: Successfully made call to close driver server
	I0116 03:49:55.964190  507257 main.go:141] libmachine: Making call to close connection to plugin binary
	I0116 03:49:55.964209  507257 addons.go:470] Verifying addon metrics-server=true in "embed-certs-615980"
	I0116 03:49:55.967489  507257 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0116 03:49:55.969099  507257 addons.go:505] enable addons completed in 3.329750862s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0116 03:49:57.085999  507257 pod_ready.go:92] pod "coredns-5dd5756b68-hxsvz" in "kube-system" namespace has status "Ready":"True"
	I0116 03:49:57.086034  507257 pod_ready.go:81] duration metric: took 1.514340062s waiting for pod "coredns-5dd5756b68-hxsvz" in "kube-system" namespace to be "Ready" ...
	I0116 03:49:57.086048  507257 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-twbhh" in "kube-system" namespace to be "Ready" ...
	I0116 03:49:57.110886  507257 pod_ready.go:92] pod "coredns-5dd5756b68-twbhh" in "kube-system" namespace has status "Ready":"True"
	I0116 03:49:57.110920  507257 pod_ready.go:81] duration metric: took 24.862165ms waiting for pod "coredns-5dd5756b68-twbhh" in "kube-system" namespace to be "Ready" ...
	I0116 03:49:57.110934  507257 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-615980" in "kube-system" namespace to be "Ready" ...
	I0116 03:49:57.122556  507257 pod_ready.go:92] pod "etcd-embed-certs-615980" in "kube-system" namespace has status "Ready":"True"
	I0116 03:49:57.122588  507257 pod_ready.go:81] duration metric: took 11.643561ms waiting for pod "etcd-embed-certs-615980" in "kube-system" namespace to be "Ready" ...
	I0116 03:49:57.122601  507257 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-615980" in "kube-system" namespace to be "Ready" ...
	I0116 03:49:57.134402  507257 pod_ready.go:92] pod "kube-apiserver-embed-certs-615980" in "kube-system" namespace has status "Ready":"True"
	I0116 03:49:57.134432  507257 pod_ready.go:81] duration metric: took 11.823016ms waiting for pod "kube-apiserver-embed-certs-615980" in "kube-system" namespace to be "Ready" ...
	I0116 03:49:57.134442  507257 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-615980" in "kube-system" namespace to be "Ready" ...
	I0116 03:49:57.152947  507257 pod_ready.go:92] pod "kube-controller-manager-embed-certs-615980" in "kube-system" namespace has status "Ready":"True"
	I0116 03:49:57.152984  507257 pod_ready.go:81] duration metric: took 18.533642ms waiting for pod "kube-controller-manager-embed-certs-615980" in "kube-system" namespace to be "Ready" ...
	I0116 03:49:57.153000  507257 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8rkb5" in "kube-system" namespace to be "Ready" ...
	I0116 03:49:57.921983  507257 pod_ready.go:92] pod "kube-proxy-8rkb5" in "kube-system" namespace has status "Ready":"True"
	I0116 03:49:57.922016  507257 pod_ready.go:81] duration metric: took 769.007434ms waiting for pod "kube-proxy-8rkb5" in "kube-system" namespace to be "Ready" ...
	I0116 03:49:57.922028  507257 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-615980" in "kube-system" namespace to be "Ready" ...
	I0116 03:49:58.322237  507257 pod_ready.go:92] pod "kube-scheduler-embed-certs-615980" in "kube-system" namespace has status "Ready":"True"
	I0116 03:49:58.322267  507257 pod_ready.go:81] duration metric: took 400.23243ms waiting for pod "kube-scheduler-embed-certs-615980" in "kube-system" namespace to be "Ready" ...
	I0116 03:49:58.322280  507257 pod_ready.go:38] duration metric: took 2.78498776s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 03:49:58.322295  507257 api_server.go:52] waiting for apiserver process to appear ...
	I0116 03:49:58.322357  507257 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:49:58.338527  507257 api_server.go:72] duration metric: took 5.159388866s to wait for apiserver process to appear ...
	I0116 03:49:58.338553  507257 api_server.go:88] waiting for apiserver healthz status ...
	I0116 03:49:58.338575  507257 api_server.go:253] Checking apiserver healthz at https://192.168.72.159:8443/healthz ...
	I0116 03:49:58.345758  507257 api_server.go:279] https://192.168.72.159:8443/healthz returned 200:
	ok
	I0116 03:49:58.347531  507257 api_server.go:141] control plane version: v1.28.4
	I0116 03:49:58.347559  507257 api_server.go:131] duration metric: took 8.999388ms to wait for apiserver health ...
	I0116 03:49:58.347573  507257 system_pods.go:43] waiting for kube-system pods to appear ...
	I0116 03:49:58.527633  507257 system_pods.go:59] 9 kube-system pods found
	I0116 03:49:58.527676  507257 system_pods.go:61] "coredns-5dd5756b68-hxsvz" [de7da02c-649b-4d29-8a89-5642105b6049] Running
	I0116 03:49:58.527685  507257 system_pods.go:61] "coredns-5dd5756b68-twbhh" [9be49c16-f213-47da-83f4-90fc392eb49e] Running
	I0116 03:49:58.527692  507257 system_pods.go:61] "etcd-embed-certs-615980" [2098148f-0cac-48ce-a607-381b13334438] Running
	I0116 03:49:58.527704  507257 system_pods.go:61] "kube-apiserver-embed-certs-615980" [3d49b47b-da34-4f4d-a8d3-758c0d28c034] Running
	I0116 03:49:58.527711  507257 system_pods.go:61] "kube-controller-manager-embed-certs-615980" [c4f7946d-907d-42ad-8e84-8fa337111688] Running
	I0116 03:49:58.527718  507257 system_pods.go:61] "kube-proxy-8rkb5" [322fae38-3b29-4135-ba3f-c0ff8bda1e4a] Running
	I0116 03:49:58.527725  507257 system_pods.go:61] "kube-scheduler-embed-certs-615980" [882f322f-8686-40a4-a613-e9855ccfb56e] Running
	I0116 03:49:58.527736  507257 system_pods.go:61] "metrics-server-57f55c9bc5-fc7tx" [14a38c13-7a9e-4548-9654-c568ede29e0f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:49:58.527748  507257 system_pods.go:61] "storage-provisioner" [1ce752ad-ce91-462e-ab2b-2af64064eb40] Running
	I0116 03:49:58.527757  507257 system_pods.go:74] duration metric: took 180.177482ms to wait for pod list to return data ...
	I0116 03:49:58.527771  507257 default_sa.go:34] waiting for default service account to be created ...
	I0116 03:49:58.721717  507257 default_sa.go:45] found service account: "default"
	I0116 03:49:58.721749  507257 default_sa.go:55] duration metric: took 193.967755ms for default service account to be created ...
	I0116 03:49:58.721758  507257 system_pods.go:116] waiting for k8s-apps to be running ...
	I0116 03:49:58.925915  507257 system_pods.go:86] 9 kube-system pods found
	I0116 03:49:58.925957  507257 system_pods.go:89] "coredns-5dd5756b68-hxsvz" [de7da02c-649b-4d29-8a89-5642105b6049] Running
	I0116 03:49:58.925964  507257 system_pods.go:89] "coredns-5dd5756b68-twbhh" [9be49c16-f213-47da-83f4-90fc392eb49e] Running
	I0116 03:49:58.925970  507257 system_pods.go:89] "etcd-embed-certs-615980" [2098148f-0cac-48ce-a607-381b13334438] Running
	I0116 03:49:58.925977  507257 system_pods.go:89] "kube-apiserver-embed-certs-615980" [3d49b47b-da34-4f4d-a8d3-758c0d28c034] Running
	I0116 03:49:58.925987  507257 system_pods.go:89] "kube-controller-manager-embed-certs-615980" [c4f7946d-907d-42ad-8e84-8fa337111688] Running
	I0116 03:49:58.925994  507257 system_pods.go:89] "kube-proxy-8rkb5" [322fae38-3b29-4135-ba3f-c0ff8bda1e4a] Running
	I0116 03:49:58.926040  507257 system_pods.go:89] "kube-scheduler-embed-certs-615980" [882f322f-8686-40a4-a613-e9855ccfb56e] Running
	I0116 03:49:58.926063  507257 system_pods.go:89] "metrics-server-57f55c9bc5-fc7tx" [14a38c13-7a9e-4548-9654-c568ede29e0f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:49:58.926070  507257 system_pods.go:89] "storage-provisioner" [1ce752ad-ce91-462e-ab2b-2af64064eb40] Running
	I0116 03:49:58.926087  507257 system_pods.go:126] duration metric: took 204.321811ms to wait for k8s-apps to be running ...
	I0116 03:49:58.926099  507257 system_svc.go:44] waiting for kubelet service to be running ....
	I0116 03:49:58.926159  507257 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 03:49:58.940982  507257 system_svc.go:56] duration metric: took 14.86844ms WaitForService to wait for kubelet.
	I0116 03:49:58.941019  507257 kubeadm.go:581] duration metric: took 5.761889406s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0116 03:49:58.941051  507257 node_conditions.go:102] verifying NodePressure condition ...
	I0116 03:49:59.121649  507257 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 03:49:59.121681  507257 node_conditions.go:123] node cpu capacity is 2
	I0116 03:49:59.121694  507257 node_conditions.go:105] duration metric: took 180.636851ms to run NodePressure ...
	I0116 03:49:59.121707  507257 start.go:228] waiting for startup goroutines ...
	I0116 03:49:59.121717  507257 start.go:233] waiting for cluster config update ...
	I0116 03:49:59.121730  507257 start.go:242] writing updated cluster config ...
	I0116 03:49:59.122058  507257 ssh_runner.go:195] Run: rm -f paused
	I0116 03:49:59.177472  507257 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I0116 03:49:59.179801  507257 out.go:177] * Done! kubectl is now configured to use "embed-certs-615980" cluster and "default" namespace by default
	I0116 03:49:55.324439  507510 system_pods.go:86] 4 kube-system pods found
	I0116 03:49:55.324471  507510 system_pods.go:89] "coredns-5644d7b6d9-h85tj" [6ad3270c-e6c5-4ced-800f-f6e7960097ac] Running
	I0116 03:49:55.324477  507510 system_pods.go:89] "kube-proxy-rc8xt" [433f07f2-79e8-48f6-945a-af3dc0060920] Running
	I0116 03:49:55.324484  507510 system_pods.go:89] "metrics-server-74d5856cc6-stvzf" [92ed6941-1071-4757-9279-144187442f64] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:49:55.324489  507510 system_pods.go:89] "storage-provisioner" [c45ac226-3063-4d53-8a3a-dccca6e8cade] Running
	I0116 03:49:55.324509  507510 retry.go:31] will retry after 1.168298317s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0116 03:49:56.500050  507510 system_pods.go:86] 4 kube-system pods found
	I0116 03:49:56.500090  507510 system_pods.go:89] "coredns-5644d7b6d9-h85tj" [6ad3270c-e6c5-4ced-800f-f6e7960097ac] Running
	I0116 03:49:56.500098  507510 system_pods.go:89] "kube-proxy-rc8xt" [433f07f2-79e8-48f6-945a-af3dc0060920] Running
	I0116 03:49:56.500111  507510 system_pods.go:89] "metrics-server-74d5856cc6-stvzf" [92ed6941-1071-4757-9279-144187442f64] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:49:56.500118  507510 system_pods.go:89] "storage-provisioner" [c45ac226-3063-4d53-8a3a-dccca6e8cade] Running
	I0116 03:49:56.500142  507510 retry.go:31] will retry after 1.453657977s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0116 03:49:57.961220  507510 system_pods.go:86] 4 kube-system pods found
	I0116 03:49:57.961248  507510 system_pods.go:89] "coredns-5644d7b6d9-h85tj" [6ad3270c-e6c5-4ced-800f-f6e7960097ac] Running
	I0116 03:49:57.961254  507510 system_pods.go:89] "kube-proxy-rc8xt" [433f07f2-79e8-48f6-945a-af3dc0060920] Running
	I0116 03:49:57.961261  507510 system_pods.go:89] "metrics-server-74d5856cc6-stvzf" [92ed6941-1071-4757-9279-144187442f64] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:49:57.961266  507510 system_pods.go:89] "storage-provisioner" [c45ac226-3063-4d53-8a3a-dccca6e8cade] Running
	I0116 03:49:57.961286  507510 retry.go:31] will retry after 1.763969687s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0116 03:49:59.731086  507510 system_pods.go:86] 4 kube-system pods found
	I0116 03:49:59.731112  507510 system_pods.go:89] "coredns-5644d7b6d9-h85tj" [6ad3270c-e6c5-4ced-800f-f6e7960097ac] Running
	I0116 03:49:59.731117  507510 system_pods.go:89] "kube-proxy-rc8xt" [433f07f2-79e8-48f6-945a-af3dc0060920] Running
	I0116 03:49:59.731123  507510 system_pods.go:89] "metrics-server-74d5856cc6-stvzf" [92ed6941-1071-4757-9279-144187442f64] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:49:59.731129  507510 system_pods.go:89] "storage-provisioner" [c45ac226-3063-4d53-8a3a-dccca6e8cade] Running
	I0116 03:49:59.731147  507510 retry.go:31] will retry after 3.185395035s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0116 03:50:02.922897  507510 system_pods.go:86] 4 kube-system pods found
	I0116 03:50:02.922934  507510 system_pods.go:89] "coredns-5644d7b6d9-h85tj" [6ad3270c-e6c5-4ced-800f-f6e7960097ac] Running
	I0116 03:50:02.922944  507510 system_pods.go:89] "kube-proxy-rc8xt" [433f07f2-79e8-48f6-945a-af3dc0060920] Running
	I0116 03:50:02.922954  507510 system_pods.go:89] "metrics-server-74d5856cc6-stvzf" [92ed6941-1071-4757-9279-144187442f64] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:50:02.922961  507510 system_pods.go:89] "storage-provisioner" [c45ac226-3063-4d53-8a3a-dccca6e8cade] Running
	I0116 03:50:02.922985  507510 retry.go:31] will retry after 4.049428323s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0116 03:50:06.978002  507510 system_pods.go:86] 4 kube-system pods found
	I0116 03:50:06.978029  507510 system_pods.go:89] "coredns-5644d7b6d9-h85tj" [6ad3270c-e6c5-4ced-800f-f6e7960097ac] Running
	I0116 03:50:06.978034  507510 system_pods.go:89] "kube-proxy-rc8xt" [433f07f2-79e8-48f6-945a-af3dc0060920] Running
	I0116 03:50:06.978040  507510 system_pods.go:89] "metrics-server-74d5856cc6-stvzf" [92ed6941-1071-4757-9279-144187442f64] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:50:06.978045  507510 system_pods.go:89] "storage-provisioner" [c45ac226-3063-4d53-8a3a-dccca6e8cade] Running
	I0116 03:50:06.978063  507510 retry.go:31] will retry after 4.626513574s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0116 03:50:11.610464  507510 system_pods.go:86] 4 kube-system pods found
	I0116 03:50:11.610499  507510 system_pods.go:89] "coredns-5644d7b6d9-h85tj" [6ad3270c-e6c5-4ced-800f-f6e7960097ac] Running
	I0116 03:50:11.610507  507510 system_pods.go:89] "kube-proxy-rc8xt" [433f07f2-79e8-48f6-945a-af3dc0060920] Running
	I0116 03:50:11.610517  507510 system_pods.go:89] "metrics-server-74d5856cc6-stvzf" [92ed6941-1071-4757-9279-144187442f64] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:50:11.610524  507510 system_pods.go:89] "storage-provisioner" [c45ac226-3063-4d53-8a3a-dccca6e8cade] Running
	I0116 03:50:11.610550  507510 retry.go:31] will retry after 4.683195792s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0116 03:50:16.298843  507510 system_pods.go:86] 4 kube-system pods found
	I0116 03:50:16.298873  507510 system_pods.go:89] "coredns-5644d7b6d9-h85tj" [6ad3270c-e6c5-4ced-800f-f6e7960097ac] Running
	I0116 03:50:16.298879  507510 system_pods.go:89] "kube-proxy-rc8xt" [433f07f2-79e8-48f6-945a-af3dc0060920] Running
	I0116 03:50:16.298888  507510 system_pods.go:89] "metrics-server-74d5856cc6-stvzf" [92ed6941-1071-4757-9279-144187442f64] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:50:16.298892  507510 system_pods.go:89] "storage-provisioner" [c45ac226-3063-4d53-8a3a-dccca6e8cade] Running
	I0116 03:50:16.298913  507510 retry.go:31] will retry after 8.214175219s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0116 03:50:24.520982  507510 system_pods.go:86] 5 kube-system pods found
	I0116 03:50:24.521020  507510 system_pods.go:89] "coredns-5644d7b6d9-h85tj" [6ad3270c-e6c5-4ced-800f-f6e7960097ac] Running
	I0116 03:50:24.521029  507510 system_pods.go:89] "etcd-old-k8s-version-696770" [255a0b2a-df41-4621-8918-36e1b0c25c24] Pending
	I0116 03:50:24.521033  507510 system_pods.go:89] "kube-proxy-rc8xt" [433f07f2-79e8-48f6-945a-af3dc0060920] Running
	I0116 03:50:24.521040  507510 system_pods.go:89] "metrics-server-74d5856cc6-stvzf" [92ed6941-1071-4757-9279-144187442f64] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:50:24.521045  507510 system_pods.go:89] "storage-provisioner" [c45ac226-3063-4d53-8a3a-dccca6e8cade] Running
	I0116 03:50:24.521067  507510 retry.go:31] will retry after 9.626598035s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0116 03:50:34.155753  507510 system_pods.go:86] 5 kube-system pods found
	I0116 03:50:34.155790  507510 system_pods.go:89] "coredns-5644d7b6d9-h85tj" [6ad3270c-e6c5-4ced-800f-f6e7960097ac] Running
	I0116 03:50:34.155798  507510 system_pods.go:89] "etcd-old-k8s-version-696770" [255a0b2a-df41-4621-8918-36e1b0c25c24] Running
	I0116 03:50:34.155805  507510 system_pods.go:89] "kube-proxy-rc8xt" [433f07f2-79e8-48f6-945a-af3dc0060920] Running
	I0116 03:50:34.155815  507510 system_pods.go:89] "metrics-server-74d5856cc6-stvzf" [92ed6941-1071-4757-9279-144187442f64] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:50:34.155822  507510 system_pods.go:89] "storage-provisioner" [c45ac226-3063-4d53-8a3a-dccca6e8cade] Running
	I0116 03:50:34.155849  507510 retry.go:31] will retry after 13.760629262s: missing components: kube-apiserver, kube-controller-manager, kube-scheduler
	I0116 03:50:47.923537  507510 system_pods.go:86] 7 kube-system pods found
	I0116 03:50:47.923571  507510 system_pods.go:89] "coredns-5644d7b6d9-h85tj" [6ad3270c-e6c5-4ced-800f-f6e7960097ac] Running
	I0116 03:50:47.923577  507510 system_pods.go:89] "etcd-old-k8s-version-696770" [255a0b2a-df41-4621-8918-36e1b0c25c24] Running
	I0116 03:50:47.923582  507510 system_pods.go:89] "kube-apiserver-old-k8s-version-696770" [c682b257-d00b-4b4c-8089-cda1b9da538c] Running
	I0116 03:50:47.923585  507510 system_pods.go:89] "kube-proxy-rc8xt" [433f07f2-79e8-48f6-945a-af3dc0060920] Running
	I0116 03:50:47.923589  507510 system_pods.go:89] "kube-scheduler-old-k8s-version-696770" [af271425-aec7-45d9-97c5-9a033f13a41e] Running
	I0116 03:50:47.923599  507510 system_pods.go:89] "metrics-server-74d5856cc6-stvzf" [92ed6941-1071-4757-9279-144187442f64] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:50:47.923603  507510 system_pods.go:89] "storage-provisioner" [c45ac226-3063-4d53-8a3a-dccca6e8cade] Running
	I0116 03:50:47.923621  507510 retry.go:31] will retry after 15.810378345s: missing components: kube-controller-manager
	I0116 03:51:03.742786  507510 system_pods.go:86] 8 kube-system pods found
	I0116 03:51:03.742819  507510 system_pods.go:89] "coredns-5644d7b6d9-h85tj" [6ad3270c-e6c5-4ced-800f-f6e7960097ac] Running
	I0116 03:51:03.742825  507510 system_pods.go:89] "etcd-old-k8s-version-696770" [255a0b2a-df41-4621-8918-36e1b0c25c24] Running
	I0116 03:51:03.742830  507510 system_pods.go:89] "kube-apiserver-old-k8s-version-696770" [c682b257-d00b-4b4c-8089-cda1b9da538c] Running
	I0116 03:51:03.742835  507510 system_pods.go:89] "kube-controller-manager-old-k8s-version-696770" [87b5ef82-182e-458d-b521-05a36d3d031b] Running
	I0116 03:51:03.742838  507510 system_pods.go:89] "kube-proxy-rc8xt" [433f07f2-79e8-48f6-945a-af3dc0060920] Running
	I0116 03:51:03.742842  507510 system_pods.go:89] "kube-scheduler-old-k8s-version-696770" [af271425-aec7-45d9-97c5-9a033f13a41e] Running
	I0116 03:51:03.742849  507510 system_pods.go:89] "metrics-server-74d5856cc6-stvzf" [92ed6941-1071-4757-9279-144187442f64] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0116 03:51:03.742854  507510 system_pods.go:89] "storage-provisioner" [c45ac226-3063-4d53-8a3a-dccca6e8cade] Running
	I0116 03:51:03.742865  507510 system_pods.go:126] duration metric: took 1m13.061883389s to wait for k8s-apps to be running ...
	I0116 03:51:03.742872  507510 system_svc.go:44] waiting for kubelet service to be running ....
	I0116 03:51:03.742921  507510 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 03:51:03.761399  507510 system_svc.go:56] duration metric: took 18.514586ms WaitForService to wait for kubelet.
	I0116 03:51:03.761433  507510 kubeadm.go:581] duration metric: took 1m18.22783177s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0116 03:51:03.761461  507510 node_conditions.go:102] verifying NodePressure condition ...
	I0116 03:51:03.765716  507510 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 03:51:03.765760  507510 node_conditions.go:123] node cpu capacity is 2
	I0116 03:51:03.765777  507510 node_conditions.go:105] duration metric: took 4.309124ms to run NodePressure ...
	I0116 03:51:03.765794  507510 start.go:228] waiting for startup goroutines ...
	I0116 03:51:03.765803  507510 start.go:233] waiting for cluster config update ...
	I0116 03:51:03.765817  507510 start.go:242] writing updated cluster config ...
	I0116 03:51:03.766160  507510 ssh_runner.go:195] Run: rm -f paused
	I0116 03:51:03.822502  507510 start.go:600] kubectl: 1.29.0, cluster: 1.16.0 (minor skew: 13)
	I0116 03:51:03.824687  507510 out.go:177] 
	W0116 03:51:03.826162  507510 out.go:239] ! /usr/local/bin/kubectl is version 1.29.0, which may have incompatibilities with Kubernetes 1.16.0.
	I0116 03:51:03.827659  507510 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I0116 03:51:03.829229  507510 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-696770" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	-- Journal begins at Tue 2024-01-16 03:43:37 UTC, ends at Tue 2024-01-16 04:03:31 UTC. --
	Jan 16 04:03:30 old-k8s-version-696770 crio[715]: time="2024-01-16 04:03:30.922488341Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705377810922473805,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:114472,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=ac6bfce8-a35a-46a1-92f6-1c02128354b2 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 04:03:30 old-k8s-version-696770 crio[715]: time="2024-01-16 04:03:30.923172829Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=480abede-3d9a-4078-a4a2-8ccc086fc814 name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 04:03:30 old-k8s-version-696770 crio[715]: time="2024-01-16 04:03:30.923233611Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=480abede-3d9a-4078-a4a2-8ccc086fc814 name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 04:03:30 old-k8s-version-696770 crio[715]: time="2024-01-16 04:03:30.923440402Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fada0ec84e00786bf019ab1f553ec3d864423c48bb87affed94838adc2503641,PodSandboxId:89368a33b413a031776b03bf7add26e9c79142662e1221fa4cc76f1718d344bb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705376987692443699,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c45ac226-3063-4d53-8a3a-dccca6e8cade,},Annotations:map[string]string{io.kubernetes.container.hash: 823c00b3,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e31b7408ac6d61505c93bd0ee056db05b8065f72033976a79869da0eda891df,PodSandboxId:5d09494a90a0cb05911113aafe4d91d159618e87cab28dee6a6162ef7216a797,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1705376987522398389,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rc8xt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 433f07f2-79e8-48f6-945a-af3dc0060920,},Annotations:map[string]string{io.kubernetes.container.hash: d3f5792e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e647824330362812e08d6745872f94a8bb6a235bdfa053f314b6143f71c061ca,PodSandboxId:89eb0f0d7f5a24ccf7d98b5002c6de23763f95a4495d4f746ecf5e6d6dd831f0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1705376986940017978,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-h85tj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ad3270c-e6c5-4ced-800f-f6e7960097ac,},Annotations:map[string]string{io.kubernetes.container.hash: 538949d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contain
erPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19e1d64306cf954490dc1f2a424f07458029dbafd0c148d61121eb78ffe07f81,PodSandboxId:2fc6ce30e4206ca86317c370088e1505cc72ef648a405aef78dbc31f33d36330,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1705376959826024337,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-696770,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a30650f5c98b0779ac54af241e6784fa,},Annotations:map[st
ring]string{io.kubernetes.container.hash: c56983aa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2348243d3645e74feecf7dbce8cad9836e6b19b215dbd48b3af5ad146519ed8,PodSandboxId:b0ecc1dcc677088cae112b1b2a9d9c4eeb2497163231241d47f262b7492156d1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1705376958167239765,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-696770,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437b
cb4e,},Annotations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ea8576da0bd6430244ffb39b2b3cc282498618fb7d0433d709260bc314ec01e,PodSandboxId:d16982b673cfc28ee712c4e347e21556f5742db17a2c54e531a04b40f063f404,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1705376957992492062,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-696770,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Anno
tations:map[string]string{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d5060e371a626a1d05b1d8a95d487bb85d58e876a3aa66f970430bd665c02b4,PodSandboxId:4e001b6729d354c208b50660b6cff7f6c0731e487a28bf38f458b0d3dcb3e4cb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1705376957324344362,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-696770,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4ce14449bc68e7b0e24764365ae6c5c,},Annotations:map
[string]string{io.kubernetes.container.hash: 476b3afe,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89bde80dffc0fb01a1f5dce6520043ba8f919cd5e304eb1efa323075fa8bf331,PodSandboxId:4e001b6729d354c208b50660b6cff7f6c0731e487a28bf38f458b0d3dcb3e4cb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_EXITED,CreatedAt:1705376649920960106,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-696770,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4ce14449bc68e7b0e24764365ae6c5c,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 476b3afe,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=480abede-3d9a-4078-a4a2-8ccc086fc814 name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 04:03:30 old-k8s-version-696770 crio[715]: time="2024-01-16 04:03:30.967901285Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=ab9ca477-2914-4dbf-8eb5-04dc3f4b11df name=/runtime.v1.RuntimeService/Version
	Jan 16 04:03:30 old-k8s-version-696770 crio[715]: time="2024-01-16 04:03:30.967964466Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=ab9ca477-2914-4dbf-8eb5-04dc3f4b11df name=/runtime.v1.RuntimeService/Version
	Jan 16 04:03:30 old-k8s-version-696770 crio[715]: time="2024-01-16 04:03:30.969354991Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=ce4d7552-8623-4ec9-81ae-1cc7d6119103 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 04:03:30 old-k8s-version-696770 crio[715]: time="2024-01-16 04:03:30.969936274Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705377810969916997,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:114472,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=ce4d7552-8623-4ec9-81ae-1cc7d6119103 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 04:03:30 old-k8s-version-696770 crio[715]: time="2024-01-16 04:03:30.970616802Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=254de928-7259-453f-8807-dcd9dce0f3fc name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 04:03:30 old-k8s-version-696770 crio[715]: time="2024-01-16 04:03:30.970703052Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=254de928-7259-453f-8807-dcd9dce0f3fc name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 04:03:30 old-k8s-version-696770 crio[715]: time="2024-01-16 04:03:30.971011744Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fada0ec84e00786bf019ab1f553ec3d864423c48bb87affed94838adc2503641,PodSandboxId:89368a33b413a031776b03bf7add26e9c79142662e1221fa4cc76f1718d344bb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705376987692443699,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c45ac226-3063-4d53-8a3a-dccca6e8cade,},Annotations:map[string]string{io.kubernetes.container.hash: 823c00b3,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e31b7408ac6d61505c93bd0ee056db05b8065f72033976a79869da0eda891df,PodSandboxId:5d09494a90a0cb05911113aafe4d91d159618e87cab28dee6a6162ef7216a797,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1705376987522398389,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rc8xt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 433f07f2-79e8-48f6-945a-af3dc0060920,},Annotations:map[string]string{io.kubernetes.container.hash: d3f5792e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e647824330362812e08d6745872f94a8bb6a235bdfa053f314b6143f71c061ca,PodSandboxId:89eb0f0d7f5a24ccf7d98b5002c6de23763f95a4495d4f746ecf5e6d6dd831f0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1705376986940017978,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-h85tj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ad3270c-e6c5-4ced-800f-f6e7960097ac,},Annotations:map[string]string{io.kubernetes.container.hash: 538949d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contain
erPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19e1d64306cf954490dc1f2a424f07458029dbafd0c148d61121eb78ffe07f81,PodSandboxId:2fc6ce30e4206ca86317c370088e1505cc72ef648a405aef78dbc31f33d36330,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1705376959826024337,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-696770,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a30650f5c98b0779ac54af241e6784fa,},Annotations:map[st
ring]string{io.kubernetes.container.hash: c56983aa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2348243d3645e74feecf7dbce8cad9836e6b19b215dbd48b3af5ad146519ed8,PodSandboxId:b0ecc1dcc677088cae112b1b2a9d9c4eeb2497163231241d47f262b7492156d1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1705376958167239765,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-696770,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437b
cb4e,},Annotations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ea8576da0bd6430244ffb39b2b3cc282498618fb7d0433d709260bc314ec01e,PodSandboxId:d16982b673cfc28ee712c4e347e21556f5742db17a2c54e531a04b40f063f404,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1705376957992492062,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-696770,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Anno
tations:map[string]string{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d5060e371a626a1d05b1d8a95d487bb85d58e876a3aa66f970430bd665c02b4,PodSandboxId:4e001b6729d354c208b50660b6cff7f6c0731e487a28bf38f458b0d3dcb3e4cb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1705376957324344362,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-696770,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4ce14449bc68e7b0e24764365ae6c5c,},Annotations:map
[string]string{io.kubernetes.container.hash: 476b3afe,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89bde80dffc0fb01a1f5dce6520043ba8f919cd5e304eb1efa323075fa8bf331,PodSandboxId:4e001b6729d354c208b50660b6cff7f6c0731e487a28bf38f458b0d3dcb3e4cb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_EXITED,CreatedAt:1705376649920960106,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-696770,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4ce14449bc68e7b0e24764365ae6c5c,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 476b3afe,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=254de928-7259-453f-8807-dcd9dce0f3fc name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 04:03:31 old-k8s-version-696770 crio[715]: time="2024-01-16 04:03:31.020361478Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=350067ae-a747-4603-8b0e-c773f3bb7e57 name=/runtime.v1.RuntimeService/Version
	Jan 16 04:03:31 old-k8s-version-696770 crio[715]: time="2024-01-16 04:03:31.020473191Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=350067ae-a747-4603-8b0e-c773f3bb7e57 name=/runtime.v1.RuntimeService/Version
	Jan 16 04:03:31 old-k8s-version-696770 crio[715]: time="2024-01-16 04:03:31.022184486Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=3c5bf836-5631-4b40-92a8-8abc01538c50 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 04:03:31 old-k8s-version-696770 crio[715]: time="2024-01-16 04:03:31.022600814Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705377811022587758,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:114472,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=3c5bf836-5631-4b40-92a8-8abc01538c50 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 04:03:31 old-k8s-version-696770 crio[715]: time="2024-01-16 04:03:31.023525048Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=ad98f9ec-55d6-41c4-b366-646c9c8dabaf name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 04:03:31 old-k8s-version-696770 crio[715]: time="2024-01-16 04:03:31.023602259Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=ad98f9ec-55d6-41c4-b366-646c9c8dabaf name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 04:03:31 old-k8s-version-696770 crio[715]: time="2024-01-16 04:03:31.023772281Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fada0ec84e00786bf019ab1f553ec3d864423c48bb87affed94838adc2503641,PodSandboxId:89368a33b413a031776b03bf7add26e9c79142662e1221fa4cc76f1718d344bb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705376987692443699,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c45ac226-3063-4d53-8a3a-dccca6e8cade,},Annotations:map[string]string{io.kubernetes.container.hash: 823c00b3,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e31b7408ac6d61505c93bd0ee056db05b8065f72033976a79869da0eda891df,PodSandboxId:5d09494a90a0cb05911113aafe4d91d159618e87cab28dee6a6162ef7216a797,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1705376987522398389,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rc8xt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 433f07f2-79e8-48f6-945a-af3dc0060920,},Annotations:map[string]string{io.kubernetes.container.hash: d3f5792e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e647824330362812e08d6745872f94a8bb6a235bdfa053f314b6143f71c061ca,PodSandboxId:89eb0f0d7f5a24ccf7d98b5002c6de23763f95a4495d4f746ecf5e6d6dd831f0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1705376986940017978,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-h85tj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ad3270c-e6c5-4ced-800f-f6e7960097ac,},Annotations:map[string]string{io.kubernetes.container.hash: 538949d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contain
erPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19e1d64306cf954490dc1f2a424f07458029dbafd0c148d61121eb78ffe07f81,PodSandboxId:2fc6ce30e4206ca86317c370088e1505cc72ef648a405aef78dbc31f33d36330,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1705376959826024337,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-696770,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a30650f5c98b0779ac54af241e6784fa,},Annotations:map[st
ring]string{io.kubernetes.container.hash: c56983aa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2348243d3645e74feecf7dbce8cad9836e6b19b215dbd48b3af5ad146519ed8,PodSandboxId:b0ecc1dcc677088cae112b1b2a9d9c4eeb2497163231241d47f262b7492156d1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1705376958167239765,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-696770,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437b
cb4e,},Annotations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ea8576da0bd6430244ffb39b2b3cc282498618fb7d0433d709260bc314ec01e,PodSandboxId:d16982b673cfc28ee712c4e347e21556f5742db17a2c54e531a04b40f063f404,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1705376957992492062,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-696770,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Anno
tations:map[string]string{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d5060e371a626a1d05b1d8a95d487bb85d58e876a3aa66f970430bd665c02b4,PodSandboxId:4e001b6729d354c208b50660b6cff7f6c0731e487a28bf38f458b0d3dcb3e4cb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1705376957324344362,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-696770,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4ce14449bc68e7b0e24764365ae6c5c,},Annotations:map
[string]string{io.kubernetes.container.hash: 476b3afe,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89bde80dffc0fb01a1f5dce6520043ba8f919cd5e304eb1efa323075fa8bf331,PodSandboxId:4e001b6729d354c208b50660b6cff7f6c0731e487a28bf38f458b0d3dcb3e4cb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_EXITED,CreatedAt:1705376649920960106,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-696770,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4ce14449bc68e7b0e24764365ae6c5c,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 476b3afe,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=ad98f9ec-55d6-41c4-b366-646c9c8dabaf name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 04:03:31 old-k8s-version-696770 crio[715]: time="2024-01-16 04:03:31.061221310Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=1337f2fb-9250-4630-ba87-0b3ce2719fcb name=/runtime.v1.RuntimeService/Version
	Jan 16 04:03:31 old-k8s-version-696770 crio[715]: time="2024-01-16 04:03:31.061358864Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=1337f2fb-9250-4630-ba87-0b3ce2719fcb name=/runtime.v1.RuntimeService/Version
	Jan 16 04:03:31 old-k8s-version-696770 crio[715]: time="2024-01-16 04:03:31.062712722Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=eefeb6eb-c2cf-45d2-827f-181db7766a18 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 04:03:31 old-k8s-version-696770 crio[715]: time="2024-01-16 04:03:31.063246809Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1705377811063224844,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:114472,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=eefeb6eb-c2cf-45d2-827f-181db7766a18 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 16 04:03:31 old-k8s-version-696770 crio[715]: time="2024-01-16 04:03:31.064327051Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=7c056fe0-404a-4470-be98-3a68c27aaad4 name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 04:03:31 old-k8s-version-696770 crio[715]: time="2024-01-16 04:03:31.064406297Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=7c056fe0-404a-4470-be98-3a68c27aaad4 name=/runtime.v1.RuntimeService/ListContainers
	Jan 16 04:03:31 old-k8s-version-696770 crio[715]: time="2024-01-16 04:03:31.064650852Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fada0ec84e00786bf019ab1f553ec3d864423c48bb87affed94838adc2503641,PodSandboxId:89368a33b413a031776b03bf7add26e9c79142662e1221fa4cc76f1718d344bb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1705376987692443699,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c45ac226-3063-4d53-8a3a-dccca6e8cade,},Annotations:map[string]string{io.kubernetes.container.hash: 823c00b3,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e31b7408ac6d61505c93bd0ee056db05b8065f72033976a79869da0eda891df,PodSandboxId:5d09494a90a0cb05911113aafe4d91d159618e87cab28dee6a6162ef7216a797,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1705376987522398389,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rc8xt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 433f07f2-79e8-48f6-945a-af3dc0060920,},Annotations:map[string]string{io.kubernetes.container.hash: d3f5792e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e647824330362812e08d6745872f94a8bb6a235bdfa053f314b6143f71c061ca,PodSandboxId:89eb0f0d7f5a24ccf7d98b5002c6de23763f95a4495d4f746ecf5e6d6dd831f0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1705376986940017978,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-h85tj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ad3270c-e6c5-4ced-800f-f6e7960097ac,},Annotations:map[string]string{io.kubernetes.container.hash: 538949d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contain
erPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19e1d64306cf954490dc1f2a424f07458029dbafd0c148d61121eb78ffe07f81,PodSandboxId:2fc6ce30e4206ca86317c370088e1505cc72ef648a405aef78dbc31f33d36330,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1705376959826024337,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-696770,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a30650f5c98b0779ac54af241e6784fa,},Annotations:map[st
ring]string{io.kubernetes.container.hash: c56983aa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2348243d3645e74feecf7dbce8cad9836e6b19b215dbd48b3af5ad146519ed8,PodSandboxId:b0ecc1dcc677088cae112b1b2a9d9c4eeb2497163231241d47f262b7492156d1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1705376958167239765,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-696770,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437b
cb4e,},Annotations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ea8576da0bd6430244ffb39b2b3cc282498618fb7d0433d709260bc314ec01e,PodSandboxId:d16982b673cfc28ee712c4e347e21556f5742db17a2c54e531a04b40f063f404,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1705376957992492062,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-696770,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Anno
tations:map[string]string{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d5060e371a626a1d05b1d8a95d487bb85d58e876a3aa66f970430bd665c02b4,PodSandboxId:4e001b6729d354c208b50660b6cff7f6c0731e487a28bf38f458b0d3dcb3e4cb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1705376957324344362,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-696770,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4ce14449bc68e7b0e24764365ae6c5c,},Annotations:map
[string]string{io.kubernetes.container.hash: 476b3afe,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89bde80dffc0fb01a1f5dce6520043ba8f919cd5e304eb1efa323075fa8bf331,PodSandboxId:4e001b6729d354c208b50660b6cff7f6c0731e487a28bf38f458b0d3dcb3e4cb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_EXITED,CreatedAt:1705376649920960106,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-696770,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4ce14449bc68e7b0e24764365ae6c5c,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 476b3afe,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=7c056fe0-404a-4470-be98-3a68c27aaad4 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	fada0ec84e007       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   13 minutes ago      Running             storage-provisioner       0                   89368a33b413a       storage-provisioner
	8e31b7408ac6d       c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384   13 minutes ago      Running             kube-proxy                0                   5d09494a90a0c       kube-proxy-rc8xt
	e647824330362       bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b   13 minutes ago      Running             coredns                   0                   89eb0f0d7f5a2       coredns-5644d7b6d9-h85tj
	19e1d64306cf9       b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed   14 minutes ago      Running             etcd                      0                   2fc6ce30e4206       etcd-old-k8s-version-696770
	d2348243d3645       06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d   14 minutes ago      Running             kube-controller-manager   0                   b0ecc1dcc6770       kube-controller-manager-old-k8s-version-696770
	6ea8576da0bd6       301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a   14 minutes ago      Running             kube-scheduler            0                   d16982b673cfc       kube-scheduler-old-k8s-version-696770
	1d5060e371a62       b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e   14 minutes ago      Running             kube-apiserver            1                   4e001b6729d35       kube-apiserver-old-k8s-version-696770
	89bde80dffc0f       b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e   19 minutes ago      Exited              kube-apiserver            0                   4e001b6729d35       kube-apiserver-old-k8s-version-696770
	
	
	==> coredns [e647824330362812e08d6745872f94a8bb6a235bdfa053f314b6143f71c061ca] <==
	.:53
	2024-01-16T03:49:47.339Z [INFO] plugin/reload: Running configuration MD5 = 7bc8613a521eb1a1737fc3e7c0fea3ca
	2024-01-16T03:49:47.339Z [INFO] CoreDNS-1.6.2
	2024-01-16T03:49:47.339Z [INFO] linux/amd64, go1.12.8, 795a3eb
	CoreDNS-1.6.2
	linux/amd64, go1.12.8, 795a3eb
	2024-01-16T03:49:48.349Z [INFO] 127.0.0.1:43045 - 5720 "HINFO IN 1115856692617163381.8279879992632663013. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009874865s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-696770
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-696770
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6e8fa5f64d0e7272be43ff25ed3826261f0a2578
	                    minikube.k8s.io/name=old-k8s-version-696770
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_16T03_49_29_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Jan 2024 03:49:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Jan 2024 04:03:25 +0000   Tue, 16 Jan 2024 03:49:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Jan 2024 04:03:25 +0000   Tue, 16 Jan 2024 03:49:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Jan 2024 04:03:25 +0000   Tue, 16 Jan 2024 03:49:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Jan 2024 04:03:25 +0000   Tue, 16 Jan 2024 03:49:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.167
	  Hostname:    old-k8s-version-696770
	Capacity:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	Allocatable:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	System Info:
	 Machine ID:                 6aadd6cb8e644a759c807837a966bad8
	 System UUID:                6aadd6cb-8e64-4a75-9c80-7837a966bad8
	 Boot ID:                    179152c5-5431-4fb3-9296-b52c3ea84c5e
	 Kernel Version:             5.10.57
	 OS Image:                   Buildroot 2021.02.12
	 Operating System:           linux
	 Architecture:               amd64
	 Container Runtime Version:  cri-o://1.24.1
	 Kubelet Version:            v1.16.0
	 Kube-Proxy Version:         v1.16.0
	PodCIDR:                     10.244.0.0/24
	PodCIDRs:                    10.244.0.0/24
	Non-terminated Pods:         (8 in total)
	  Namespace                  Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                  ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                coredns-5644d7b6d9-h85tj                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                etcd-old-k8s-version-696770                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                kube-apiserver-old-k8s-version-696770             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                kube-controller-manager-old-k8s-version-696770    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                kube-proxy-rc8xt                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                kube-scheduler-old-k8s-version-696770             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                metrics-server-74d5856cc6-stvzf                   100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         13m
	  kube-system                storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                750m (37%)   0 (0%)
	  memory             270Mi (12%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From                                Message
	  ----    ------                   ----               ----                                -------
	  Normal  NodeHasSufficientMemory  14m (x8 over 14m)  kubelet, old-k8s-version-696770     Node old-k8s-version-696770 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m (x8 over 14m)  kubelet, old-k8s-version-696770     Node old-k8s-version-696770 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m (x7 over 14m)  kubelet, old-k8s-version-696770     Node old-k8s-version-696770 status is now: NodeHasSufficientPID
	  Normal  Starting                 13m                kube-proxy, old-k8s-version-696770  Starting kube-proxy.
	
	
	==> dmesg <==
	[Jan16 03:43] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.069288] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.557314] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.521156] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.158814] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.623626] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.209369] systemd-fstab-generator[640]: Ignoring "noauto" for root device
	[  +0.107918] systemd-fstab-generator[651]: Ignoring "noauto" for root device
	[  +0.155143] systemd-fstab-generator[664]: Ignoring "noauto" for root device
	[  +0.120971] systemd-fstab-generator[675]: Ignoring "noauto" for root device
	[  +0.224849] systemd-fstab-generator[699]: Ignoring "noauto" for root device
	[Jan16 03:44] systemd-fstab-generator[1017]: Ignoring "noauto" for root device
	[  +0.480691] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[ +28.245235] kauditd_printk_skb: 13 callbacks suppressed
	[  +6.064811] kauditd_printk_skb: 2 callbacks suppressed
	[Jan16 03:49] systemd-fstab-generator[3093]: Ignoring "noauto" for root device
	[  +0.769070] kauditd_printk_skb: 6 callbacks suppressed
	[Jan16 03:50] kauditd_printk_skb: 4 callbacks suppressed
	
	
	==> etcd [19e1d64306cf954490dc1f2a424f07458029dbafd0c148d61121eb78ffe07f81] <==
	2024-01-16 03:49:20.019205 I | raft: newRaft ec72a22dc6b2db62 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	2024-01-16 03:49:20.019221 I | raft: ec72a22dc6b2db62 became follower at term 1
	2024-01-16 03:49:20.028570 W | auth: simple token is not cryptographically signed
	2024-01-16 03:49:20.035668 I | etcdserver: starting server... [version: 3.3.15, cluster version: to_be_decided]
	2024-01-16 03:49:20.036964 I | etcdserver: ec72a22dc6b2db62 as single-node; fast-forwarding 9 ticks (election ticks 10)
	2024-01-16 03:49:20.037511 I | etcdserver/membership: added member ec72a22dc6b2db62 [https://192.168.61.167:2380] to cluster c318a198f49b85fe
	2024-01-16 03:49:20.039619 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, ca = , trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2024-01-16 03:49:20.040253 I | embed: listening for metrics on http://192.168.61.167:2381
	2024-01-16 03:49:20.040581 I | embed: listening for metrics on http://127.0.0.1:2381
	2024-01-16 03:49:20.519980 I | raft: ec72a22dc6b2db62 is starting a new election at term 1
	2024-01-16 03:49:20.520033 I | raft: ec72a22dc6b2db62 became candidate at term 2
	2024-01-16 03:49:20.520046 I | raft: ec72a22dc6b2db62 received MsgVoteResp from ec72a22dc6b2db62 at term 2
	2024-01-16 03:49:20.520056 I | raft: ec72a22dc6b2db62 became leader at term 2
	2024-01-16 03:49:20.520060 I | raft: raft.node: ec72a22dc6b2db62 elected leader ec72a22dc6b2db62 at term 2
	2024-01-16 03:49:20.520509 I | etcdserver: setting up the initial cluster version to 3.3
	2024-01-16 03:49:20.522257 N | etcdserver/membership: set the initial cluster version to 3.3
	2024-01-16 03:49:20.522372 I | etcdserver/api: enabled capabilities for version 3.3
	2024-01-16 03:49:20.522448 I | etcdserver: published {Name:old-k8s-version-696770 ClientURLs:[https://192.168.61.167:2379]} to cluster c318a198f49b85fe
	2024-01-16 03:49:20.522546 I | embed: ready to serve client requests
	2024-01-16 03:49:20.522703 I | embed: ready to serve client requests
	2024-01-16 03:49:20.524551 I | embed: serving client requests on 192.168.61.167:2379
	2024-01-16 03:49:20.527728 I | embed: serving client requests on 127.0.0.1:2379
	2024-01-16 03:49:45.990414 W | etcdserver: request "header:<ID:15808352743548917341 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/kube-proxy-rc8xt.17aab7552447a0c4\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/kube-proxy-rc8xt.17aab7552447a0c4\" value_size:428 lease:6584980706694141528 >> failure:<>>" with result "size:16" took too long (401.165768ms) to execute
	2024-01-16 03:59:21.180440 I | mvcc: store.index: compact 645
	2024-01-16 03:59:21.182573 I | mvcc: finished scheduled compaction at 645 (took 1.634604ms)
	
	
	==> kernel <==
	 04:03:31 up 20 min,  0 users,  load average: 0.03, 0.15, 0.18
	Linux old-k8s-version-696770 5.10.57 #1 SMP Thu Dec 28 22:04:21 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [1d5060e371a626a1d05b1d8a95d487bb85d58e876a3aa66f970430bd665c02b4] <==
	I0116 03:55:25.633750       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0116 03:55:25.633949       1 handler_proxy.go:99] no RequestInfo found in the context
	E0116 03:55:25.633990       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0116 03:55:25.633998       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0116 03:57:25.634526       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0116 03:57:25.635107       1 handler_proxy.go:99] no RequestInfo found in the context
	E0116 03:57:25.635300       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0116 03:57:25.635349       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0116 03:59:25.636378       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0116 03:59:25.636769       1 handler_proxy.go:99] no RequestInfo found in the context
	E0116 03:59:25.637071       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0116 03:59:25.637105       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0116 04:00:25.637512       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0116 04:00:25.637656       1 handler_proxy.go:99] no RequestInfo found in the context
	E0116 04:00:25.637924       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0116 04:00:25.637968       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0116 04:02:25.638657       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0116 04:02:25.638764       1 handler_proxy.go:99] no RequestInfo found in the context
	E0116 04:02:25.638913       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0116 04:02:25.638937       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-apiserver [89bde80dffc0fb01a1f5dce6520043ba8f919cd5e304eb1efa323075fa8bf331] <==
	W0116 03:49:14.029642       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0116 03:49:14.029661       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0116 03:49:14.029679       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0116 03:49:14.029695       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0116 03:49:14.029898       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0116 03:49:14.029980       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0116 03:49:14.030002       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0116 03:49:14.030018       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0116 03:49:14.030040       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0116 03:49:14.030060       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0116 03:49:14.030094       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0116 03:49:14.030627       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0116 03:49:14.030659       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0116 03:49:14.030686       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0116 03:49:14.030714       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0116 03:49:14.030740       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0116 03:49:14.030766       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0116 03:49:14.030873       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0116 03:49:14.030892       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0116 03:49:14.030914       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0116 03:49:14.030913       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0116 03:49:15.307770       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0116 03:49:15.325136       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0116 03:49:15.328633       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0116 03:49:15.330117       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	
	
	==> kube-controller-manager [d2348243d3645e74feecf7dbce8cad9836e6b19b215dbd48b3af5ad146519ed8] <==
	W0116 03:57:13.143645       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0116 03:57:18.882699       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0116 03:57:45.145647       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0116 03:57:49.135284       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0116 03:58:17.148135       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0116 03:58:19.387534       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0116 03:58:49.150693       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0116 03:58:49.639605       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	E0116 03:59:19.891668       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0116 03:59:21.153160       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0116 03:59:50.143506       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0116 03:59:53.155482       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0116 04:00:20.395683       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0116 04:00:25.157885       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0116 04:00:50.648376       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0116 04:00:57.160305       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0116 04:01:20.900626       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0116 04:01:29.162659       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0116 04:01:51.153609       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0116 04:02:01.165105       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0116 04:02:21.405926       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0116 04:02:33.167408       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0116 04:02:51.658089       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0116 04:03:05.169911       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0116 04:03:21.911088       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	
	
	==> kube-proxy [8e31b7408ac6d61505c93bd0ee056db05b8065f72033976a79869da0eda891df] <==
	W0116 03:49:48.047713       1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
	I0116 03:49:48.071964       1 node.go:135] Successfully retrieved node IP: 192.168.61.167
	I0116 03:49:48.072113       1 server_others.go:149] Using iptables Proxier.
	I0116 03:49:48.074442       1 server.go:529] Version: v1.16.0
	I0116 03:49:48.076547       1 config.go:313] Starting service config controller
	I0116 03:49:48.076624       1 shared_informer.go:197] Waiting for caches to sync for service config
	I0116 03:49:48.078089       1 config.go:131] Starting endpoints config controller
	I0116 03:49:48.078146       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I0116 03:49:48.178325       1 shared_informer.go:204] Caches are synced for service config 
	I0116 03:49:48.183518       1 shared_informer.go:204] Caches are synced for endpoints config 
	
	
	==> kube-scheduler [6ea8576da0bd6430244ffb39b2b3cc282498618fb7d0433d709260bc314ec01e] <==
	W0116 03:49:24.645969       1 authentication.go:79] Authentication is disabled
	I0116 03:49:24.645980       1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
	I0116 03:49:24.646348       1 secure_serving.go:123] Serving securely on 127.0.0.1:10259
	E0116 03:49:24.678950       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0116 03:49:24.768757       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0116 03:49:24.782388       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0116 03:49:24.782966       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0116 03:49:24.784345       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0116 03:49:24.787511       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0116 03:49:24.787750       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0116 03:49:24.793275       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0116 03:49:24.793424       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0116 03:49:24.793491       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0116 03:49:24.795267       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0116 03:49:25.681908       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0116 03:49:25.771303       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0116 03:49:25.785399       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0116 03:49:25.799176       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0116 03:49:25.799429       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0116 03:49:25.800003       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0116 03:49:25.800389       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0116 03:49:25.800740       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0116 03:49:25.804682       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0116 03:49:25.806926       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0116 03:49:25.808214       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	
	
	==> kubelet <==
	-- Journal begins at Tue 2024-01-16 03:43:37 UTC, ends at Tue 2024-01-16 04:03:31 UTC. --
	Jan 16 03:59:10 old-k8s-version-696770 kubelet[3099]: E0116 03:59:10.683736    3099 pod_workers.go:191] Error syncing pod 92ed6941-1071-4757-9279-144187442f64 ("metrics-server-74d5856cc6-stvzf_kube-system(92ed6941-1071-4757-9279-144187442f64)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 16 03:59:16 old-k8s-version-696770 kubelet[3099]: E0116 03:59:16.813146    3099 container_manager_linux.go:510] failed to find cgroups of kubelet - cpu and memory cgroup hierarchy not unified.  cpu: /, memory: /system.slice/kubelet.service
	Jan 16 03:59:25 old-k8s-version-696770 kubelet[3099]: E0116 03:59:25.683682    3099 pod_workers.go:191] Error syncing pod 92ed6941-1071-4757-9279-144187442f64 ("metrics-server-74d5856cc6-stvzf_kube-system(92ed6941-1071-4757-9279-144187442f64)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 16 03:59:38 old-k8s-version-696770 kubelet[3099]: E0116 03:59:38.683565    3099 pod_workers.go:191] Error syncing pod 92ed6941-1071-4757-9279-144187442f64 ("metrics-server-74d5856cc6-stvzf_kube-system(92ed6941-1071-4757-9279-144187442f64)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 16 03:59:51 old-k8s-version-696770 kubelet[3099]: E0116 03:59:51.683635    3099 pod_workers.go:191] Error syncing pod 92ed6941-1071-4757-9279-144187442f64 ("metrics-server-74d5856cc6-stvzf_kube-system(92ed6941-1071-4757-9279-144187442f64)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 16 04:00:02 old-k8s-version-696770 kubelet[3099]: E0116 04:00:02.684335    3099 pod_workers.go:191] Error syncing pod 92ed6941-1071-4757-9279-144187442f64 ("metrics-server-74d5856cc6-stvzf_kube-system(92ed6941-1071-4757-9279-144187442f64)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 16 04:00:16 old-k8s-version-696770 kubelet[3099]: E0116 04:00:16.696140    3099 pod_workers.go:191] Error syncing pod 92ed6941-1071-4757-9279-144187442f64 ("metrics-server-74d5856cc6-stvzf_kube-system(92ed6941-1071-4757-9279-144187442f64)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 16 04:00:27 old-k8s-version-696770 kubelet[3099]: E0116 04:00:27.683546    3099 pod_workers.go:191] Error syncing pod 92ed6941-1071-4757-9279-144187442f64 ("metrics-server-74d5856cc6-stvzf_kube-system(92ed6941-1071-4757-9279-144187442f64)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 16 04:00:39 old-k8s-version-696770 kubelet[3099]: E0116 04:00:39.685385    3099 pod_workers.go:191] Error syncing pod 92ed6941-1071-4757-9279-144187442f64 ("metrics-server-74d5856cc6-stvzf_kube-system(92ed6941-1071-4757-9279-144187442f64)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 16 04:00:50 old-k8s-version-696770 kubelet[3099]: E0116 04:00:50.698861    3099 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jan 16 04:00:50 old-k8s-version-696770 kubelet[3099]: E0116 04:00:50.698995    3099 kuberuntime_image.go:50] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jan 16 04:00:50 old-k8s-version-696770 kubelet[3099]: E0116 04:00:50.699052    3099 kuberuntime_manager.go:783] container start failed: ErrImagePull: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jan 16 04:00:50 old-k8s-version-696770 kubelet[3099]: E0116 04:00:50.699088    3099 pod_workers.go:191] Error syncing pod 92ed6941-1071-4757-9279-144187442f64 ("metrics-server-74d5856cc6-stvzf_kube-system(92ed6941-1071-4757-9279-144187442f64)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	Jan 16 04:01:01 old-k8s-version-696770 kubelet[3099]: E0116 04:01:01.683951    3099 pod_workers.go:191] Error syncing pod 92ed6941-1071-4757-9279-144187442f64 ("metrics-server-74d5856cc6-stvzf_kube-system(92ed6941-1071-4757-9279-144187442f64)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 16 04:01:15 old-k8s-version-696770 kubelet[3099]: E0116 04:01:15.684519    3099 pod_workers.go:191] Error syncing pod 92ed6941-1071-4757-9279-144187442f64 ("metrics-server-74d5856cc6-stvzf_kube-system(92ed6941-1071-4757-9279-144187442f64)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 16 04:01:27 old-k8s-version-696770 kubelet[3099]: E0116 04:01:27.684249    3099 pod_workers.go:191] Error syncing pod 92ed6941-1071-4757-9279-144187442f64 ("metrics-server-74d5856cc6-stvzf_kube-system(92ed6941-1071-4757-9279-144187442f64)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 16 04:01:41 old-k8s-version-696770 kubelet[3099]: E0116 04:01:41.684390    3099 pod_workers.go:191] Error syncing pod 92ed6941-1071-4757-9279-144187442f64 ("metrics-server-74d5856cc6-stvzf_kube-system(92ed6941-1071-4757-9279-144187442f64)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 16 04:01:52 old-k8s-version-696770 kubelet[3099]: E0116 04:01:52.686149    3099 pod_workers.go:191] Error syncing pod 92ed6941-1071-4757-9279-144187442f64 ("metrics-server-74d5856cc6-stvzf_kube-system(92ed6941-1071-4757-9279-144187442f64)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 16 04:02:06 old-k8s-version-696770 kubelet[3099]: E0116 04:02:06.687665    3099 pod_workers.go:191] Error syncing pod 92ed6941-1071-4757-9279-144187442f64 ("metrics-server-74d5856cc6-stvzf_kube-system(92ed6941-1071-4757-9279-144187442f64)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 16 04:02:18 old-k8s-version-696770 kubelet[3099]: E0116 04:02:18.683732    3099 pod_workers.go:191] Error syncing pod 92ed6941-1071-4757-9279-144187442f64 ("metrics-server-74d5856cc6-stvzf_kube-system(92ed6941-1071-4757-9279-144187442f64)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 16 04:02:29 old-k8s-version-696770 kubelet[3099]: E0116 04:02:29.683454    3099 pod_workers.go:191] Error syncing pod 92ed6941-1071-4757-9279-144187442f64 ("metrics-server-74d5856cc6-stvzf_kube-system(92ed6941-1071-4757-9279-144187442f64)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 16 04:02:43 old-k8s-version-696770 kubelet[3099]: E0116 04:02:43.683769    3099 pod_workers.go:191] Error syncing pod 92ed6941-1071-4757-9279-144187442f64 ("metrics-server-74d5856cc6-stvzf_kube-system(92ed6941-1071-4757-9279-144187442f64)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 16 04:02:56 old-k8s-version-696770 kubelet[3099]: E0116 04:02:56.684140    3099 pod_workers.go:191] Error syncing pod 92ed6941-1071-4757-9279-144187442f64 ("metrics-server-74d5856cc6-stvzf_kube-system(92ed6941-1071-4757-9279-144187442f64)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 16 04:03:11 old-k8s-version-696770 kubelet[3099]: E0116 04:03:11.684072    3099 pod_workers.go:191] Error syncing pod 92ed6941-1071-4757-9279-144187442f64 ("metrics-server-74d5856cc6-stvzf_kube-system(92ed6941-1071-4757-9279-144187442f64)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 16 04:03:22 old-k8s-version-696770 kubelet[3099]: E0116 04:03:22.683726    3099 pod_workers.go:191] Error syncing pod 92ed6941-1071-4757-9279-144187442f64 ("metrics-server-74d5856cc6-stvzf_kube-system(92ed6941-1071-4757-9279-144187442f64)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	
	
	==> storage-provisioner [fada0ec84e00786bf019ab1f553ec3d864423c48bb87affed94838adc2503641] <==
	I0116 03:49:48.018162       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0116 03:49:48.032416       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0116 03:49:48.032755       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0116 03:49:48.051910       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0116 03:49:48.052397       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-696770_13756098-faf2-41ee-ad13-f44428773837!
	I0116 03:49:48.060292       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"8f74b8f6-0dff-418f-9281-00d4c0973e04", APIVersion:"v1", ResourceVersion:"399", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-696770_13756098-faf2-41ee-ad13-f44428773837 became leader
	I0116 03:49:48.153487       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-696770_13756098-faf2-41ee-ad13-f44428773837!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-696770 -n old-k8s-version-696770
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-696770 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-74d5856cc6-stvzf
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-696770 describe pod metrics-server-74d5856cc6-stvzf
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-696770 describe pod metrics-server-74d5856cc6-stvzf: exit status 1 (83.620829ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-74d5856cc6-stvzf" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-696770 describe pod metrics-server-74d5856cc6-stvzf: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (205.35s)

                                                
                                    

Test pass (248/310)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 6.85
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.09
9 TestDownloadOnly/v1.16.0/DeleteAll 0.16
10 TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds 0.15
12 TestDownloadOnly/v1.28.4/json-events 4.93
13 TestDownloadOnly/v1.28.4/preload-exists 0
17 TestDownloadOnly/v1.28.4/LogsDuration 0.08
18 TestDownloadOnly/v1.28.4/DeleteAll 0.16
19 TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds 0.15
21 TestDownloadOnly/v1.29.0-rc.2/json-events 4.26
22 TestDownloadOnly/v1.29.0-rc.2/preload-exists 0
26 TestDownloadOnly/v1.29.0-rc.2/LogsDuration 0.08
27 TestDownloadOnly/v1.29.0-rc.2/DeleteAll 0.15
28 TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds 0.14
30 TestBinaryMirror 0.62
31 TestOffline 95.25
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
36 TestAddons/Setup 164.78
38 TestAddons/parallel/Registry 19.86
40 TestAddons/parallel/InspektorGadget 12.47
41 TestAddons/parallel/MetricsServer 7.33
42 TestAddons/parallel/HelmTiller 10.83
44 TestAddons/parallel/CSI 67.12
45 TestAddons/parallel/Headlamp 13.65
46 TestAddons/parallel/CloudSpanner 5.73
47 TestAddons/parallel/LocalPath 53.57
48 TestAddons/parallel/NvidiaDevicePlugin 5.61
49 TestAddons/parallel/Yakd 6.01
52 TestAddons/serial/GCPAuth/Namespaces 0.12
54 TestCertOptions 61.29
55 TestCertExpiration 306.91
57 TestForceSystemdFlag 116.2
58 TestForceSystemdEnv 52.51
60 TestKVMDriverInstallOrUpdate 1.21
64 TestErrorSpam/setup 46.43
65 TestErrorSpam/start 0.43
66 TestErrorSpam/status 0.84
67 TestErrorSpam/pause 1.68
68 TestErrorSpam/unpause 1.76
69 TestErrorSpam/stop 2.28
72 TestFunctional/serial/CopySyncFile 0
73 TestFunctional/serial/StartWithProxy 72.1
74 TestFunctional/serial/AuditLog 0
75 TestFunctional/serial/SoftStart 36.47
76 TestFunctional/serial/KubeContext 0.05
77 TestFunctional/serial/KubectlGetPods 0.07
80 TestFunctional/serial/CacheCmd/cache/add_remote 3.44
81 TestFunctional/serial/CacheCmd/cache/add_local 1.16
82 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.07
83 TestFunctional/serial/CacheCmd/cache/list 0.07
84 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.28
85 TestFunctional/serial/CacheCmd/cache/cache_reload 1.91
86 TestFunctional/serial/CacheCmd/cache/delete 0.14
87 TestFunctional/serial/MinikubeKubectlCmd 0.14
88 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.13
89 TestFunctional/serial/ExtraConfig 36.71
90 TestFunctional/serial/ComponentHealth 0.07
91 TestFunctional/serial/LogsCmd 1.64
92 TestFunctional/serial/LogsFileCmd 1.64
93 TestFunctional/serial/InvalidService 4.28
95 TestFunctional/parallel/ConfigCmd 0.51
96 TestFunctional/parallel/DashboardCmd 30.22
97 TestFunctional/parallel/DryRun 0.36
98 TestFunctional/parallel/InternationalLanguage 0.18
99 TestFunctional/parallel/StatusCmd 1.5
103 TestFunctional/parallel/ServiceCmdConnect 10.02
104 TestFunctional/parallel/AddonsCmd 0.18
105 TestFunctional/parallel/PersistentVolumeClaim 33.76
107 TestFunctional/parallel/SSHCmd 0.49
108 TestFunctional/parallel/CpCmd 1.63
109 TestFunctional/parallel/MySQL 29.98
110 TestFunctional/parallel/FileSync 0.31
111 TestFunctional/parallel/CertSync 1.85
115 TestFunctional/parallel/NodeLabels 0.08
117 TestFunctional/parallel/NonActiveRuntimeDisabled 0.54
119 TestFunctional/parallel/License 0.24
120 TestFunctional/parallel/ServiceCmd/DeployApp 14.23
121 TestFunctional/parallel/ProfileCmd/profile_not_create 0.51
122 TestFunctional/parallel/MountCmd/any-port 15.22
123 TestFunctional/parallel/ProfileCmd/profile_list 0.33
124 TestFunctional/parallel/ProfileCmd/profile_json_output 0.33
125 TestFunctional/parallel/UpdateContextCmd/no_changes 0.13
126 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.14
127 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.14
128 TestFunctional/parallel/ServiceCmd/List 0.9
129 TestFunctional/parallel/ServiceCmd/JSONOutput 0.93
130 TestFunctional/parallel/MountCmd/specific-port 2.23
131 TestFunctional/parallel/ServiceCmd/HTTPS 0.36
132 TestFunctional/parallel/ServiceCmd/Format 0.46
133 TestFunctional/parallel/ServiceCmd/URL 0.62
134 TestFunctional/parallel/MountCmd/VerifyCleanup 1.72
144 TestFunctional/parallel/Version/short 0.09
145 TestFunctional/parallel/Version/components 0.91
146 TestFunctional/parallel/ImageCommands/ImageListShort 0.3
147 TestFunctional/parallel/ImageCommands/ImageListTable 0.32
148 TestFunctional/parallel/ImageCommands/ImageListJson 0.28
149 TestFunctional/parallel/ImageCommands/ImageListYaml 0.32
150 TestFunctional/parallel/ImageCommands/ImageBuild 3.03
151 TestFunctional/parallel/ImageCommands/Setup 1.13
152 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 5.46
153 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.56
154 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 5.18
155 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.25
156 TestFunctional/parallel/ImageCommands/ImageRemove 0.58
157 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.55
158 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.25
159 TestFunctional/delete_addon-resizer_images 0.07
160 TestFunctional/delete_my-image_image 0.02
161 TestFunctional/delete_minikube_cached_images 0.02
165 TestIngressAddonLegacy/StartLegacyK8sCluster 80.85
167 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 13.58
168 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.63
172 TestJSONOutput/start/Command 72.98
173 TestJSONOutput/start/Audit 0
175 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
176 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
178 TestJSONOutput/pause/Command 0.73
179 TestJSONOutput/pause/Audit 0
181 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
182 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
184 TestJSONOutput/unpause/Command 0.7
185 TestJSONOutput/unpause/Audit 0
187 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
188 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
190 TestJSONOutput/stop/Command 7.12
191 TestJSONOutput/stop/Audit 0
193 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
194 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
195 TestErrorJSONOutput 0.24
200 TestMainNoArgs 0.06
201 TestMinikubeProfile 100.06
204 TestMountStart/serial/StartWithMountFirst 26.82
205 TestMountStart/serial/VerifyMountFirst 0.43
206 TestMountStart/serial/StartWithMountSecond 25.73
207 TestMountStart/serial/VerifyMountSecond 0.42
208 TestMountStart/serial/DeleteFirst 0.57
209 TestMountStart/serial/VerifyMountPostDelete 0.43
210 TestMountStart/serial/Stop 1.11
211 TestMountStart/serial/RestartStopped 21.05
212 TestMountStart/serial/VerifyMountPostStop 0.43
215 TestMultiNode/serial/FreshStart2Nodes 106.77
216 TestMultiNode/serial/DeployApp2Nodes 4.75
218 TestMultiNode/serial/AddNode 41.47
219 TestMultiNode/serial/MultiNodeLabels 0.07
220 TestMultiNode/serial/ProfileList 0.23
221 TestMultiNode/serial/CopyFile 8.07
222 TestMultiNode/serial/StopNode 3.04
223 TestMultiNode/serial/StartAfterStop 28.51
225 TestMultiNode/serial/DeleteNode 1.52
227 TestMultiNode/serial/RestartMultiNode 447.23
228 TestMultiNode/serial/ValidateNameConflict 51.56
235 TestScheduledStopUnix 116.53
239 TestRunningBinaryUpgrade 244.55
241 TestKubernetesUpgrade 254.71
244 TestNoKubernetes/serial/StartNoK8sWithVersion 0.11
252 TestNoKubernetes/serial/StartWithK8s 110.8
260 TestNetworkPlugins/group/false 5.91
265 TestPause/serial/Start 105.63
266 TestNoKubernetes/serial/StartWithStopK8s 62.17
267 TestNoKubernetes/serial/Start 27.45
268 TestPause/serial/SecondStartNoReconfiguration 54.98
269 TestNoKubernetes/serial/VerifyK8sNotRunning 0.23
270 TestNoKubernetes/serial/ProfileList 30.85
271 TestPause/serial/Pause 1.05
272 TestPause/serial/VerifyStatus 0.32
273 TestPause/serial/Unpause 0.9
274 TestNoKubernetes/serial/Stop 1.39
275 TestPause/serial/PauseAgain 1.24
276 TestNoKubernetes/serial/StartNoArgs 22.1
277 TestPause/serial/DeletePaused 2.19
278 TestPause/serial/VerifyDeletedResources 3.64
279 TestStoppedBinaryUpgrade/Setup 0.61
280 TestStoppedBinaryUpgrade/Upgrade 146.97
281 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.24
283 TestStartStop/group/old-k8s-version/serial/FirstStart 157.37
284 TestStoppedBinaryUpgrade/MinikubeLogs 1.33
286 TestStartStop/group/no-preload/serial/FirstStart 120.39
288 TestStartStop/group/embed-certs/serial/FirstStart 93.47
289 TestStartStop/group/embed-certs/serial/DeployApp 10.32
290 TestStartStop/group/no-preload/serial/DeployApp 10.35
291 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.27
293 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.23
295 TestStartStop/group/old-k8s-version/serial/DeployApp 9.41
296 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.92
299 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 64.69
300 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.32
301 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.3
305 TestStartStop/group/embed-certs/serial/SecondStart 693.28
306 TestStartStop/group/no-preload/serial/SecondStart 602
308 TestStartStop/group/old-k8s-version/serial/SecondStart 734.63
310 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 528.43
320 TestStartStop/group/newest-cni/serial/FirstStart 61.19
321 TestNetworkPlugins/group/auto/Start 87.77
322 TestNetworkPlugins/group/kindnet/Start 84.64
323 TestStartStop/group/newest-cni/serial/DeployApp 0
324 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.7
325 TestStartStop/group/newest-cni/serial/Stop 12.19
326 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.27
327 TestStartStop/group/newest-cni/serial/SecondStart 55.08
328 TestNetworkPlugins/group/auto/KubeletFlags 0.26
329 TestNetworkPlugins/group/auto/NetCatPod 14.33
330 TestNetworkPlugins/group/auto/DNS 0.22
331 TestNetworkPlugins/group/auto/Localhost 0.18
332 TestNetworkPlugins/group/auto/HairPin 0.21
333 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
334 TestNetworkPlugins/group/kindnet/KubeletFlags 0.29
335 TestNetworkPlugins/group/kindnet/NetCatPod 13.38
336 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
337 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
338 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.41
339 TestStartStop/group/newest-cni/serial/Pause 3.33
340 TestNetworkPlugins/group/calico/Start 96.8
341 TestNetworkPlugins/group/custom-flannel/Start 116.11
342 TestNetworkPlugins/group/kindnet/DNS 0.22
343 TestNetworkPlugins/group/kindnet/Localhost 0.17
344 TestNetworkPlugins/group/kindnet/HairPin 0.16
345 TestNetworkPlugins/group/enable-default-cni/Start 104.41
346 TestNetworkPlugins/group/flannel/Start 126.26
347 TestNetworkPlugins/group/calico/ControllerPod 6.01
348 TestNetworkPlugins/group/calico/KubeletFlags 0.26
349 TestNetworkPlugins/group/calico/NetCatPod 12.52
350 TestNetworkPlugins/group/calico/DNS 0.24
351 TestNetworkPlugins/group/calico/Localhost 0.19
352 TestNetworkPlugins/group/calico/HairPin 0.19
353 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.27
354 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.33
355 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.29
356 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.31
357 TestNetworkPlugins/group/custom-flannel/DNS 0.22
358 TestNetworkPlugins/group/custom-flannel/Localhost 0.22
359 TestNetworkPlugins/group/custom-flannel/HairPin 0.2
360 TestNetworkPlugins/group/bridge/Start 78.64
361 TestNetworkPlugins/group/enable-default-cni/DNS 0.36
362 TestNetworkPlugins/group/enable-default-cni/Localhost 0.21
363 TestNetworkPlugins/group/enable-default-cni/HairPin 0.18
364 TestNetworkPlugins/group/flannel/ControllerPod 6.01
365 TestNetworkPlugins/group/flannel/KubeletFlags 0.22
366 TestNetworkPlugins/group/flannel/NetCatPod 11.29
367 TestNetworkPlugins/group/flannel/DNS 0.18
368 TestNetworkPlugins/group/flannel/Localhost 0.18
369 TestNetworkPlugins/group/flannel/HairPin 0.17
370 TestNetworkPlugins/group/bridge/KubeletFlags 0.25
371 TestNetworkPlugins/group/bridge/NetCatPod 11.27
372 TestNetworkPlugins/group/bridge/DNS 0.2
373 TestNetworkPlugins/group/bridge/Localhost 0.14
374 TestNetworkPlugins/group/bridge/HairPin 0.15
x
+
TestDownloadOnly/v1.16.0/json-events (6.85s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-795878 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-795878 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (6.848679375s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (6.85s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-795878
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-795878: exit status 85 (86.040105ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-795878 | jenkins | v1.32.0 | 16 Jan 24 02:34 UTC |          |
	|         | -p download-only-795878        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/16 02:34:16
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0116 02:34:16.180705  475489 out.go:296] Setting OutFile to fd 1 ...
	I0116 02:34:16.181016  475489 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 02:34:16.181028  475489 out.go:309] Setting ErrFile to fd 2...
	I0116 02:34:16.181033  475489 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 02:34:16.181240  475489 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17965-468241/.minikube/bin
	W0116 02:34:16.181375  475489 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17965-468241/.minikube/config/config.json: open /home/jenkins/minikube-integration/17965-468241/.minikube/config/config.json: no such file or directory
	I0116 02:34:16.182095  475489 out.go:303] Setting JSON to true
	I0116 02:34:16.183077  475489 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":11808,"bootTime":1705360648,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1048-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0116 02:34:16.183154  475489 start.go:138] virtualization: kvm guest
	I0116 02:34:16.186218  475489 out.go:97] [download-only-795878] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0116 02:34:16.188180  475489 out.go:169] MINIKUBE_LOCATION=17965
	W0116 02:34:16.186377  475489 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/17965-468241/.minikube/cache/preloaded-tarball: no such file or directory
	I0116 02:34:16.186462  475489 notify.go:220] Checking for updates...
	I0116 02:34:16.191806  475489 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0116 02:34:16.193445  475489 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17965-468241/kubeconfig
	I0116 02:34:16.195046  475489 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17965-468241/.minikube
	I0116 02:34:16.196672  475489 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0116 02:34:16.199613  475489 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0116 02:34:16.199936  475489 driver.go:392] Setting default libvirt URI to qemu:///system
	I0116 02:34:16.234179  475489 out.go:97] Using the kvm2 driver based on user configuration
	I0116 02:34:16.234223  475489 start.go:298] selected driver: kvm2
	I0116 02:34:16.234230  475489 start.go:902] validating driver "kvm2" against <nil>
	I0116 02:34:16.234680  475489 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0116 02:34:16.234805  475489 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17965-468241/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0116 02:34:16.251506  475489 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0116 02:34:16.251605  475489 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0116 02:34:16.252179  475489 start_flags.go:392] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0116 02:34:16.252341  475489 start_flags.go:909] Wait components to verify : map[apiserver:true system_pods:true]
	I0116 02:34:16.252406  475489 cni.go:84] Creating CNI manager for ""
	I0116 02:34:16.252416  475489 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0116 02:34:16.252429  475489 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0116 02:34:16.252435  475489 start_flags.go:321] config:
	{Name:download-only-795878 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-795878 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 02:34:16.252692  475489 iso.go:125] acquiring lock: {Name:mk2f3231f0eeeb23816ac363851489181398d1c6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0116 02:34:16.254974  475489 out.go:97] Downloading VM boot image ...
	I0116 02:34:16.255007  475489 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso.sha256 -> /home/jenkins/minikube-integration/17965-468241/.minikube/cache/iso/amd64/minikube-v1.32.1-1703784139-17866-amd64.iso
	I0116 02:34:18.669823  475489 out.go:97] Starting control plane node download-only-795878 in cluster download-only-795878
	I0116 02:34:18.669883  475489 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0116 02:34:18.691865  475489 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I0116 02:34:18.691911  475489 cache.go:56] Caching tarball of preloaded images
	I0116 02:34:18.692088  475489 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0116 02:34:18.694162  475489 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0116 02:34:18.694178  475489 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 ...
	I0116 02:34:18.722513  475489 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:432b600409d778ea7a21214e83948570 -> /home/jenkins/minikube-integration/17965-468241/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-795878"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.09s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/DeleteAll (0.16s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.16.0/DeleteAll (0.16s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-795878
--- PASS: TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/json-events (4.93s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-527490 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-527490 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (4.929989729s)
--- PASS: TestDownloadOnly/v1.28.4/json-events (4.93s)

                                                
                                    
TestDownloadOnly/v1.28.4/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/preload-exists
--- PASS: TestDownloadOnly/v1.28.4/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.4/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-527490
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-527490: exit status 85 (82.219762ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-795878 | jenkins | v1.32.0 | 16 Jan 24 02:34 UTC |                     |
	|         | -p download-only-795878        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.32.0 | 16 Jan 24 02:34 UTC | 16 Jan 24 02:34 UTC |
	| delete  | -p download-only-795878        | download-only-795878 | jenkins | v1.32.0 | 16 Jan 24 02:34 UTC | 16 Jan 24 02:34 UTC |
	| start   | -o=json --download-only        | download-only-527490 | jenkins | v1.32.0 | 16 Jan 24 02:34 UTC |                     |
	|         | -p download-only-527490        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/16 02:34:23
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0116 02:34:23.432931  475654 out.go:296] Setting OutFile to fd 1 ...
	I0116 02:34:23.433206  475654 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 02:34:23.433215  475654 out.go:309] Setting ErrFile to fd 2...
	I0116 02:34:23.433220  475654 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 02:34:23.433417  475654 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17965-468241/.minikube/bin
	I0116 02:34:23.434032  475654 out.go:303] Setting JSON to true
	I0116 02:34:23.434979  475654 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":11816,"bootTime":1705360648,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1048-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0116 02:34:23.435066  475654 start.go:138] virtualization: kvm guest
	I0116 02:34:23.437728  475654 out.go:97] [download-only-527490] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0116 02:34:23.439596  475654 out.go:169] MINIKUBE_LOCATION=17965
	I0116 02:34:23.437958  475654 notify.go:220] Checking for updates...
	I0116 02:34:23.442742  475654 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0116 02:34:23.444444  475654 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17965-468241/kubeconfig
	I0116 02:34:23.446250  475654 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17965-468241/.minikube
	I0116 02:34:23.448049  475654 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-527490"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.4/LogsDuration (0.08s)

                                                
                                    
TestDownloadOnly/v1.28.4/DeleteAll (0.16s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.4/DeleteAll (0.16s)

                                                
                                    
TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-527490
--- PASS: TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.15s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/json-events (4.26s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-084153 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-084153 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (4.260283315s)
--- PASS: TestDownloadOnly/v1.29.0-rc.2/json-events (4.26s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/preload-exists
--- PASS: TestDownloadOnly/v1.29.0-rc.2/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-084153
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-084153: exit status 85 (80.776634ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only           | download-only-795878 | jenkins | v1.32.0 | 16 Jan 24 02:34 UTC |                     |
	|         | -p download-only-795878           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0      |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|         | --driver=kvm2                     |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 16 Jan 24 02:34 UTC | 16 Jan 24 02:34 UTC |
	| delete  | -p download-only-795878           | download-only-795878 | jenkins | v1.32.0 | 16 Jan 24 02:34 UTC | 16 Jan 24 02:34 UTC |
	| start   | -o=json --download-only           | download-only-527490 | jenkins | v1.32.0 | 16 Jan 24 02:34 UTC |                     |
	|         | -p download-only-527490           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4      |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|         | --driver=kvm2                     |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 16 Jan 24 02:34 UTC | 16 Jan 24 02:34 UTC |
	| delete  | -p download-only-527490           | download-only-527490 | jenkins | v1.32.0 | 16 Jan 24 02:34 UTC | 16 Jan 24 02:34 UTC |
	| start   | -o=json --download-only           | download-only-084153 | jenkins | v1.32.0 | 16 Jan 24 02:34 UTC |                     |
	|         | -p download-only-084153           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2 |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|         | --driver=kvm2                     |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/16 02:34:28
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0116 02:34:28.758328  475807 out.go:296] Setting OutFile to fd 1 ...
	I0116 02:34:28.758623  475807 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 02:34:28.758634  475807 out.go:309] Setting ErrFile to fd 2...
	I0116 02:34:28.758638  475807 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 02:34:28.758828  475807 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17965-468241/.minikube/bin
	I0116 02:34:28.759441  475807 out.go:303] Setting JSON to true
	I0116 02:34:28.760372  475807 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":11821,"bootTime":1705360648,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1048-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0116 02:34:28.760442  475807 start.go:138] virtualization: kvm guest
	I0116 02:34:28.763033  475807 out.go:97] [download-only-084153] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0116 02:34:28.765013  475807 out.go:169] MINIKUBE_LOCATION=17965
	I0116 02:34:28.763255  475807 notify.go:220] Checking for updates...
	I0116 02:34:28.768852  475807 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0116 02:34:28.770756  475807 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17965-468241/kubeconfig
	I0116 02:34:28.772323  475807 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17965-468241/.minikube
	I0116 02:34:28.774419  475807 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-084153"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.08s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/DeleteAll (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAll (0.15s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-084153
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestBinaryMirror (0.62s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-681946 --alsologtostderr --binary-mirror http://127.0.0.1:35673 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-681946" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-681946
--- PASS: TestBinaryMirror (0.62s)

                                                
                                    
TestOffline (95.25s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-431037 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-431037 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m34.097141266s)
helpers_test.go:175: Cleaning up "offline-crio-431037" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-431037
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-431037: (1.15263844s)
--- PASS: TestOffline (95.25s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-690916
addons_test.go:928: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-690916: exit status 85 (70.218664ms)

                                                
                                                
-- stdout --
	* Profile "addons-690916" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-690916"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-690916
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-690916: exit status 85 (71.626238ms)

                                                
                                                
-- stdout --
	* Profile "addons-690916" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-690916"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
TestAddons/Setup (164.78s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-amd64 start -p addons-690916 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-linux-amd64 start -p addons-690916 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m44.781935761s)
--- PASS: TestAddons/Setup (164.78s)

                                                
                                    
TestAddons/parallel/Registry (19.86s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 29.700226ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-lc5z7" [1f13401c-40a4-41b7-978b-4946e00babb5] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.005406106s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-9db6c" [0e95c7bb-1c5d-4a03-9ca4-1c48c5270c4d] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.008213892s
addons_test.go:340: (dbg) Run:  kubectl --context addons-690916 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-690916 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-690916 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (7.761742637s)
addons_test.go:359: (dbg) Run:  out/minikube-linux-amd64 -p addons-690916 ip
2024/01/16 02:37:38 [DEBUG] GET http://192.168.39.234:5000
addons_test.go:388: (dbg) Run:  out/minikube-linux-amd64 -p addons-690916 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (19.86s)

                                                
                                    
TestAddons/parallel/InspektorGadget (12.47s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-kb7s6" [c418b0dc-db86-4bb9-8ad3-effa2ac33317] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004991s
addons_test.go:841: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-690916
addons_test.go:841: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-690916: (6.467604098s)
--- PASS: TestAddons/parallel/InspektorGadget (12.47s)

                                                
                                    
TestAddons/parallel/MetricsServer (7.33s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 4.214211ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-9nqgd" [cc5a4e79-918b-4fdc-934b-8f301e03f744] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.006299563s
addons_test.go:415: (dbg) Run:  kubectl --context addons-690916 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-linux-amd64 -p addons-690916 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:432: (dbg) Done: out/minikube-linux-amd64 -p addons-690916 addons disable metrics-server --alsologtostderr -v=1: (1.233555361s)
--- PASS: TestAddons/parallel/MetricsServer (7.33s)

                                                
                                    
TestAddons/parallel/HelmTiller (10.83s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:456: tiller-deploy stabilized in 4.525464ms
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-8gmxx" [32fe0688-1256-49e9-a768-99e587db34c8] Running
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.009559248s
addons_test.go:473: (dbg) Run:  kubectl --context addons-690916 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:473: (dbg) Done: kubectl --context addons-690916 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (4.934543106s)
addons_test.go:490: (dbg) Run:  out/minikube-linux-amd64 -p addons-690916 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (10.83s)

                                                
                                    
TestAddons/parallel/CSI (67.12s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 29.922336ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-690916 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-690916 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-690916 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-690916 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-690916 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-690916 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-690916 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-690916 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-690916 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-690916 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-690916 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-690916 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-690916 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-690916 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-690916 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-690916 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-690916 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-690916 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-690916 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-690916 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-690916 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-690916 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [6bacf63e-2f7d-42df-9119-f9f8216b43ca] Pending
helpers_test.go:344: "task-pv-pod" [6bacf63e-2f7d-42df-9119-f9f8216b43ca] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [6bacf63e-2f7d-42df-9119-f9f8216b43ca] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 16.004604948s
addons_test.go:584: (dbg) Run:  kubectl --context addons-690916 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-690916 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-690916 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-690916 delete pod task-pv-pod
addons_test.go:594: (dbg) Done: kubectl --context addons-690916 delete pod task-pv-pod: (1.178948299s)
addons_test.go:600: (dbg) Run:  kubectl --context addons-690916 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-690916 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-690916 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-690916 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-690916 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-690916 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-690916 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-690916 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-690916 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-690916 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-690916 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-690916 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-690916 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-690916 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [8e7d830e-e66d-4ec0-a1c4-3bbc64dabf73] Pending
helpers_test.go:344: "task-pv-pod-restore" [8e7d830e-e66d-4ec0-a1c4-3bbc64dabf73] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [8e7d830e-e66d-4ec0-a1c4-3bbc64dabf73] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 10.005347199s
addons_test.go:626: (dbg) Run:  kubectl --context addons-690916 delete pod task-pv-pod-restore
addons_test.go:630: (dbg) Run:  kubectl --context addons-690916 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-690916 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-linux-amd64 -p addons-690916 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-linux-amd64 -p addons-690916 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.899222232s)
addons_test.go:642: (dbg) Run:  out/minikube-linux-amd64 -p addons-690916 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (67.12s)

                                                
                                    
TestAddons/parallel/Headlamp (13.65s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-690916 --alsologtostderr -v=1
addons_test.go:824: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-690916 --alsologtostderr -v=1: (1.639344616s)
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7ddfbb94ff-d8hns" [2821060b-5918-451a-a1f1-1be30e4dc855] Pending
helpers_test.go:344: "headlamp-7ddfbb94ff-d8hns" [2821060b-5918-451a-a1f1-1be30e4dc855] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7ddfbb94ff-d8hns" [2821060b-5918-451a-a1f1-1be30e4dc855] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.005054417s
--- PASS: TestAddons/parallel/Headlamp (13.65s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.73s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-64c8c85f65-zjtb8" [878d5b98-e1cb-4f9a-b35e-6303ba9cfd67] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004420984s
addons_test.go:860: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-690916
--- PASS: TestAddons/parallel/CloudSpanner (5.73s)

                                                
                                    
TestAddons/parallel/LocalPath (53.57s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-690916 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-690916 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-690916 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-690916 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-690916 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-690916 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-690916 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-690916 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [14c7c812-44d7-4a69-83bc-55caf95cc79c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [14c7c812-44d7-4a69-83bc-55caf95cc79c] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [14c7c812-44d7-4a69-83bc-55caf95cc79c] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.004831221s
addons_test.go:891: (dbg) Run:  kubectl --context addons-690916 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-linux-amd64 -p addons-690916 ssh "cat /opt/local-path-provisioner/pvc-e5d0b07b-ee14-47a2-bd87-8d60dd23d5f0_default_test-pvc/file1"
addons_test.go:912: (dbg) Run:  kubectl --context addons-690916 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-690916 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-linux-amd64 -p addons-690916 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:920: (dbg) Done: out/minikube-linux-amd64 -p addons-690916 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.625941688s)
--- PASS: TestAddons/parallel/LocalPath (53.57s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (5.61s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-p8gsf" [3df66016-9e6d-4756-b004-b80a4bab9fad] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.005903682s
addons_test.go:955: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-690916
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.61s)

                                                
                                    
TestAddons/parallel/Yakd (6.01s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-9947fc6bf-7d5hp" [751a0dad-e5ce-44e0-888c-bf7e74f9e70e] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004650218s
--- PASS: TestAddons/parallel/Yakd (6.01s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.12s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-690916 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-690916 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

                                                
                                    
TestCertOptions (61.29s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-977008 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-977008 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (59.851084226s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-977008 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-977008 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-977008 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-977008" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-977008
--- PASS: TestCertOptions (61.29s)

                                                
                                    
TestCertExpiration (306.91s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-690771 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-690771 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m25.447989741s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-690771 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-690771 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (40.554841627s)
helpers_test.go:175: Cleaning up "cert-expiration-690771" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-690771
--- PASS: TestCertExpiration (306.91s)

                                                
                                    
TestForceSystemdFlag (116.2s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-614852 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
E0116 03:31:49.161376  475478 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/functional-193417/client.crt: no such file or directory
E0116 03:32:19.245870  475478 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/addons-690916/client.crt: no such file or directory
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-614852 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m54.945418252s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-614852 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-614852" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-614852
--- PASS: TestForceSystemdFlag (116.20s)

                                                
                                    
TestForceSystemdEnv (52.51s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-245335 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-245335 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (51.808990443s)
helpers_test.go:175: Cleaning up "force-systemd-env-245335" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-245335
--- PASS: TestForceSystemdEnv (52.51s)

                                                
                                    
TestKVMDriverInstallOrUpdate (1.21s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (1.21s)

                                                
                                    
TestErrorSpam/setup (46.43s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-023417 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-023417 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-023417 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-023417 --driver=kvm2  --container-runtime=crio: (46.42490056s)
--- PASS: TestErrorSpam/setup (46.43s)

                                                
                                    
TestErrorSpam/start (0.43s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-023417 --log_dir /tmp/nospam-023417 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-023417 --log_dir /tmp/nospam-023417 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-023417 --log_dir /tmp/nospam-023417 start --dry-run
--- PASS: TestErrorSpam/start (0.43s)

                                                
                                    
TestErrorSpam/status (0.84s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-023417 --log_dir /tmp/nospam-023417 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-023417 --log_dir /tmp/nospam-023417 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-023417 --log_dir /tmp/nospam-023417 status
--- PASS: TestErrorSpam/status (0.84s)

                                                
                                    
TestErrorSpam/pause (1.68s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-023417 --log_dir /tmp/nospam-023417 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-023417 --log_dir /tmp/nospam-023417 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-023417 --log_dir /tmp/nospam-023417 pause
--- PASS: TestErrorSpam/pause (1.68s)

                                                
                                    
TestErrorSpam/unpause (1.76s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-023417 --log_dir /tmp/nospam-023417 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-023417 --log_dir /tmp/nospam-023417 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-023417 --log_dir /tmp/nospam-023417 unpause
--- PASS: TestErrorSpam/unpause (1.76s)

                                                
                                    
TestErrorSpam/stop (2.28s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-023417 --log_dir /tmp/nospam-023417 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-023417 --log_dir /tmp/nospam-023417 stop: (2.100766337s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-023417 --log_dir /tmp/nospam-023417 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-023417 --log_dir /tmp/nospam-023417 stop
--- PASS: TestErrorSpam/stop (2.28s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1854: local sync path: /home/jenkins/minikube-integration/17965-468241/.minikube/files/etc/test/nested/copy/475478/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (72.1s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2233: (dbg) Run:  out/minikube-linux-amd64 start -p functional-193417 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
functional_test.go:2233: (dbg) Done: out/minikube-linux-amd64 start -p functional-193417 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m12.101370159s)
--- PASS: TestFunctional/serial/StartWithProxy (72.10s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (36.47s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-193417 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-193417 --alsologtostderr -v=8: (36.472179372s)
functional_test.go:659: soft start took 36.472919533s for "functional-193417" cluster.
--- PASS: TestFunctional/serial/SoftStart (36.47s)

                                                
                                    
TestFunctional/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-193417 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.44s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-193417 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-193417 cache add registry.k8s.io/pause:3.1: (1.086431972s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-193417 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-193417 cache add registry.k8s.io/pause:3.3: (1.098601578s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-193417 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-193417 cache add registry.k8s.io/pause:latest: (1.254480434s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.44s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.16s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-193417 /tmp/TestFunctionalserialCacheCmdcacheadd_local4022943832/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-193417 cache add minikube-local-cache-test:functional-193417
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-193417 cache delete minikube-local-cache-test:functional-193417
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-193417
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.16s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-193417 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.91s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-193417 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-193417 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-193417 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (260.309777ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-193417 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-linux-amd64 -p functional-193417 cache reload: (1.07430284s)
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-193417 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.91s)
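
The cache_reload steps above remove a cached image from the node's container runtime, confirm crictl no longer finds it, run a cache reload, and confirm the image is back. The following is a minimal standalone sketch of that same round trip, assuming the functional-193417 profile from this log and a minikube binary on PATH; the real assertions live in functional_test.go.

	package main

	import (
		"fmt"
		"log"
		"os/exec"
	)

	// run executes a command and returns its combined output plus any error.
	func run(name string, args ...string) (string, error) {
		out, err := exec.Command(name, args...).CombinedOutput()
		return string(out), err
	}

	func main() {
		const profile = "functional-193417" // profile name taken from the log above
		const image = "registry.k8s.io/pause:latest"

		// 1. Remove the image from the node's container runtime.
		if out, err := run("minikube", "-p", profile, "ssh", "sudo crictl rmi "+image); err != nil {
			log.Fatalf("rmi failed: %v\n%s", err, out)
		}

		// 2. inspecti should now fail, because the image is gone from the node.
		if _, err := run("minikube", "-p", profile, "ssh", "sudo crictl inspecti "+image); err == nil {
			log.Fatal("expected inspecti to fail after rmi, but it succeeded")
		}

		// 3. Reload everything in minikube's local cache back into the node.
		if out, err := run("minikube", "-p", profile, "cache", "reload"); err != nil {
			log.Fatalf("cache reload failed: %v\n%s", err, out)
		}

		// 4. The image should be present again.
		if out, err := run("minikube", "-p", profile, "ssh", "sudo crictl inspecti "+image); err != nil {
			log.Fatalf("image still missing after reload: %v\n%s", err, out)
		}
		fmt.Println("cache reload round-trip succeeded")
	}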

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.14s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.14s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.14s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-193417 kubectl -- --context functional-193417 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-193417 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

                                                
                                    
TestFunctional/serial/ExtraConfig (36.71s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-193417 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-193417 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (36.708821725s)
functional_test.go:757: restart took 36.708962093s for "functional-193417" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (36.71s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-193417 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)
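
The ComponentHealth check above lists control-plane pods with -l tier=control-plane -n kube-system -o=json and verifies each is Running and Ready. A sketch of the same check, assuming kubectl and the functional-193417 context, decoding only the fields the check needs:

	package main

	import (
		"encoding/json"
		"fmt"
		"log"
		"os/exec"
	)

	// podList mirrors just the fields of `kubectl get po -o json` used below.
	type podList struct {
		Items []struct {
			Metadata struct {
				Name string `json:"name"`
			} `json:"metadata"`
			Status struct {
				Phase      string `json:"phase"`
				Conditions []struct {
					Type   string `json:"type"`
					Status string `json:"status"`
				} `json:"conditions"`
			} `json:"status"`
		} `json:"items"`
	}

	func main() {
		out, err := exec.Command("kubectl", "--context", "functional-193417",
			"get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o=json").Output()
		if err != nil {
			log.Fatal(err)
		}
		var pods podList
		if err := json.Unmarshal(out, &pods); err != nil {
			log.Fatal(err)
		}
		for _, p := range pods.Items {
			ready := "Unknown"
			for _, c := range p.Status.Conditions {
				if c.Type == "Ready" {
					ready = c.Status
				}
			}
			fmt.Printf("%s phase: %s, Ready: %s\n", p.Metadata.Name, p.Status.Phase, ready)
			if p.Status.Phase != "Running" || ready != "True" {
				log.Fatalf("control-plane pod %s is not healthy", p.Metadata.Name)
			}
		}
	}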

                                                
                                    
TestFunctional/serial/LogsCmd (1.64s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-193417 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-193417 logs: (1.641186005s)
--- PASS: TestFunctional/serial/LogsCmd (1.64s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.64s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-193417 logs --file /tmp/TestFunctionalserialLogsFileCmd2511295040/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-193417 logs --file /tmp/TestFunctionalserialLogsFileCmd2511295040/001/logs.txt: (1.636219007s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.64s)

                                                
                                    
TestFunctional/serial/InvalidService (4.28s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2320: (dbg) Run:  kubectl --context functional-193417 apply -f testdata/invalidsvc.yaml
functional_test.go:2334: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-193417
functional_test.go:2334: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-193417: exit status 115 (331.913083ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.50.41:32343 |
	|-----------|-------------|-------------|----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2326: (dbg) Run:  kubectl --context functional-193417 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.28s)
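
The InvalidService run above expects the service command to fail with exit status 115 and an SVC_UNREACHABLE message when the service has no running pods. A sketch of checking a specific non-zero exit code from Go, reusing the binary path and profile shown in the log:

	package main

	import (
		"errors"
		"fmt"
		"log"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-amd64", "service", "invalid-svc", "-p", "functional-193417")
		out, err := cmd.CombinedOutput()

		var exitErr *exec.ExitError
		switch {
		case err == nil:
			log.Fatal("expected a non-zero exit for a service with no running pods")
		case errors.As(err, &exitErr):
			// The test above expects exit status 115 (SVC_UNREACHABLE).
			fmt.Printf("exit code: %d\n", exitErr.ExitCode())
			fmt.Printf("output:\n%s\n", out)
		default:
			log.Fatalf("could not run minikube at all: %v", err)
		}
	}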

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-193417 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-193417 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-193417 config get cpus: exit status 14 (75.087909ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-193417 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-193417 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-193417 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-193417 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-193417 config get cpus: exit status 14 (78.009926ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.51s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (30.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-193417 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-193417 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 482347: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (30.22s)

                                                
                                    
TestFunctional/parallel/DryRun (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-193417 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-193417 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (180.500295ms)

                                                
                                                
-- stdout --
	* [functional-193417] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17965
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17965-468241/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17965-468241/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0116 02:46:51.901202  482147 out.go:296] Setting OutFile to fd 1 ...
	I0116 02:46:51.901375  482147 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 02:46:51.901411  482147 out.go:309] Setting ErrFile to fd 2...
	I0116 02:46:51.901421  482147 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 02:46:51.901856  482147 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17965-468241/.minikube/bin
	I0116 02:46:51.902677  482147 out.go:303] Setting JSON to false
	I0116 02:46:51.904143  482147 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":12564,"bootTime":1705360648,"procs":241,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1048-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0116 02:46:51.904236  482147 start.go:138] virtualization: kvm guest
	I0116 02:46:51.906751  482147 out.go:177] * [functional-193417] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0116 02:46:51.908603  482147 notify.go:220] Checking for updates...
	I0116 02:46:51.908622  482147 out.go:177]   - MINIKUBE_LOCATION=17965
	I0116 02:46:51.910493  482147 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0116 02:46:51.912279  482147 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17965-468241/kubeconfig
	I0116 02:46:51.913973  482147 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17965-468241/.minikube
	I0116 02:46:51.915496  482147 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0116 02:46:51.916890  482147 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0116 02:46:51.918821  482147 config.go:182] Loaded profile config "functional-193417": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 02:46:51.919382  482147 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 02:46:51.919458  482147 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:46:51.935482  482147 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40677
	I0116 02:46:51.935978  482147 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:46:51.936647  482147 main.go:141] libmachine: Using API Version  1
	I0116 02:46:51.936678  482147 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:46:51.937099  482147 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:46:51.937323  482147 main.go:141] libmachine: (functional-193417) Calling .DriverName
	I0116 02:46:51.937627  482147 driver.go:392] Setting default libvirt URI to qemu:///system
	I0116 02:46:51.938109  482147 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 02:46:51.938168  482147 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:46:51.955092  482147 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44283
	I0116 02:46:51.955636  482147 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:46:51.956273  482147 main.go:141] libmachine: Using API Version  1
	I0116 02:46:51.956309  482147 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:46:51.956661  482147 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:46:51.956871  482147 main.go:141] libmachine: (functional-193417) Calling .DriverName
	I0116 02:46:51.993430  482147 out.go:177] * Using the kvm2 driver based on existing profile
	I0116 02:46:51.995021  482147 start.go:298] selected driver: kvm2
	I0116 02:46:51.995043  482147 start.go:902] validating driver "kvm2" against &{Name:functional-193417 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.28.4 ClusterName:functional-193417 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.50.41 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertEx
piration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 02:46:51.995229  482147 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0116 02:46:51.997812  482147 out.go:177] 
	W0116 02:46:51.999282  482147 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0116 02:46:52.000672  482147 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-193417 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.36s)
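
The dry run above is rejected with RSRC_INSUFFICIENT_REQ_MEMORY because 250MiB is below the usable minimum of 1800MB reported in the log. A simplified sketch of that kind of pre-flight validation; this is not minikube's actual implementation, and the constant below is only the floor quoted in the message above.

	package main

	import (
		"fmt"
		"log"
	)

	// minUsableMemMiB mirrors the floor reported in the log above; the real
	// constant and unit handling live in minikube's start validation code.
	const minUsableMemMiB = 1800

	// validateRequestedMemory rejects allocations below the usable minimum,
	// the same class of check that produces RSRC_INSUFFICIENT_REQ_MEMORY.
	func validateRequestedMemory(reqMiB int) error {
		if reqMiB < minUsableMemMiB {
			return fmt.Errorf("requested memory allocation %dMiB is less than the usable minimum of %dMiB",
				reqMiB, minUsableMemMiB)
		}
		return nil
	}

	func main() {
		for _, req := range []int{250, 4000} {
			if err := validateRequestedMemory(req); err != nil {
				log.Printf("rejected: %v", err)
				continue
			}
			fmt.Printf("accepted: %dMiB\n", req)
		}
	}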

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-193417 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-193417 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (184.407773ms)

                                                
                                                
-- stdout --
	* [functional-193417] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17965
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17965-468241/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17965-468241/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0116 02:46:50.634134  481878 out.go:296] Setting OutFile to fd 1 ...
	I0116 02:46:50.634407  481878 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 02:46:50.634423  481878 out.go:309] Setting ErrFile to fd 2...
	I0116 02:46:50.634431  481878 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 02:46:50.635073  481878 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17965-468241/.minikube/bin
	I0116 02:46:50.636236  481878 out.go:303] Setting JSON to false
	I0116 02:46:50.637693  481878 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":12563,"bootTime":1705360648,"procs":218,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1048-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0116 02:46:50.637795  481878 start.go:138] virtualization: kvm guest
	I0116 02:46:50.640575  481878 out.go:177] * [functional-193417] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	I0116 02:46:50.642661  481878 out.go:177]   - MINIKUBE_LOCATION=17965
	I0116 02:46:50.642717  481878 notify.go:220] Checking for updates...
	I0116 02:46:50.646195  481878 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0116 02:46:50.647983  481878 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17965-468241/kubeconfig
	I0116 02:46:50.649691  481878 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17965-468241/.minikube
	I0116 02:46:50.651283  481878 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0116 02:46:50.652770  481878 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0116 02:46:50.654970  481878 config.go:182] Loaded profile config "functional-193417": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 02:46:50.655380  481878 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 02:46:50.655447  481878 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:46:50.673034  481878 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38961
	I0116 02:46:50.673481  481878 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:46:50.674126  481878 main.go:141] libmachine: Using API Version  1
	I0116 02:46:50.674141  481878 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:46:50.674555  481878 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:46:50.674752  481878 main.go:141] libmachine: (functional-193417) Calling .DriverName
	I0116 02:46:50.675009  481878 driver.go:392] Setting default libvirt URI to qemu:///system
	I0116 02:46:50.675358  481878 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 02:46:50.675407  481878 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:46:50.693367  481878 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44853
	I0116 02:46:50.693851  481878 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:46:50.694404  481878 main.go:141] libmachine: Using API Version  1
	I0116 02:46:50.694428  481878 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:46:50.694751  481878 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:46:50.695035  481878 main.go:141] libmachine: (functional-193417) Calling .DriverName
	I0116 02:46:50.734695  481878 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0116 02:46:50.736211  481878 start.go:298] selected driver: kvm2
	I0116 02:46:50.736233  481878 start.go:902] validating driver "kvm2" against &{Name:functional-193417 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.28.4 ClusterName:functional-193417 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.50.41 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertEx
piration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 02:46:50.736374  481878 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0116 02:46:50.738800  481878 out.go:177] 
	W0116 02:46:50.740182  481878 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0116 02:46:50.741988  481878 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.18s)

                                                
                                    
TestFunctional/parallel/StatusCmd (1.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-193417 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-193417 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-193417 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.50s)
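
The -f flag exercised above formats status output with a Go text/template (the "kublet:" label typo is in the test's own format string). A standalone sketch of how such a template renders: the Status struct here is an illustrative stand-in, with field names taken from the template in the log.

	package main

	import (
		"log"
		"os"
		"text/template"
	)

	// Status is an illustrative stand-in for the struct minikube renders with -f;
	// the field names match the template used in the log above.
	type Status struct {
		Host       string
		Kubelet    string
		APIServer  string
		Kubeconfig string
	}

	func main() {
		// The same format string passed to `status -f` in the test above.
		const format = "host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}\n"

		tmpl, err := template.New("status").Parse(format)
		if err != nil {
			log.Fatal(err)
		}
		st := Status{Host: "Running", Kubelet: "Running", APIServer: "Running", Kubeconfig: "Configured"}
		if err := tmpl.Execute(os.Stdout, st); err != nil {
			log.Fatal(err)
		}
	}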

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (10.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1628: (dbg) Run:  kubectl --context functional-193417 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-193417 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-99xlk" [45a4de4e-3bf9-421e-b3c1-b88e1d1901f3] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-99xlk" [45a4de4e-3bf9-421e-b3c1-b88e1d1901f3] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 9.059705281s
functional_test.go:1648: (dbg) Run:  out/minikube-linux-amd64 -p functional-193417 service hello-node-connect --url
E0116 02:47:19.245982  475478 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/addons-690916/client.crt: no such file or directory
E0116 02:47:19.252098  475478 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/addons-690916/client.crt: no such file or directory
E0116 02:47:19.262409  475478 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/addons-690916/client.crt: no such file or directory
E0116 02:47:19.282768  475478 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/addons-690916/client.crt: no such file or directory
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.50.41:31548
functional_test.go:1674: http://192.168.50.41:31548: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-55497b8b78-99xlk

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.50.41:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.50.41:31548
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (10.02s)
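
ServiceCmdConnect creates a deployment, exposes it as a NodePort service, resolves the URL, and expects a successful HTTP GET (the echoserver response is shown above). A sketch of the last two steps, assuming the deployment and service already exist and using the same binary path and profile as the log:

	package main

	import (
		"fmt"
		"io"
		"log"
		"net/http"
		"os/exec"
		"strings"
		"time"
	)

	func main() {
		// Resolve the NodePort URL the same way the test does.
		out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-193417",
			"service", "hello-node-connect", "--url").Output()
		if err != nil {
			log.Fatalf("could not resolve service URL: %v", err)
		}
		url := strings.TrimSpace(string(out))

		client := &http.Client{Timeout: 10 * time.Second}
		resp, err := client.Get(url)
		if err != nil {
			log.Fatalf("GET %s failed: %v", url, err)
		}
		defer resp.Body.Close()

		body, _ := io.ReadAll(resp.Body)
		if resp.StatusCode != http.StatusOK {
			log.Fatalf("unexpected status %d from %s", resp.StatusCode, url)
		}
		fmt.Printf("%s responded:\n%s\n", url, body)
	}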

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-linux-amd64 -p functional-193417 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-linux-amd64 -p functional-193417 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.18s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (33.76s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [3f270232-ced3-43fd-be8c-e5a0c3e72492] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.023791525s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-193417 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-193417 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-193417 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-193417 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-193417 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [7aba68c6-4606-42a7-8d61-61fa79122dc8] Pending
helpers_test.go:344: "sp-pod" [7aba68c6-4606-42a7-8d61-61fa79122dc8] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [7aba68c6-4606-42a7-8d61-61fa79122dc8] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 15.004921194s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-193417 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-193417 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-193417 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [880250e7-7ae1-40d3-bf26-4f8bde8c3cfb] Pending
helpers_test.go:344: "sp-pod" [880250e7-7ae1-40d3-bf26-4f8bde8c3cfb] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [880250e7-7ae1-40d3-bf26-4f8bde8c3cfb] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.004785579s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-193417 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (33.76s)
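
The core assertion above is persistence across pods: write a file through the first pod, delete the pod, recreate it from the same manifest, and confirm the file is still on the claim. A sketch of that write/delete/recreate/read loop with kubectl, assuming the testdata manifests referenced in the log; the real test also waits for pod readiness between steps.

	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	// kubectl runs a kubectl command against the functional-193417 context.
	func kubectl(args ...string) (string, error) {
		full := append([]string{"--context", "functional-193417"}, args...)
		out, err := exec.Command("kubectl", full...).CombinedOutput()
		return string(out), err
	}

	func main() {
		// Write a marker file into the volume through the first pod.
		if out, err := kubectl("exec", "sp-pod", "--", "touch", "/tmp/mount/foo"); err != nil {
			log.Fatalf("write failed: %v\n%s", err, out)
		}

		// Delete the pod; the claim (and the data behind it) must survive.
		if out, err := kubectl("delete", "-f", "testdata/storage-provisioner/pod.yaml"); err != nil {
			log.Fatalf("delete failed: %v\n%s", err, out)
		}

		// Recreate the pod from the same manifest; it mounts the same PVC.
		// (The real test waits for the new pod to become Ready here.)
		if out, err := kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml"); err != nil {
			log.Fatalf("apply failed: %v\n%s", err, out)
		}

		// The marker written through the old pod must be visible from the new one.
		out, err := kubectl("exec", "sp-pod", "--", "ls", "/tmp/mount")
		if err != nil {
			log.Fatalf("ls failed: %v\n%s", err, out)
		}
		if !strings.Contains(out, "foo") {
			log.Fatalf("expected foo to persist across pods, got:\n%s", out)
		}
		fmt.Println("data persisted across pod recreation")
	}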

                                                
                                    
TestFunctional/parallel/SSHCmd (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-linux-amd64 -p functional-193417 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-linux-amd64 -p functional-193417 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.49s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-193417 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-193417 ssh -n functional-193417 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-193417 cp functional-193417:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3708973124/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-193417 ssh -n functional-193417 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-193417 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-193417 ssh -n functional-193417 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.63s)

                                                
                                    
TestFunctional/parallel/MySQL (29.98s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: (dbg) Run:  kubectl --context functional-193417 replace --force -f testdata/mysql.yaml
functional_test.go:1798: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-qsjqf" [93d283f8-2831-41a5-ba46-79458a1c3934] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-qsjqf" [93d283f8-2831-41a5-ba46-79458a1c3934] Running
functional_test.go:1798: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 25.006620853s
functional_test.go:1806: (dbg) Run:  kubectl --context functional-193417 exec mysql-859648c796-qsjqf -- mysql -ppassword -e "show databases;"
functional_test.go:1806: (dbg) Non-zero exit: kubectl --context functional-193417 exec mysql-859648c796-qsjqf -- mysql -ppassword -e "show databases;": exit status 1 (393.179156ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1806: (dbg) Run:  kubectl --context functional-193417 exec mysql-859648c796-qsjqf -- mysql -ppassword -e "show databases;"
E0116 02:47:19.885866  475478 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/addons-690916/client.crt: no such file or directory
functional_test.go:1806: (dbg) Non-zero exit: kubectl --context functional-193417 exec mysql-859648c796-qsjqf -- mysql -ppassword -e "show databases;": exit status 1 (367.180552ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
E0116 02:47:20.526847  475478 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/addons-690916/client.crt: no such file or directory
functional_test.go:1806: (dbg) Run:  kubectl --context functional-193417 exec mysql-859648c796-qsjqf -- mysql -ppassword -e "show databases;"
2024/01/16 02:47:22 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/MySQL (29.98s)
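
The two ERROR 2002 failures above happen because the pod is Running before mysqld is actually accepting connections; the test simply retries the query until it succeeds. A small retry-with-backoff sketch around the same command, using the pod name from the log:

	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"time"
	)

	func main() {
		const pod = "mysql-859648c796-qsjqf" // pod name from the log above
		var lastErr error

		// The pod can be Running before mysqld accepts connections, so retry
		// the query for a while instead of failing on the first ERROR 2002.
		for attempt := 1; attempt <= 10; attempt++ {
			out, err := exec.Command("kubectl", "--context", "functional-193417",
				"exec", pod, "--", "mysql", "-ppassword", "-e", "show databases;").CombinedOutput()
			if err == nil {
				fmt.Printf("query succeeded on attempt %d:\n%s\n", attempt, out)
				return
			}
			lastErr = fmt.Errorf("attempt %d: %v\n%s", attempt, err, out)
			time.Sleep(time.Duration(attempt) * time.Second) // linear backoff
		}
		log.Fatalf("mysql never became reachable: %v", lastErr)
	}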

                                                
                                    
TestFunctional/parallel/FileSync (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1928: Checking for existence of /etc/test/nested/copy/475478/hosts within VM
functional_test.go:1930: (dbg) Run:  out/minikube-linux-amd64 -p functional-193417 ssh "sudo cat /etc/test/nested/copy/475478/hosts"
functional_test.go:1935: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.31s)

                                                
                                    
TestFunctional/parallel/CertSync (1.85s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1971: Checking for existence of /etc/ssl/certs/475478.pem within VM
functional_test.go:1972: (dbg) Run:  out/minikube-linux-amd64 -p functional-193417 ssh "sudo cat /etc/ssl/certs/475478.pem"
functional_test.go:1971: Checking for existence of /usr/share/ca-certificates/475478.pem within VM
functional_test.go:1972: (dbg) Run:  out/minikube-linux-amd64 -p functional-193417 ssh "sudo cat /usr/share/ca-certificates/475478.pem"
functional_test.go:1971: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1972: (dbg) Run:  out/minikube-linux-amd64 -p functional-193417 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1998: Checking for existence of /etc/ssl/certs/4754782.pem within VM
functional_test.go:1999: (dbg) Run:  out/minikube-linux-amd64 -p functional-193417 ssh "sudo cat /etc/ssl/certs/4754782.pem"
functional_test.go:1998: Checking for existence of /usr/share/ca-certificates/4754782.pem within VM
functional_test.go:1999: (dbg) Run:  out/minikube-linux-amd64 -p functional-193417 ssh "sudo cat /usr/share/ca-certificates/4754782.pem"
functional_test.go:1998: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1999: (dbg) Run:  out/minikube-linux-amd64 -p functional-193417 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.85s)
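
CertSync verifies the host's extra CA certificate shows up inside the VM at the three paths cat'ed above. A sketch that checks each of those paths over minikube ssh and confirms it looks like PEM; the paths are copied from the log, and the PEM check here is only a loose sanity test, not the suite's actual content comparison.

	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	func main() {
		// The three locations the test checks for the synced 475478 certificate.
		paths := []string{
			"/etc/ssl/certs/475478.pem",
			"/usr/share/ca-certificates/475478.pem",
			"/etc/ssl/certs/51391683.0",
		}
		for _, p := range paths {
			out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-193417",
				"ssh", "sudo cat "+p).Output()
			if err != nil {
				log.Fatalf("%s is missing in the VM: %v", p, err)
			}
			if !strings.Contains(string(out), "BEGIN CERTIFICATE") {
				log.Fatalf("%s does not look like a PEM certificate", p)
			}
			fmt.Printf("%s: %d bytes, looks like PEM\n", p, len(out))
		}
	}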

                                                
                                    
TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-193417 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)
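
The NodeLabels check passes kubectl a go-template that ranges over the first node's label map and prints each key. The same range-over-map construct in a standalone text/template example; the label values below are illustrative, not read from a real node.

	package main

	import (
		"log"
		"os"
		"text/template"
	)

	func main() {
		// The same template expression the test passes to kubectl, minus the
		// `(index .items 0).metadata.labels` navigation, applied to a plain map.
		const tmpl = "{{range $k, $v := .}}{{$k}} {{end}}\n"

		labels := map[string]string{ // illustrative labels only
			"kubernetes.io/hostname": "functional-193417",
			"kubernetes.io/os":       "linux",
			"minikube.k8s.io/name":   "functional-193417",
		}

		t, err := template.New("labels").Parse(tmpl)
		if err != nil {
			log.Fatal(err)
		}
		// text/template iterates map keys in sorted order.
		if err := t.Execute(os.Stdout, labels); err != nil {
			log.Fatal(err)
		}
	}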

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2026: (dbg) Run:  out/minikube-linux-amd64 -p functional-193417 ssh "sudo systemctl is-active docker"
E0116 02:47:19.404452  475478 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/addons-690916/client.crt: no such file or directory
E0116 02:47:19.564921  475478 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/addons-690916/client.crt: no such file or directory
functional_test.go:2026: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-193417 ssh "sudo systemctl is-active docker": exit status 1 (242.480783ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2026: (dbg) Run:  out/minikube-linux-amd64 -p functional-193417 ssh "sudo systemctl is-active containerd"
functional_test.go:2026: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-193417 ssh "sudo systemctl is-active containerd": exit status 1 (297.353175ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.54s)

                                                
                                    
TestFunctional/parallel/License (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2287: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.24s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (14.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1438: (dbg) Run:  kubectl --context functional-193417 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-193417 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-rnngd" [fb729913-0682-4289-9bd0-826d3347c211] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-rnngd" [fb729913-0682-4289-9bd0-826d3347c211] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 14.004577252s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (14.23s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.51s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (15.22s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-193417 /tmp/TestFunctionalparallelMountCmdany-port1052402991/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1705373209438721039" to /tmp/TestFunctionalparallelMountCmdany-port1052402991/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1705373209438721039" to /tmp/TestFunctionalparallelMountCmdany-port1052402991/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1705373209438721039" to /tmp/TestFunctionalparallelMountCmdany-port1052402991/001/test-1705373209438721039
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-193417 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-193417 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (276.573768ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-193417 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-193417 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jan 16 02:46 created-by-test
-rw-r--r-- 1 docker docker 24 Jan 16 02:46 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jan 16 02:46 test-1705373209438721039
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-193417 ssh cat /mount-9p/test-1705373209438721039
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-193417 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [51dcbad9-e2b4-4d1b-8abc-eef51b43266f] Pending
helpers_test.go:344: "busybox-mount" [51dcbad9-e2b4-4d1b-8abc-eef51b43266f] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [51dcbad9-e2b4-4d1b-8abc-eef51b43266f] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [51dcbad9-e2b4-4d1b-8abc-eef51b43266f] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 12.00767158s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-193417 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-193417 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-193417 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-193417 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-193417 /tmp/TestFunctionalparallelMountCmdany-port1052402991/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (15.22s)
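The any-port mount flow above boils down to: start the 9p server on the host, verify the guest sees it, then unmount. A minimal sketch against the same profile, assuming a scratch host directory /tmp/mount-src (hypothetical path):

    # run the mount in the background, check it from inside the guest, then tear it down
    out/minikube-linux-amd64 mount -p functional-193417 /tmp/mount-src:/mount-9p --alsologtostderr -v=1 &
    MOUNT_PID=$!
    out/minikube-linux-amd64 -p functional-193417 ssh "findmnt -T /mount-9p | grep 9p"   # may need a retry while the mount comes up
    out/minikube-linux-amd64 -p functional-193417 ssh "sudo umount -f /mount-9p"
    kill "$MOUNT_PID"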

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.33s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1314: Took "261.524636ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1328: Took "65.973144ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.33s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.33s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1365: Took "254.747462ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1378: Took "71.564704ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.33s)
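Both profile listings read the same data; the --light variant skips the per-cluster status probe, which is why it returns in roughly 70ms versus roughly 250ms above. A sketch, assuming jq is available on the host and that the JSON keeps the valid/invalid arrays present in current minikube releases:

    # full listing (probes each cluster) vs. light listing (config only)
    out/minikube-linux-amd64 profile list -o json | jq -r '.valid[].Name'
    out/minikube-linux-amd64 profile list -o json --light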

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.13s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2118: (dbg) Run:  out/minikube-linux-amd64 -p functional-193417 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.13s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2118: (dbg) Run:  out/minikube-linux-amd64 -p functional-193417 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.14s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2118: (dbg) Run:  out/minikube-linux-amd64 -p functional-193417 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.14s)
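update-context rewrites the kubeconfig entry for the profile so kubectl keeps working if the VM's address changed; a minimal sketch against the same profile:

    # refresh the kubeconfig entry, then confirm kubectl can still reach the cluster
    out/minikube-linux-amd64 -p functional-193417 update-context --alsologtostderr -v=2
    kubectl --context functional-193417 get nodes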

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.9s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-linux-amd64 -p functional-193417 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.90s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.93s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-linux-amd64 -p functional-193417 service list -o json
functional_test.go:1493: Took "924.916641ms" to run "out/minikube-linux-amd64 -p functional-193417 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.93s)
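service list has both a table view and a JSON view. A sketch, assuming jq is available and that each JSON entry carries a Name field as in current minikube releases:

    # human-readable table vs. machine-readable JSON
    out/minikube-linux-amd64 -p functional-193417 service list
    out/minikube-linux-amd64 -p functional-193417 service list -o json | jq -r '.[].Name'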

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (2.23s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-193417 /tmp/TestFunctionalparallelMountCmdspecific-port1220536284/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-193417 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-193417 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (260.339921ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-193417 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-193417 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-193417 /tmp/TestFunctionalparallelMountCmdspecific-port1220536284/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-193417 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-193417 ssh "sudo umount -f /mount-9p": exit status 1 (263.421962ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-193417 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-193417 /tmp/TestFunctionalparallelMountCmdspecific-port1220536284/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.23s)
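The specific-port variant is the same 9p flow pinned to host port 46464; note from the output above that umount -f on an already-unmounted path exits with status 32, which the test tolerates during cleanup. A minimal sketch (the /tmp/mount-src path is hypothetical):

    # pin the 9p server to a fixed port, useful when only known ports are reachable
    out/minikube-linux-amd64 mount -p functional-193417 /tmp/mount-src:/mount-9p --port 46464 --alsologtostderr -v=1 &
    out/minikube-linux-amd64 -p functional-193417 ssh "findmnt -T /mount-9p | grep 9p"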

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.36s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-linux-amd64 -p functional-193417 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.50.41:31501
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.36s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.46s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-linux-amd64 -p functional-193417 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.46s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.62s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-linux-amd64 -p functional-193417 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.50.41:31501
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.62s)
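The three ServiceCmd checks above resolve the same NodePort endpoint in different forms. A sketch that also exercises the endpoint, assuming curl is available on the host:

    # plain URL, HTTPS form, and a quick request against the echo server
    URL=$(out/minikube-linux-amd64 -p functional-193417 service hello-node --url)
    out/minikube-linux-amd64 -p functional-193417 service --namespace=default --https --url hello-node
    curl -s "$URL" | head -n 5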

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.72s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-193417 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4035302076/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-193417 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4035302076/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-193417 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4035302076/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-193417 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-193417 ssh "findmnt -T" /mount1: exit status 1 (341.573678ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-193417 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-193417 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-193417 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-193417 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-193417 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4035302076/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-193417 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4035302076/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-193417 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4035302076/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.72s)
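VerifyCleanup relies on the --kill flag, which terminates every mount helper started for the profile in one go; a minimal sketch:

    # tear down all outstanding mounts for the profile, then confirm a guest path is gone
    out/minikube-linux-amd64 mount -p functional-193417 --kill=true
    out/minikube-linux-amd64 -p functional-193417 ssh "findmnt -T /mount1" || echo "mount1 is gone"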

                                                
                                    
TestFunctional/parallel/Version/short (0.09s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2255: (dbg) Run:  out/minikube-linux-amd64 -p functional-193417 version --short
--- PASS: TestFunctional/parallel/Version/short (0.09s)

                                                
                                    
TestFunctional/parallel/Version/components (0.91s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2269: (dbg) Run:  out/minikube-linux-amd64 -p functional-193417 version -o=json --components
E0116 02:47:24.367684  475478 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/addons-690916/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/Version/components (0.91s)
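version --short prints only the minikube client version, while -o=json --components also reports the versions of the bundled components as JSON; both forms as run above:

    # client version vs. per-component versions reported as JSON
    out/minikube-linux-amd64 -p functional-193417 version --short
    out/minikube-linux-amd64 -p functional-193417 version -o=json --components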

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-193417 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-193417 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.4
registry.k8s.io/kube-proxy:v1.28.4
registry.k8s.io/kube-controller-manager:v1.28.4
registry.k8s.io/kube-apiserver:v1.28.4
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
localhost/minikube-local-cache-test:functional-193417
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-193417
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20230809-80a64d96
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-193417 image ls --format short --alsologtostderr:
I0116 02:47:38.935658  483870 out.go:296] Setting OutFile to fd 1 ...
I0116 02:47:38.935777  483870 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0116 02:47:38.935785  483870 out.go:309] Setting ErrFile to fd 2...
I0116 02:47:38.935790  483870 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0116 02:47:38.936001  483870 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17965-468241/.minikube/bin
I0116 02:47:38.936659  483870 config.go:182] Loaded profile config "functional-193417": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0116 02:47:38.936780  483870 config.go:182] Loaded profile config "functional-193417": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0116 02:47:38.937212  483870 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0116 02:47:38.937274  483870 main.go:141] libmachine: Launching plugin server for driver kvm2
I0116 02:47:38.952181  483870 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45907
I0116 02:47:38.952836  483870 main.go:141] libmachine: () Calling .GetVersion
I0116 02:47:38.953578  483870 main.go:141] libmachine: Using API Version  1
I0116 02:47:38.953612  483870 main.go:141] libmachine: () Calling .SetConfigRaw
I0116 02:47:38.954006  483870 main.go:141] libmachine: () Calling .GetMachineName
I0116 02:47:38.954172  483870 main.go:141] libmachine: (functional-193417) Calling .GetState
I0116 02:47:38.955946  483870 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0116 02:47:38.956086  483870 main.go:141] libmachine: Launching plugin server for driver kvm2
I0116 02:47:38.969746  483870 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40063
I0116 02:47:38.970367  483870 main.go:141] libmachine: () Calling .GetVersion
I0116 02:47:38.971071  483870 main.go:141] libmachine: Using API Version  1
I0116 02:47:38.971087  483870 main.go:141] libmachine: () Calling .SetConfigRaw
I0116 02:47:38.971399  483870 main.go:141] libmachine: () Calling .GetMachineName
I0116 02:47:38.971578  483870 main.go:141] libmachine: (functional-193417) Calling .DriverName
I0116 02:47:38.971767  483870 ssh_runner.go:195] Run: systemctl --version
I0116 02:47:38.971800  483870 main.go:141] libmachine: (functional-193417) Calling .GetSSHHostname
I0116 02:47:38.975257  483870 main.go:141] libmachine: (functional-193417) DBG | domain functional-193417 has defined MAC address 52:54:00:30:e3:4c in network mk-functional-193417
I0116 02:47:38.975611  483870 main.go:141] libmachine: (functional-193417) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:e3:4c", ip: ""} in network mk-functional-193417: {Iface:virbr1 ExpiryTime:2024-01-16 03:44:24 +0000 UTC Type:0 Mac:52:54:00:30:e3:4c Iaid: IPaddr:192.168.50.41 Prefix:24 Hostname:functional-193417 Clientid:01:52:54:00:30:e3:4c}
I0116 02:47:38.975636  483870 main.go:141] libmachine: (functional-193417) DBG | domain functional-193417 has defined IP address 192.168.50.41 and MAC address 52:54:00:30:e3:4c in network mk-functional-193417
I0116 02:47:38.975799  483870 main.go:141] libmachine: (functional-193417) Calling .GetSSHPort
I0116 02:47:38.976004  483870 main.go:141] libmachine: (functional-193417) Calling .GetSSHKeyPath
I0116 02:47:38.976204  483870 main.go:141] libmachine: (functional-193417) Calling .GetSSHUsername
I0116 02:47:38.976363  483870 sshutil.go:53] new ssh client: &{IP:192.168.50.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/functional-193417/id_rsa Username:docker}
I0116 02:47:39.072106  483870 ssh_runner.go:195] Run: sudo crictl images --output json
I0116 02:47:39.155560  483870 main.go:141] libmachine: Making call to close driver server
I0116 02:47:39.155582  483870 main.go:141] libmachine: (functional-193417) Calling .Close
I0116 02:47:39.155922  483870 main.go:141] libmachine: Successfully made call to close driver server
I0116 02:47:39.155943  483870 main.go:141] libmachine: Making call to close connection to plugin binary
I0116 02:47:39.155960  483870 main.go:141] libmachine: Making call to close driver server
I0116 02:47:39.155973  483870 main.go:141] libmachine: (functional-193417) Calling .Close
I0116 02:47:39.156291  483870 main.go:141] libmachine: Successfully made call to close driver server
I0116 02:47:39.156306  483870 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.30s)
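As the stderr above shows, image ls is backed by crictl images --output json on the node; the same inventory can be rendered in each supported format or read directly from CRI-O:

    # same image inventory, different renderings
    out/minikube-linux-amd64 -p functional-193417 image ls --format short
    out/minikube-linux-amd64 -p functional-193417 image ls --format table
    out/minikube-linux-amd64 -p functional-193417 ssh "sudo crictl images --output json"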

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.32s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-193417 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-193417 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/coredns/coredns         | v1.10.1            | ead0a4a53df89 | 53.6MB |
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| docker.io/library/nginx                 | latest             | a8758716bb6aa | 191MB  |
| gcr.io/google-containers/addon-resizer  | functional-193417  | ffd4cfbbe753e | 34.1MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/kube-apiserver          | v1.28.4            | 7fe0e6f37db33 | 127MB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| docker.io/kindest/kindnetd              | v20230809-80a64d96 | c7d1297425461 | 65.3MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| localhost/minikube-local-cache-test     | functional-193417  | f6c65fcd59fb2 | 3.35kB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/kube-controller-manager | v1.28.4            | d058aa5ab969c | 123MB  |
| registry.k8s.io/kube-proxy              | v1.28.4            | 83f6cc407eed8 | 74.7MB |
| registry.k8s.io/kube-scheduler          | v1.28.4            | e3db313c6dbc0 | 61.6MB |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| registry.k8s.io/pause                   | 3.9                | e6f1816883972 | 750kB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| registry.k8s.io/etcd                    | 3.5.9-0            | 73deb9a3f7025 | 295MB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-193417 image ls --format table --alsologtostderr:
I0116 02:47:39.253829  483965 out.go:296] Setting OutFile to fd 1 ...
I0116 02:47:39.254003  483965 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0116 02:47:39.254017  483965 out.go:309] Setting ErrFile to fd 2...
I0116 02:47:39.254025  483965 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0116 02:47:39.254248  483965 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17965-468241/.minikube/bin
I0116 02:47:39.254878  483965 config.go:182] Loaded profile config "functional-193417": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0116 02:47:39.254995  483965 config.go:182] Loaded profile config "functional-193417": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0116 02:47:39.255373  483965 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0116 02:47:39.255439  483965 main.go:141] libmachine: Launching plugin server for driver kvm2
I0116 02:47:39.270926  483965 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45973
I0116 02:47:39.271478  483965 main.go:141] libmachine: () Calling .GetVersion
I0116 02:47:39.272092  483965 main.go:141] libmachine: Using API Version  1
I0116 02:47:39.272123  483965 main.go:141] libmachine: () Calling .SetConfigRaw
I0116 02:47:39.272591  483965 main.go:141] libmachine: () Calling .GetMachineName
I0116 02:47:39.272854  483965 main.go:141] libmachine: (functional-193417) Calling .GetState
I0116 02:47:39.275112  483965 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0116 02:47:39.275182  483965 main.go:141] libmachine: Launching plugin server for driver kvm2
I0116 02:47:39.291329  483965 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44865
I0116 02:47:39.291782  483965 main.go:141] libmachine: () Calling .GetVersion
I0116 02:47:39.292312  483965 main.go:141] libmachine: Using API Version  1
I0116 02:47:39.292337  483965 main.go:141] libmachine: () Calling .SetConfigRaw
I0116 02:47:39.292746  483965 main.go:141] libmachine: () Calling .GetMachineName
I0116 02:47:39.292961  483965 main.go:141] libmachine: (functional-193417) Calling .DriverName
I0116 02:47:39.293204  483965 ssh_runner.go:195] Run: systemctl --version
I0116 02:47:39.293242  483965 main.go:141] libmachine: (functional-193417) Calling .GetSSHHostname
I0116 02:47:39.296222  483965 main.go:141] libmachine: (functional-193417) DBG | domain functional-193417 has defined MAC address 52:54:00:30:e3:4c in network mk-functional-193417
I0116 02:47:39.296653  483965 main.go:141] libmachine: (functional-193417) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:e3:4c", ip: ""} in network mk-functional-193417: {Iface:virbr1 ExpiryTime:2024-01-16 03:44:24 +0000 UTC Type:0 Mac:52:54:00:30:e3:4c Iaid: IPaddr:192.168.50.41 Prefix:24 Hostname:functional-193417 Clientid:01:52:54:00:30:e3:4c}
I0116 02:47:39.296686  483965 main.go:141] libmachine: (functional-193417) DBG | domain functional-193417 has defined IP address 192.168.50.41 and MAC address 52:54:00:30:e3:4c in network mk-functional-193417
I0116 02:47:39.296829  483965 main.go:141] libmachine: (functional-193417) Calling .GetSSHPort
I0116 02:47:39.297066  483965 main.go:141] libmachine: (functional-193417) Calling .GetSSHKeyPath
I0116 02:47:39.297226  483965 main.go:141] libmachine: (functional-193417) Calling .GetSSHUsername
I0116 02:47:39.297415  483965 sshutil.go:53] new ssh client: &{IP:192.168.50.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/functional-193417/id_rsa Username:docker}
I0116 02:47:39.398871  483965 ssh_runner.go:195] Run: sudo crictl images --output json
I0116 02:47:39.487770  483965 main.go:141] libmachine: Making call to close driver server
I0116 02:47:39.487806  483965 main.go:141] libmachine: (functional-193417) Calling .Close
I0116 02:47:39.488174  483965 main.go:141] libmachine: Successfully made call to close driver server
I0116 02:47:39.488200  483965 main.go:141] libmachine: Making call to close connection to plugin binary
I0116 02:47:39.488203  483965 main.go:141] libmachine: (functional-193417) DBG | Closing plugin on server side
I0116 02:47:39.488216  483965 main.go:141] libmachine: Making call to close driver server
I0116 02:47:39.488227  483965 main.go:141] libmachine: (functional-193417) Calling .Close
I0116 02:47:39.488488  483965 main.go:141] libmachine: Successfully made call to close driver server
I0116 02:47:39.488505  483965 main.go:141] libmachine: Making call to close connection to plugin binary
I0116 02:47:39.488519  483965 main.go:141] libmachine: (functional-193417) DBG | Closing plugin on server side
E0116 02:47:39.729414  475478 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/addons-690916/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.32s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-193417 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-193417 image ls --format json --alsologtostderr:
[{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097","registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"750414"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":
["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257","repoDigests":["registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499","registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.4"],"size":"127226832"},{"id":"e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1","repoDigests":["registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba","registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32"],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.4"],"size":"61551410"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io
/pause:3.1"],"size":"746911"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc","repoDigests":["docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052","docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"],"repoTags":["docker.io/kindest/kindnetd:v20230809-80a64d96"],"size":"65258016"},{"id":"f6c65fcd59fb2779dad9828f201fae2c4fd5880d4e6561a355e6e6c949038780","repoDigests":["localhost/minikube-local-cache-test@sha256:83
89a862b999a594459a5360b445d32de018a039764e4948cbf79f48d70e449e"],"repoTags":["localhost/minikube-local-cache-test:functional-193417"],"size":"3345"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","repoDigests":["registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15","registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"295456551"},{"id":"a8758716bb6aa4d90071160d27028fe4eaee7ce8166221a97d30440c8eac2be6","repoDigests":["docker.io/library/nginx@sha256:161ef4b1bf7effb350a2a9625cb2b59f69d54ec6059a8a155a1438d0439c593c","docker.io/library/nginx@sha256:4c0fdaa8b6341bfdeca5f18f7837462c80cff90527ee35ef185571e1c32
7beac"],"repoTags":["docker.io/library/nginx:latest"],"size":"190867606"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":["gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126"],"repoTags":["gcr.io/google-containers/addon-resizer:functional-193417"],"size":"34114467"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e","registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd5543
2d83a378"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"53621675"},{"id":"d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c","registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.4"],"size":"123261750"},{"id":"83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e","repoDigests":["registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e","registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"],"repoTags":["registry.k8s.io/kube-proxy:v1.28.4"],"size":"74749335"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f58
8b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-193417 image ls --format json --alsologtostderr:
I0116 02:47:39.238829  483956 out.go:296] Setting OutFile to fd 1 ...
I0116 02:47:39.239113  483956 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0116 02:47:39.239125  483956 out.go:309] Setting ErrFile to fd 2...
I0116 02:47:39.239133  483956 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0116 02:47:39.239350  483956 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17965-468241/.minikube/bin
I0116 02:47:39.240008  483956 config.go:182] Loaded profile config "functional-193417": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0116 02:47:39.240164  483956 config.go:182] Loaded profile config "functional-193417": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0116 02:47:39.240657  483956 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0116 02:47:39.240717  483956 main.go:141] libmachine: Launching plugin server for driver kvm2
I0116 02:47:39.258479  483956 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33453
I0116 02:47:39.258974  483956 main.go:141] libmachine: () Calling .GetVersion
I0116 02:47:39.259646  483956 main.go:141] libmachine: Using API Version  1
I0116 02:47:39.259673  483956 main.go:141] libmachine: () Calling .SetConfigRaw
I0116 02:47:39.260008  483956 main.go:141] libmachine: () Calling .GetMachineName
I0116 02:47:39.260241  483956 main.go:141] libmachine: (functional-193417) Calling .GetState
I0116 02:47:39.262282  483956 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0116 02:47:39.262336  483956 main.go:141] libmachine: Launching plugin server for driver kvm2
I0116 02:47:39.278212  483956 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45659
I0116 02:47:39.278818  483956 main.go:141] libmachine: () Calling .GetVersion
I0116 02:47:39.279423  483956 main.go:141] libmachine: Using API Version  1
I0116 02:47:39.279455  483956 main.go:141] libmachine: () Calling .SetConfigRaw
I0116 02:47:39.280030  483956 main.go:141] libmachine: () Calling .GetMachineName
I0116 02:47:39.280249  483956 main.go:141] libmachine: (functional-193417) Calling .DriverName
I0116 02:47:39.280492  483956 ssh_runner.go:195] Run: systemctl --version
I0116 02:47:39.280532  483956 main.go:141] libmachine: (functional-193417) Calling .GetSSHHostname
I0116 02:47:39.284076  483956 main.go:141] libmachine: (functional-193417) DBG | domain functional-193417 has defined MAC address 52:54:00:30:e3:4c in network mk-functional-193417
I0116 02:47:39.284524  483956 main.go:141] libmachine: (functional-193417) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:e3:4c", ip: ""} in network mk-functional-193417: {Iface:virbr1 ExpiryTime:2024-01-16 03:44:24 +0000 UTC Type:0 Mac:52:54:00:30:e3:4c Iaid: IPaddr:192.168.50.41 Prefix:24 Hostname:functional-193417 Clientid:01:52:54:00:30:e3:4c}
I0116 02:47:39.284544  483956 main.go:141] libmachine: (functional-193417) DBG | domain functional-193417 has defined IP address 192.168.50.41 and MAC address 52:54:00:30:e3:4c in network mk-functional-193417
I0116 02:47:39.284744  483956 main.go:141] libmachine: (functional-193417) Calling .GetSSHPort
I0116 02:47:39.284990  483956 main.go:141] libmachine: (functional-193417) Calling .GetSSHKeyPath
I0116 02:47:39.285165  483956 main.go:141] libmachine: (functional-193417) Calling .GetSSHUsername
I0116 02:47:39.285321  483956 sshutil.go:53] new ssh client: &{IP:192.168.50.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/functional-193417/id_rsa Username:docker}
I0116 02:47:39.382906  483956 ssh_runner.go:195] Run: sudo crictl images --output json
I0116 02:47:39.436162  483956 main.go:141] libmachine: Making call to close driver server
I0116 02:47:39.436180  483956 main.go:141] libmachine: (functional-193417) Calling .Close
I0116 02:47:39.436695  483956 main.go:141] libmachine: (functional-193417) DBG | Closing plugin on server side
I0116 02:47:39.436704  483956 main.go:141] libmachine: Successfully made call to close driver server
I0116 02:47:39.436722  483956 main.go:141] libmachine: Making call to close connection to plugin binary
I0116 02:47:39.436740  483956 main.go:141] libmachine: Making call to close driver server
I0116 02:47:39.436749  483956 main.go:141] libmachine: (functional-193417) Calling .Close
I0116 02:47:39.437116  483956 main.go:141] libmachine: (functional-193417) DBG | Closing plugin on server side
I0116 02:47:39.437116  483956 main.go:141] libmachine: Successfully made call to close driver server
I0116 02:47:39.437144  483956 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.32s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-193417 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-193417 image ls --format yaml --alsologtostderr:
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e
repoDigests:
- registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e
- registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532
repoTags:
- registry.k8s.io/kube-proxy:v1.28.4
size: "74749335"
- id: e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba
- registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.4
size: "61551410"
- id: 7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499
- registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.4
size: "127226832"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc
repoDigests:
- docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052
- docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4
repoTags:
- docker.io/kindest/kindnetd:v20230809-80a64d96
size: "65258016"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: f6c65fcd59fb2779dad9828f201fae2c4fd5880d4e6561a355e6e6c949038780
repoDigests:
- localhost/minikube-local-cache-test@sha256:8389a862b999a594459a5360b445d32de018a039764e4948cbf79f48d70e449e
repoTags:
- localhost/minikube-local-cache-test:functional-193417
size: "3345"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
- registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10
repoTags:
- registry.k8s.io/pause:3.9
size: "750414"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9
repoDigests:
- registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15
- registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "295456551"
- id: d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c
- registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.4
size: "123261750"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: a8758716bb6aa4d90071160d27028fe4eaee7ce8166221a97d30440c8eac2be6
repoDigests:
- docker.io/library/nginx@sha256:161ef4b1bf7effb350a2a9625cb2b59f69d54ec6059a8a155a1438d0439c593c
- docker.io/library/nginx@sha256:4c0fdaa8b6341bfdeca5f18f7837462c80cff90527ee35ef185571e1c327beac
repoTags:
- docker.io/library/nginx:latest
size: "190867606"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests:
- gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126
repoTags:
- gcr.io/google-containers/addon-resizer:functional-193417
size: "34114467"
- id: ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
- registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "53621675"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-193417 image ls --format yaml --alsologtostderr:
I0116 02:47:38.933422  483868 out.go:296] Setting OutFile to fd 1 ...
I0116 02:47:38.933537  483868 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0116 02:47:38.933546  483868 out.go:309] Setting ErrFile to fd 2...
I0116 02:47:38.933551  483868 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0116 02:47:38.933753  483868 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17965-468241/.minikube/bin
I0116 02:47:38.934376  483868 config.go:182] Loaded profile config "functional-193417": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0116 02:47:38.934493  483868 config.go:182] Loaded profile config "functional-193417": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0116 02:47:38.934949  483868 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0116 02:47:38.935009  483868 main.go:141] libmachine: Launching plugin server for driver kvm2
I0116 02:47:38.950032  483868 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34845
I0116 02:47:38.950532  483868 main.go:141] libmachine: () Calling .GetVersion
I0116 02:47:38.951130  483868 main.go:141] libmachine: Using API Version  1
I0116 02:47:38.951154  483868 main.go:141] libmachine: () Calling .SetConfigRaw
I0116 02:47:38.951601  483868 main.go:141] libmachine: () Calling .GetMachineName
I0116 02:47:38.951840  483868 main.go:141] libmachine: (functional-193417) Calling .GetState
I0116 02:47:38.953955  483868 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0116 02:47:38.954022  483868 main.go:141] libmachine: Launching plugin server for driver kvm2
I0116 02:47:38.970206  483868 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45837
I0116 02:47:38.970655  483868 main.go:141] libmachine: () Calling .GetVersion
I0116 02:47:38.971163  483868 main.go:141] libmachine: Using API Version  1
I0116 02:47:38.971190  483868 main.go:141] libmachine: () Calling .SetConfigRaw
I0116 02:47:38.971516  483868 main.go:141] libmachine: () Calling .GetMachineName
I0116 02:47:38.971830  483868 main.go:141] libmachine: (functional-193417) Calling .DriverName
I0116 02:47:38.972017  483868 ssh_runner.go:195] Run: systemctl --version
I0116 02:47:38.972056  483868 main.go:141] libmachine: (functional-193417) Calling .GetSSHHostname
I0116 02:47:38.975948  483868 main.go:141] libmachine: (functional-193417) DBG | domain functional-193417 has defined MAC address 52:54:00:30:e3:4c in network mk-functional-193417
I0116 02:47:38.976371  483868 main.go:141] libmachine: (functional-193417) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:e3:4c", ip: ""} in network mk-functional-193417: {Iface:virbr1 ExpiryTime:2024-01-16 03:44:24 +0000 UTC Type:0 Mac:52:54:00:30:e3:4c Iaid: IPaddr:192.168.50.41 Prefix:24 Hostname:functional-193417 Clientid:01:52:54:00:30:e3:4c}
I0116 02:47:38.976404  483868 main.go:141] libmachine: (functional-193417) DBG | domain functional-193417 has defined IP address 192.168.50.41 and MAC address 52:54:00:30:e3:4c in network mk-functional-193417
I0116 02:47:38.976633  483868 main.go:141] libmachine: (functional-193417) Calling .GetSSHPort
I0116 02:47:38.976838  483868 main.go:141] libmachine: (functional-193417) Calling .GetSSHKeyPath
I0116 02:47:38.976988  483868 main.go:141] libmachine: (functional-193417) Calling .GetSSHUsername
I0116 02:47:38.977116  483868 sshutil.go:53] new ssh client: &{IP:192.168.50.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/functional-193417/id_rsa Username:docker}
I0116 02:47:39.091004  483868 ssh_runner.go:195] Run: sudo crictl images --output json
I0116 02:47:39.172722  483868 main.go:141] libmachine: Making call to close driver server
I0116 02:47:39.172737  483868 main.go:141] libmachine: (functional-193417) Calling .Close
I0116 02:47:39.173045  483868 main.go:141] libmachine: Successfully made call to close driver server
I0116 02:47:39.173076  483868 main.go:141] libmachine: Making call to close connection to plugin binary
I0116 02:47:39.173093  483868 main.go:141] libmachine: Making call to close driver server
I0116 02:47:39.173104  483868 main.go:141] libmachine: (functional-193417) Calling .Close
I0116 02:47:39.173378  483868 main.go:141] libmachine: Successfully made call to close driver server
I0116 02:47:39.173398  483868 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.32s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.03s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-193417 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-193417 ssh pgrep buildkitd: exit status 1 (242.340693ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-193417 image build -t localhost/my-image:functional-193417 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-193417 image build -t localhost/my-image:functional-193417 testdata/build --alsologtostderr: (2.526095634s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-193417 image build -t localhost/my-image:functional-193417 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> e85549a8ee9
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-193417
--> 407bb899024
Successfully tagged localhost/my-image:functional-193417
407bb899024512fd338eaa5cee74de19bfc50e554040701b85d1d00eb30138cc
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-193417 image build -t localhost/my-image:functional-193417 testdata/build --alsologtostderr:
I0116 02:47:39.165957  483944 out.go:296] Setting OutFile to fd 1 ...
I0116 02:47:39.166104  483944 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0116 02:47:39.166114  483944 out.go:309] Setting ErrFile to fd 2...
I0116 02:47:39.166119  483944 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0116 02:47:39.166347  483944 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17965-468241/.minikube/bin
I0116 02:47:39.167001  483944 config.go:182] Loaded profile config "functional-193417": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0116 02:47:39.167620  483944 config.go:182] Loaded profile config "functional-193417": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0116 02:47:39.168277  483944 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0116 02:47:39.168342  483944 main.go:141] libmachine: Launching plugin server for driver kvm2
I0116 02:47:39.187975  483944 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41261
I0116 02:47:39.188504  483944 main.go:141] libmachine: () Calling .GetVersion
I0116 02:47:39.189227  483944 main.go:141] libmachine: Using API Version  1
I0116 02:47:39.189274  483944 main.go:141] libmachine: () Calling .SetConfigRaw
I0116 02:47:39.189733  483944 main.go:141] libmachine: () Calling .GetMachineName
I0116 02:47:39.189934  483944 main.go:141] libmachine: (functional-193417) Calling .GetState
I0116 02:47:39.192167  483944 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0116 02:47:39.192225  483944 main.go:141] libmachine: Launching plugin server for driver kvm2
I0116 02:47:39.208329  483944 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40295
I0116 02:47:39.208839  483944 main.go:141] libmachine: () Calling .GetVersion
I0116 02:47:39.209406  483944 main.go:141] libmachine: Using API Version  1
I0116 02:47:39.209428  483944 main.go:141] libmachine: () Calling .SetConfigRaw
I0116 02:47:39.209858  483944 main.go:141] libmachine: () Calling .GetMachineName
I0116 02:47:39.210331  483944 main.go:141] libmachine: (functional-193417) Calling .DriverName
I0116 02:47:39.210629  483944 ssh_runner.go:195] Run: systemctl --version
I0116 02:47:39.210669  483944 main.go:141] libmachine: (functional-193417) Calling .GetSSHHostname
I0116 02:47:39.215043  483944 main.go:141] libmachine: (functional-193417) DBG | domain functional-193417 has defined MAC address 52:54:00:30:e3:4c in network mk-functional-193417
I0116 02:47:39.215385  483944 main.go:141] libmachine: (functional-193417) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:e3:4c", ip: ""} in network mk-functional-193417: {Iface:virbr1 ExpiryTime:2024-01-16 03:44:24 +0000 UTC Type:0 Mac:52:54:00:30:e3:4c Iaid: IPaddr:192.168.50.41 Prefix:24 Hostname:functional-193417 Clientid:01:52:54:00:30:e3:4c}
I0116 02:47:39.215407  483944 main.go:141] libmachine: (functional-193417) DBG | domain functional-193417 has defined IP address 192.168.50.41 and MAC address 52:54:00:30:e3:4c in network mk-functional-193417
I0116 02:47:39.215711  483944 main.go:141] libmachine: (functional-193417) Calling .GetSSHPort
I0116 02:47:39.216014  483944 main.go:141] libmachine: (functional-193417) Calling .GetSSHKeyPath
I0116 02:47:39.216204  483944 main.go:141] libmachine: (functional-193417) Calling .GetSSHUsername
I0116 02:47:39.216426  483944 sshutil.go:53] new ssh client: &{IP:192.168.50.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/functional-193417/id_rsa Username:docker}
I0116 02:47:39.317358  483944 build_images.go:151] Building image from path: /tmp/build.804404237.tar
I0116 02:47:39.317435  483944 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0116 02:47:39.332021  483944 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.804404237.tar
I0116 02:47:39.338456  483944 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.804404237.tar: stat -c "%s %y" /var/lib/minikube/build/build.804404237.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.804404237.tar': No such file or directory
I0116 02:47:39.338501  483944 ssh_runner.go:362] scp /tmp/build.804404237.tar --> /var/lib/minikube/build/build.804404237.tar (3072 bytes)
I0116 02:47:39.365801  483944 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.804404237
I0116 02:47:39.377684  483944 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.804404237 -xf /var/lib/minikube/build/build.804404237.tar
I0116 02:47:39.391153  483944 crio.go:297] Building image: /var/lib/minikube/build/build.804404237
I0116 02:47:39.391223  483944 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-193417 /var/lib/minikube/build/build.804404237 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0116 02:47:41.601962  483944 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-193417 /var/lib/minikube/build/build.804404237 --cgroup-manager=cgroupfs: (2.210710474s)
I0116 02:47:41.602039  483944 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.804404237
I0116 02:47:41.613090  483944 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.804404237.tar
I0116 02:47:41.622577  483944 build_images.go:207] Built localhost/my-image:functional-193417 from /tmp/build.804404237.tar
I0116 02:47:41.622642  483944 build_images.go:123] succeeded building to: functional-193417
I0116 02:47:41.622650  483944 build_images.go:124] failed building to: 
I0116 02:47:41.622715  483944 main.go:141] libmachine: Making call to close driver server
I0116 02:47:41.622735  483944 main.go:141] libmachine: (functional-193417) Calling .Close
I0116 02:47:41.623067  483944 main.go:141] libmachine: Successfully made call to close driver server
I0116 02:47:41.623092  483944 main.go:141] libmachine: Making call to close connection to plugin binary
I0116 02:47:41.623100  483944 main.go:141] libmachine: (functional-193417) DBG | Closing plugin on server side
I0116 02:47:41.623108  483944 main.go:141] libmachine: Making call to close driver server
I0116 02:47:41.623125  483944 main.go:141] libmachine: (functional-193417) Calling .Close
I0116 02:47:41.623357  483944 main.go:141] libmachine: (functional-193417) DBG | Closing plugin on server side
I0116 02:47:41.623410  483944 main.go:141] libmachine: Successfully made call to close driver server
I0116 02:47:41.623430  483944 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-193417 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.03s)

TestFunctional/parallel/ImageCommands/Setup (1.13s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.105436443s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-193417
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.13s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.46s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-193417 image load --daemon gcr.io/google-containers/addon-resizer:functional-193417 --alsologtostderr
E0116 02:47:21.807450  475478 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/addons-690916/client.crt: no such file or directory
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-193417 image load --daemon gcr.io/google-containers/addon-resizer:functional-193417 --alsologtostderr: (5.177044671s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-193417 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.46s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.56s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-193417 image load --daemon gcr.io/google-containers/addon-resizer:functional-193417 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-193417 image load --daemon gcr.io/google-containers/addon-resizer:functional-193417 --alsologtostderr: (2.299002711s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-193417 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.56s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.18s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
E0116 02:47:29.488187  475478 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/addons-690916/client.crt: no such file or directory
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-193417
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-193417 image load --daemon gcr.io/google-containers/addon-resizer:functional-193417 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-193417 image load --daemon gcr.io/google-containers/addon-resizer:functional-193417 --alsologtostderr: (4.020324897s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-193417 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.18s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-193417 image save gcr.io/google-containers/addon-resizer:functional-193417 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-193417 image save gcr.io/google-containers/addon-resizer:functional-193417 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (1.246557398s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.25s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.58s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-193417 image rm gcr.io/google-containers/addon-resizer:functional-193417 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-193417 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.58s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.55s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-193417 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-193417 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (1.281123849s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-193417 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.55s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-193417
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-193417 image save --daemon gcr.io/google-containers/addon-resizer:functional-193417 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-amd64 -p functional-193417 image save --daemon gcr.io/google-containers/addon-resizer:functional-193417 --alsologtostderr: (1.21439399s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-193417
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.25s)

TestFunctional/delete_addon-resizer_images (0.07s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-193417
--- PASS: TestFunctional/delete_addon-resizer_images (0.07s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-193417
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-193417
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestIngressAddonLegacy/StartLegacyK8sCluster (80.85s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-873808 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
E0116 02:48:00.209719  475478 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/addons-690916/client.crt: no such file or directory
E0116 02:48:41.170481  475478 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/addons-690916/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-873808 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m20.853221602s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (80.85s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (13.58s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-873808 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-873808 addons enable ingress --alsologtostderr -v=5: (13.575626878s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (13.58s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.63s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-873808 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.63s)

TestJSONOutput/start/Command (72.98s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-965440 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
E0116 02:52:09.642626  475478 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/functional-193417/client.crt: no such file or directory
E0116 02:52:19.246662  475478 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/addons-690916/client.crt: no such file or directory
E0116 02:52:30.123811  475478 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/functional-193417/client.crt: no such file or directory
E0116 02:52:46.931654  475478 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/addons-690916/client.crt: no such file or directory
E0116 02:53:11.085570  475478 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/functional-193417/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-965440 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (1m12.983502806s)
--- PASS: TestJSONOutput/start/Command (72.98s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.73s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-965440 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.73s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.7s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-965440 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.70s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (7.12s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-965440 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-965440 --output=json --user=testUser: (7.116235096s)
--- PASS: TestJSONOutput/stop/Command (7.12s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.24s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-385921 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-385921 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (88.73912ms)

-- stdout --
	{"specversion":"1.0","id":"ee8b04b4-f15d-4ba9-b853-384d84cabc5a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-385921] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"0c80f33c-5921-45ea-812b-9ee5f998d2f7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17965"}}
	{"specversion":"1.0","id":"e09180ec-eaed-4109-b753-ea65530ee311","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"f6062550-9f5b-4d6e-83d7-7dcc4d881772","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17965-468241/kubeconfig"}}
	{"specversion":"1.0","id":"a179a8f9-d32f-40e1-810e-781e48a54ac2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17965-468241/.minikube"}}
	{"specversion":"1.0","id":"9bccf7cb-3b58-416f-8a3d-8a632d58047c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"0cc0b23e-ae46-4956-a62f-c1699dbb9d10","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"3b568394-579a-458a-9c26-94bcb62676db","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-385921" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-385921
--- PASS: TestErrorJSONOutput (0.24s)

TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (100.06s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-404446 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-404446 --driver=kvm2  --container-runtime=crio: (48.087112912s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-406801 --driver=kvm2  --container-runtime=crio
E0116 02:54:18.183304  475478 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/ingress-addon-legacy-873808/client.crt: no such file or directory
E0116 02:54:18.188607  475478 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/ingress-addon-legacy-873808/client.crt: no such file or directory
E0116 02:54:18.198897  475478 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/ingress-addon-legacy-873808/client.crt: no such file or directory
E0116 02:54:18.219266  475478 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/ingress-addon-legacy-873808/client.crt: no such file or directory
E0116 02:54:18.259632  475478 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/ingress-addon-legacy-873808/client.crt: no such file or directory
E0116 02:54:18.340119  475478 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/ingress-addon-legacy-873808/client.crt: no such file or directory
E0116 02:54:18.500644  475478 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/ingress-addon-legacy-873808/client.crt: no such file or directory
E0116 02:54:18.821262  475478 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/ingress-addon-legacy-873808/client.crt: no such file or directory
E0116 02:54:19.462425  475478 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/ingress-addon-legacy-873808/client.crt: no such file or directory
E0116 02:54:20.742945  475478 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/ingress-addon-legacy-873808/client.crt: no such file or directory
E0116 02:54:23.303229  475478 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/ingress-addon-legacy-873808/client.crt: no such file or directory
E0116 02:54:28.424410  475478 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/ingress-addon-legacy-873808/client.crt: no such file or directory
E0116 02:54:33.006660  475478 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/functional-193417/client.crt: no such file or directory
E0116 02:54:38.665279  475478 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/ingress-addon-legacy-873808/client.crt: no such file or directory
E0116 02:54:59.146376  475478 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/ingress-addon-legacy-873808/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-406801 --driver=kvm2  --container-runtime=crio: (49.453077775s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-404446
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-406801
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-406801" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-406801
helpers_test.go:175: Cleaning up "first-404446" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-404446
--- PASS: TestMinikubeProfile (100.06s)

TestMountStart/serial/StartWithMountFirst (26.82s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-538527 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-538527 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (25.8191483s)
--- PASS: TestMountStart/serial/StartWithMountFirst (26.82s)

TestMountStart/serial/VerifyMountFirst (0.43s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-538527 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-538527 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.43s)

TestMountStart/serial/StartWithMountSecond (25.73s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-560349 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
E0116 02:55:40.106769  475478 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/ingress-addon-legacy-873808/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-560349 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (24.733500533s)
--- PASS: TestMountStart/serial/StartWithMountSecond (25.73s)

TestMountStart/serial/VerifyMountSecond (0.42s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-560349 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-560349 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.42s)

TestMountStart/serial/DeleteFirst (0.57s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-538527 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.57s)

TestMountStart/serial/VerifyMountPostDelete (0.43s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-560349 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-560349 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.43s)

TestMountStart/serial/Stop (1.11s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-560349
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-560349: (1.11216625s)
--- PASS: TestMountStart/serial/Stop (1.11s)

TestMountStart/serial/RestartStopped (21.05s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-560349
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-560349: (20.046582278s)
--- PASS: TestMountStart/serial/RestartStopped (21.05s)

TestMountStart/serial/VerifyMountPostStop (0.43s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-560349 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-560349 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.43s)

TestMultiNode/serial/FreshStart2Nodes (106.77s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:86: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-405494 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0116 02:56:49.160888  475478 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/functional-193417/client.crt: no such file or directory
E0116 02:57:02.027684  475478 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/ingress-addon-legacy-873808/client.crt: no such file or directory
E0116 02:57:16.846868  475478 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/functional-193417/client.crt: no such file or directory
E0116 02:57:19.246467  475478 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/addons-690916/client.crt: no such file or directory
multinode_test.go:86: (dbg) Done: out/minikube-linux-amd64 start -p multinode-405494 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m46.332921694s)
multinode_test.go:92: (dbg) Run:  out/minikube-linux-amd64 -p multinode-405494 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (106.77s)

TestMultiNode/serial/DeployApp2Nodes (4.75s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:509: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-405494 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:514: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-405494 -- rollout status deployment/busybox
multinode_test.go:514: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-405494 -- rollout status deployment/busybox: (2.781565606s)
multinode_test.go:521: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-405494 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:544: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-405494 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-405494 -- exec busybox-5bc68d56bd-pkhcp -- nslookup kubernetes.io
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-405494 -- exec busybox-5bc68d56bd-r9bv6 -- nslookup kubernetes.io
multinode_test.go:562: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-405494 -- exec busybox-5bc68d56bd-pkhcp -- nslookup kubernetes.default
multinode_test.go:562: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-405494 -- exec busybox-5bc68d56bd-r9bv6 -- nslookup kubernetes.default
multinode_test.go:570: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-405494 -- exec busybox-5bc68d56bd-pkhcp -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:570: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-405494 -- exec busybox-5bc68d56bd-r9bv6 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.75s)

TestMultiNode/serial/AddNode (41.47s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:111: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-405494 -v 3 --alsologtostderr
multinode_test.go:111: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-405494 -v 3 --alsologtostderr: (40.848469583s)
multinode_test.go:117: (dbg) Run:  out/minikube-linux-amd64 -p multinode-405494 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (41.47s)

TestMultiNode/serial/MultiNodeLabels (0.07s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:211: (dbg) Run:  kubectl --context multinode-405494 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.07s)

TestMultiNode/serial/ProfileList (0.23s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:133: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.23s)

TestMultiNode/serial/CopyFile (8.07s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:174: (dbg) Run:  out/minikube-linux-amd64 -p multinode-405494 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-405494 cp testdata/cp-test.txt multinode-405494:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-405494 ssh -n multinode-405494 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-405494 cp multinode-405494:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2786900052/001/cp-test_multinode-405494.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-405494 ssh -n multinode-405494 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-405494 cp multinode-405494:/home/docker/cp-test.txt multinode-405494-m02:/home/docker/cp-test_multinode-405494_multinode-405494-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-405494 ssh -n multinode-405494 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-405494 ssh -n multinode-405494-m02 "sudo cat /home/docker/cp-test_multinode-405494_multinode-405494-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-405494 cp multinode-405494:/home/docker/cp-test.txt multinode-405494-m03:/home/docker/cp-test_multinode-405494_multinode-405494-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-405494 ssh -n multinode-405494 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-405494 ssh -n multinode-405494-m03 "sudo cat /home/docker/cp-test_multinode-405494_multinode-405494-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-405494 cp testdata/cp-test.txt multinode-405494-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-405494 ssh -n multinode-405494-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-405494 cp multinode-405494-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2786900052/001/cp-test_multinode-405494-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-405494 ssh -n multinode-405494-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-405494 cp multinode-405494-m02:/home/docker/cp-test.txt multinode-405494:/home/docker/cp-test_multinode-405494-m02_multinode-405494.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-405494 ssh -n multinode-405494-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-405494 ssh -n multinode-405494 "sudo cat /home/docker/cp-test_multinode-405494-m02_multinode-405494.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-405494 cp multinode-405494-m02:/home/docker/cp-test.txt multinode-405494-m03:/home/docker/cp-test_multinode-405494-m02_multinode-405494-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-405494 ssh -n multinode-405494-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-405494 ssh -n multinode-405494-m03 "sudo cat /home/docker/cp-test_multinode-405494-m02_multinode-405494-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-405494 cp testdata/cp-test.txt multinode-405494-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-405494 ssh -n multinode-405494-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-405494 cp multinode-405494-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2786900052/001/cp-test_multinode-405494-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-405494 ssh -n multinode-405494-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-405494 cp multinode-405494-m03:/home/docker/cp-test.txt multinode-405494:/home/docker/cp-test_multinode-405494-m03_multinode-405494.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-405494 ssh -n multinode-405494-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-405494 ssh -n multinode-405494 "sudo cat /home/docker/cp-test_multinode-405494-m03_multinode-405494.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-405494 cp multinode-405494-m03:/home/docker/cp-test.txt multinode-405494-m02:/home/docker/cp-test_multinode-405494-m03_multinode-405494-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-405494 ssh -n multinode-405494-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-405494 ssh -n multinode-405494-m02 "sudo cat /home/docker/cp-test_multinode-405494-m03_multinode-405494-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (8.07s)

TestMultiNode/serial/StopNode (3.04s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:238: (dbg) Run:  out/minikube-linux-amd64 -p multinode-405494 node stop m03
multinode_test.go:238: (dbg) Done: out/minikube-linux-amd64 -p multinode-405494 node stop m03: (2.101370707s)
multinode_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p multinode-405494 status
multinode_test.go:244: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-405494 status: exit status 7 (472.00609ms)

-- stdout --
	multinode-405494
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-405494-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-405494-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:251: (dbg) Run:  out/minikube-linux-amd64 -p multinode-405494 status --alsologtostderr
multinode_test.go:251: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-405494 status --alsologtostderr: exit status 7 (467.450675ms)

-- stdout --
	multinode-405494
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-405494-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-405494-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0116 02:59:13.578857  490460 out.go:296] Setting OutFile to fd 1 ...
	I0116 02:59:13.579127  490460 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 02:59:13.579137  490460 out.go:309] Setting ErrFile to fd 2...
	I0116 02:59:13.579144  490460 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 02:59:13.579335  490460 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17965-468241/.minikube/bin
	I0116 02:59:13.579519  490460 out.go:303] Setting JSON to false
	I0116 02:59:13.579558  490460 mustload.go:65] Loading cluster: multinode-405494
	I0116 02:59:13.579649  490460 notify.go:220] Checking for updates...
	I0116 02:59:13.579994  490460 config.go:182] Loaded profile config "multinode-405494": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 02:59:13.580014  490460 status.go:255] checking status of multinode-405494 ...
	I0116 02:59:13.580513  490460 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 02:59:13.580595  490460 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:59:13.601243  490460 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34105
	I0116 02:59:13.601774  490460 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:59:13.602342  490460 main.go:141] libmachine: Using API Version  1
	I0116 02:59:13.602363  490460 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:59:13.602735  490460 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:59:13.602954  490460 main.go:141] libmachine: (multinode-405494) Calling .GetState
	I0116 02:59:13.604817  490460 status.go:330] multinode-405494 host status = "Running" (err=<nil>)
	I0116 02:59:13.604838  490460 host.go:66] Checking if "multinode-405494" exists ...
	I0116 02:59:13.605135  490460 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 02:59:13.605177  490460 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:59:13.620768  490460 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37943
	I0116 02:59:13.621232  490460 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:59:13.621750  490460 main.go:141] libmachine: Using API Version  1
	I0116 02:59:13.621778  490460 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:59:13.622142  490460 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:59:13.622349  490460 main.go:141] libmachine: (multinode-405494) Calling .GetIP
	I0116 02:59:13.625537  490460 main.go:141] libmachine: (multinode-405494) DBG | domain multinode-405494 has defined MAC address 52:54:00:b0:49:7b in network mk-multinode-405494
	I0116 02:59:13.625953  490460 main.go:141] libmachine: (multinode-405494) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:49:7b", ip: ""} in network mk-multinode-405494: {Iface:virbr1 ExpiryTime:2024-01-16 03:56:41 +0000 UTC Type:0 Mac:52:54:00:b0:49:7b Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:multinode-405494 Clientid:01:52:54:00:b0:49:7b}
	I0116 02:59:13.625993  490460 main.go:141] libmachine: (multinode-405494) DBG | domain multinode-405494 has defined IP address 192.168.39.70 and MAC address 52:54:00:b0:49:7b in network mk-multinode-405494
	I0116 02:59:13.626167  490460 host.go:66] Checking if "multinode-405494" exists ...
	I0116 02:59:13.626517  490460 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 02:59:13.626566  490460 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:59:13.641838  490460 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34527
	I0116 02:59:13.642348  490460 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:59:13.642900  490460 main.go:141] libmachine: Using API Version  1
	I0116 02:59:13.642942  490460 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:59:13.643318  490460 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:59:13.643518  490460 main.go:141] libmachine: (multinode-405494) Calling .DriverName
	I0116 02:59:13.643731  490460 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0116 02:59:13.643755  490460 main.go:141] libmachine: (multinode-405494) Calling .GetSSHHostname
	I0116 02:59:13.646565  490460 main.go:141] libmachine: (multinode-405494) DBG | domain multinode-405494 has defined MAC address 52:54:00:b0:49:7b in network mk-multinode-405494
	I0116 02:59:13.647003  490460 main.go:141] libmachine: (multinode-405494) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:49:7b", ip: ""} in network mk-multinode-405494: {Iface:virbr1 ExpiryTime:2024-01-16 03:56:41 +0000 UTC Type:0 Mac:52:54:00:b0:49:7b Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:multinode-405494 Clientid:01:52:54:00:b0:49:7b}
	I0116 02:59:13.647037  490460 main.go:141] libmachine: (multinode-405494) DBG | domain multinode-405494 has defined IP address 192.168.39.70 and MAC address 52:54:00:b0:49:7b in network mk-multinode-405494
	I0116 02:59:13.647164  490460 main.go:141] libmachine: (multinode-405494) Calling .GetSSHPort
	I0116 02:59:13.647346  490460 main.go:141] libmachine: (multinode-405494) Calling .GetSSHKeyPath
	I0116 02:59:13.647486  490460 main.go:141] libmachine: (multinode-405494) Calling .GetSSHUsername
	I0116 02:59:13.647617  490460 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/multinode-405494/id_rsa Username:docker}
	I0116 02:59:13.732105  490460 ssh_runner.go:195] Run: systemctl --version
	I0116 02:59:13.738013  490460 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 02:59:13.753768  490460 kubeconfig.go:92] found "multinode-405494" server: "https://192.168.39.70:8443"
	I0116 02:59:13.753805  490460 api_server.go:166] Checking apiserver status ...
	I0116 02:59:13.753856  490460 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 02:59:13.767347  490460 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1062/cgroup
	I0116 02:59:13.777582  490460 api_server.go:182] apiserver freezer: "2:freezer:/kubepods/burstable/pod04bffd1a6d3ee0aae068c41e37830c9b/crio-537fb6a84e23775f1b592ea5040f1648a20bc4b8a721687aae38b405643997cf"
	I0116 02:59:13.777656  490460 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod04bffd1a6d3ee0aae068c41e37830c9b/crio-537fb6a84e23775f1b592ea5040f1648a20bc4b8a721687aae38b405643997cf/freezer.state
	I0116 02:59:13.788613  490460 api_server.go:204] freezer state: "THAWED"
	I0116 02:59:13.788663  490460 api_server.go:253] Checking apiserver healthz at https://192.168.39.70:8443/healthz ...
	I0116 02:59:13.796186  490460 api_server.go:279] https://192.168.39.70:8443/healthz returned 200:
	ok
	I0116 02:59:13.796217  490460 status.go:421] multinode-405494 apiserver status = Running (err=<nil>)
	I0116 02:59:13.796228  490460 status.go:257] multinode-405494 status: &{Name:multinode-405494 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0116 02:59:13.796250  490460 status.go:255] checking status of multinode-405494-m02 ...
	I0116 02:59:13.796559  490460 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 02:59:13.796594  490460 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:59:13.812993  490460 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41201
	I0116 02:59:13.813668  490460 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:59:13.814302  490460 main.go:141] libmachine: Using API Version  1
	I0116 02:59:13.814336  490460 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:59:13.814760  490460 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:59:13.814983  490460 main.go:141] libmachine: (multinode-405494-m02) Calling .GetState
	I0116 02:59:13.816603  490460 status.go:330] multinode-405494-m02 host status = "Running" (err=<nil>)
	I0116 02:59:13.816626  490460 host.go:66] Checking if "multinode-405494-m02" exists ...
	I0116 02:59:13.817004  490460 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 02:59:13.817044  490460 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:59:13.832880  490460 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36103
	I0116 02:59:13.833331  490460 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:59:13.833862  490460 main.go:141] libmachine: Using API Version  1
	I0116 02:59:13.833891  490460 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:59:13.834265  490460 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:59:13.834477  490460 main.go:141] libmachine: (multinode-405494-m02) Calling .GetIP
	I0116 02:59:13.837664  490460 main.go:141] libmachine: (multinode-405494-m02) DBG | domain multinode-405494-m02 has defined MAC address 52:54:00:3c:08:8b in network mk-multinode-405494
	I0116 02:59:13.838256  490460 main.go:141] libmachine: (multinode-405494-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:08:8b", ip: ""} in network mk-multinode-405494: {Iface:virbr1 ExpiryTime:2024-01-16 03:57:47 +0000 UTC Type:0 Mac:52:54:00:3c:08:8b Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:multinode-405494-m02 Clientid:01:52:54:00:3c:08:8b}
	I0116 02:59:13.838290  490460 main.go:141] libmachine: (multinode-405494-m02) DBG | domain multinode-405494-m02 has defined IP address 192.168.39.32 and MAC address 52:54:00:3c:08:8b in network mk-multinode-405494
	I0116 02:59:13.838440  490460 host.go:66] Checking if "multinode-405494-m02" exists ...
	I0116 02:59:13.838813  490460 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 02:59:13.838861  490460 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:59:13.854657  490460 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33489
	I0116 02:59:13.855254  490460 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:59:13.855878  490460 main.go:141] libmachine: Using API Version  1
	I0116 02:59:13.855923  490460 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:59:13.856341  490460 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:59:13.856579  490460 main.go:141] libmachine: (multinode-405494-m02) Calling .DriverName
	I0116 02:59:13.856826  490460 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0116 02:59:13.856855  490460 main.go:141] libmachine: (multinode-405494-m02) Calling .GetSSHHostname
	I0116 02:59:13.860184  490460 main.go:141] libmachine: (multinode-405494-m02) DBG | domain multinode-405494-m02 has defined MAC address 52:54:00:3c:08:8b in network mk-multinode-405494
	I0116 02:59:13.860716  490460 main.go:141] libmachine: (multinode-405494-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:08:8b", ip: ""} in network mk-multinode-405494: {Iface:virbr1 ExpiryTime:2024-01-16 03:57:47 +0000 UTC Type:0 Mac:52:54:00:3c:08:8b Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:multinode-405494-m02 Clientid:01:52:54:00:3c:08:8b}
	I0116 02:59:13.860752  490460 main.go:141] libmachine: (multinode-405494-m02) DBG | domain multinode-405494-m02 has defined IP address 192.168.39.32 and MAC address 52:54:00:3c:08:8b in network mk-multinode-405494
	I0116 02:59:13.860874  490460 main.go:141] libmachine: (multinode-405494-m02) Calling .GetSSHPort
	I0116 02:59:13.861097  490460 main.go:141] libmachine: (multinode-405494-m02) Calling .GetSSHKeyPath
	I0116 02:59:13.861260  490460 main.go:141] libmachine: (multinode-405494-m02) Calling .GetSSHUsername
	I0116 02:59:13.861381  490460 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17965-468241/.minikube/machines/multinode-405494-m02/id_rsa Username:docker}
	I0116 02:59:13.947820  490460 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 02:59:13.962300  490460 status.go:257] multinode-405494-m02 status: &{Name:multinode-405494-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0116 02:59:13.962346  490460 status.go:255] checking status of multinode-405494-m03 ...
	I0116 02:59:13.962710  490460 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0116 02:59:13.962771  490460 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0116 02:59:13.980127  490460 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43079
	I0116 02:59:13.980670  490460 main.go:141] libmachine: () Calling .GetVersion
	I0116 02:59:13.981222  490460 main.go:141] libmachine: Using API Version  1
	I0116 02:59:13.981252  490460 main.go:141] libmachine: () Calling .SetConfigRaw
	I0116 02:59:13.981612  490460 main.go:141] libmachine: () Calling .GetMachineName
	I0116 02:59:13.981864  490460 main.go:141] libmachine: (multinode-405494-m03) Calling .GetState
	I0116 02:59:13.983588  490460 status.go:330] multinode-405494-m03 host status = "Stopped" (err=<nil>)
	I0116 02:59:13.983611  490460 status.go:343] host is not running, skipping remaining checks
	I0116 02:59:13.983617  490460 status.go:257] multinode-405494-m03 status: &{Name:multinode-405494-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (3.04s)
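Note: the stop-node check above reduces to a short command sequence. A minimal sketch based on the commands logged in this test (profile and node names are from this run; `minikube` stands in for the CI build at out/minikube-linux-amd64):

    # stop one worker node of a running multi-node cluster
    minikube -p multinode-405494 node stop m03
    # overall status now exits with status 7 because one host reports Stopped
    minikube -p multinode-405494 status
    minikube -p multinode-405494 status --alsologtostderr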

                                                
                                    
TestMultiNode/serial/StartAfterStop (28.51s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-405494 node start m03 --alsologtostderr
E0116 02:59:18.183613  475478 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/ingress-addon-legacy-873808/client.crt: no such file or directory
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-405494 node start m03 --alsologtostderr: (27.822249012s)
multinode_test.go:289: (dbg) Run:  out/minikube-linux-amd64 -p multinode-405494 status
multinode_test.go:303: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (28.51s)
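Note: restarting the stopped worker is the inverse of the previous step. A minimal sketch using the commands from this test (names from this run; `minikube` stands in for the CI build):

    # bring the stopped worker back, then confirm minikube and the API server agree
    minikube -p multinode-405494 node start m03 --alsologtostderr
    minikube -p multinode-405494 status
    kubectl get nodes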

                                                
                                    
TestMultiNode/serial/DeleteNode (1.52s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-405494 node delete m03
multinode_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p multinode-405494 status --alsologtostderr
multinode_test.go:452: (dbg) Run:  kubectl get nodes
multinode_test.go:460: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (1.52s)
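Note: the delete-node check pairs `minikube node delete` with a kubectl readiness query. A minimal sketch based on the commands above (names from this run; the go-template is quoted for a plain shell rather than the test harness):

    # remove the third node and re-check minikube's view of the cluster
    minikube -p multinode-405494 node delete m03
    minikube -p multinode-405494 status --alsologtostderr
    # confirm the remaining nodes report Ready=True through the API server
    kubectl get nodes
    kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}'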

                                                
                                    
TestMultiNode/serial/RestartMultiNode (447.23s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-405494 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0116 03:14:18.182873  475478 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/ingress-addon-legacy-873808/client.crt: no such file or directory
E0116 03:16:49.160696  475478 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/functional-193417/client.crt: no such file or directory
E0116 03:17:19.245834  475478 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/addons-690916/client.crt: no such file or directory
E0116 03:19:18.182861  475478 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/ingress-addon-legacy-873808/client.crt: no such file or directory
E0116 03:20:22.294641  475478 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/addons-690916/client.crt: no such file or directory
multinode_test.go:382: (dbg) Done: out/minikube-linux-amd64 start -p multinode-405494 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (7m26.633470089s)
multinode_test.go:388: (dbg) Run:  out/minikube-linux-amd64 -p multinode-405494 status --alsologtostderr
multinode_test.go:402: (dbg) Run:  kubectl get nodes
multinode_test.go:410: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (447.23s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (51.56s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:471: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-405494
multinode_test.go:480: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-405494-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:480: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-405494-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (84.61062ms)

                                                
                                                
-- stdout --
	* [multinode-405494-m02] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17965
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17965-468241/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17965-468241/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-405494-m02' is duplicated with machine name 'multinode-405494-m02' in profile 'multinode-405494'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:488: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-405494-m03 --driver=kvm2  --container-runtime=crio
E0116 03:21:49.160631  475478 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/functional-193417/client.crt: no such file or directory
multinode_test.go:488: (dbg) Done: out/minikube-linux-amd64 start -p multinode-405494-m03 --driver=kvm2  --container-runtime=crio: (50.474337408s)
multinode_test.go:495: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-405494
multinode_test.go:495: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-405494: exit status 80 (261.974335ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-405494
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-405494-m03 already exists in multinode-405494-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:500: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-405494-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (51.56s)
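Note: the name-conflict behaviour above can be triggered by hand. A minimal sketch (profile names from this run; `minikube` stands in for the CI build):

    # multinode-405494-m02 is already the machine name of the second node in profile
    # multinode-405494, so creating a standalone profile with that name fails (exit 14, MK_USAGE)
    minikube start -p multinode-405494-m02 --driver=kvm2 --container-runtime=crio
    # a non-colliding profile name starts fine, but "node add" on the original profile then
    # fails (exit 80) because the generated node name m03 is taken by that new profile
    minikube start -p multinode-405494-m03 --driver=kvm2 --container-runtime=crio
    minikube node add -p multinode-405494
    minikube delete -p multinode-405494-m03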

                                                
                                    
TestScheduledStopUnix (116.53s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-716212 --memory=2048 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-716212 --memory=2048 --driver=kvm2  --container-runtime=crio: (44.59842724s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-716212 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-716212 -n scheduled-stop-716212
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-716212 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-716212 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-716212 -n scheduled-stop-716212
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-716212
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-716212 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0116 03:26:49.161309  475478 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/functional-193417/client.crt: no such file or directory
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-716212
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-716212: exit status 7 (87.534034ms)

                                                
                                                
-- stdout --
	scheduled-stop-716212
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-716212 -n scheduled-stop-716212
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-716212 -n scheduled-stop-716212: exit status 7 (86.894426ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-716212" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-716212
--- PASS: TestScheduledStopUnix (116.53s)
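Note: the scheduled-stop flow can be reproduced with the commands captured in this log. A minimal sketch (profile name from this run; the schedule durations mirror the ones used by the test):

    # schedule a stop 5 minutes out, then replace it with a 15s schedule
    minikube stop -p scheduled-stop-716212 --schedule 5m
    minikube stop -p scheduled-stop-716212 --schedule 15s
    # a pending scheduled stop can be cancelled before it fires
    minikube stop -p scheduled-stop-716212 --cancel-scheduled
    # once a schedule has fired, status exits with status 7 and the host reports Stopped
    minikube status -p scheduled-stop-716212 --format='{{.Host}}'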

                                                
                                    
TestRunningBinaryUpgrade (244.55s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.784354823 start -p running-upgrade-455314 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
E0116 03:27:19.246328  475478 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/addons-690916/client.crt: no such file or directory
E0116 03:27:21.229883  475478 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/ingress-addon-legacy-873808/client.crt: no such file or directory
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.784354823 start -p running-upgrade-455314 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m22.273218072s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-455314 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-455314 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m40.553944881s)
helpers_test.go:175: Cleaning up "running-upgrade-455314" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-455314
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-455314: (1.238997473s)
--- PASS: TestRunningBinaryUpgrade (244.55s)
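Note: the running-binary upgrade is two `start` invocations against the same profile with different minikube binaries. A minimal sketch (the old binary in this run was a v1.26.0 release downloaded to a temp path; it is shown here as ./minikube-v1.26.0 only for readability):

    # create the cluster with the older release and leave it running
    ./minikube-v1.26.0 start -p running-upgrade-455314 --memory=2200 --vm-driver=kvm2 --container-runtime=crio
    # re-running start on the same profile with the newer binary upgrades it in place
    minikube start -p running-upgrade-455314 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio
    minikube delete -p running-upgrade-455314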

                                                
                                    
TestKubernetesUpgrade (254.71s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-583688 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-583688 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m48.90727572s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-583688
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-583688: (3.128769618s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-583688 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-583688 status --format={{.Host}}: exit status 7 (101.782709ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-583688 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-583688 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (40.595045784s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-583688 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-583688 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-583688 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2  --container-runtime=crio: exit status 106 (109.393632ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-583688] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17965
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17965-468241/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17965-468241/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.29.0-rc.2 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-583688
	    minikube start -p kubernetes-upgrade-583688 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-5836882 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.29.0-rc.2, by running:
	    
	    minikube start -p kubernetes-upgrade-583688 --kubernetes-version=v1.29.0-rc.2
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-583688 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-583688 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m40.283744504s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-583688" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-583688
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-583688: (1.518728632s)
--- PASS: TestKubernetesUpgrade (254.71s)
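Note: the upgrade path above maps to the commands below (versions and profile name from this run); the downgrade attempt is the step expected to fail with exit status 106:

    # bring up an old cluster, stop it, then upgrade by starting with a newer version
    minikube start -p kubernetes-upgrade-583688 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2 --container-runtime=crio
    minikube stop -p kubernetes-upgrade-583688
    minikube start -p kubernetes-upgrade-583688 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --driver=kvm2 --container-runtime=crio
    # downgrading an existing cluster is refused (K8S_DOWNGRADE_UNSUPPORTED)
    minikube start -p kubernetes-upgrade-583688 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2 --container-runtime=crio
    # restarting at the already-installed newer version still succeeds
    minikube start -p kubernetes-upgrade-583688 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --driver=kvm2 --container-runtime=crio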

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-422658 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-422658 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (111.735084ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-422658] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17965
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17965-468241/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17965-468241/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)
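Note: the flag conflict checked above is a pure argument-validation case. A minimal sketch (profile name from this run):

    # --no-kubernetes and --kubernetes-version are mutually exclusive, so this exits with
    # status 14 (MK_USAGE) before any VM is created
    minikube start -p NoKubernetes-422658 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 --container-runtime=crio
    # if kubernetes-version had been set as a global config value, it could be cleared with:
    minikube config unset kubernetes-version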

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (110.8s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-422658 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-422658 --driver=kvm2  --container-runtime=crio: (1m50.51509482s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-422658 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (110.80s)

                                                
                                    
TestNetworkPlugins/group/false (5.91s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-087557 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-087557 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (152.916436ms)

                                                
                                                
-- stdout --
	* [false-087557] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17965
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17965-468241/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17965-468241/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0116 03:28:14.266571  499172 out.go:296] Setting OutFile to fd 1 ...
	I0116 03:28:14.266894  499172 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 03:28:14.266906  499172 out.go:309] Setting ErrFile to fd 2...
	I0116 03:28:14.266911  499172 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 03:28:14.267133  499172 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17965-468241/.minikube/bin
	I0116 03:28:14.267745  499172 out.go:303] Setting JSON to false
	I0116 03:28:14.268804  499172 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":15046,"bootTime":1705360648,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1048-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0116 03:28:14.268878  499172 start.go:138] virtualization: kvm guest
	I0116 03:28:14.271546  499172 out.go:177] * [false-087557] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0116 03:28:14.273444  499172 out.go:177]   - MINIKUBE_LOCATION=17965
	I0116 03:28:14.273507  499172 notify.go:220] Checking for updates...
	I0116 03:28:14.275013  499172 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0116 03:28:14.276621  499172 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17965-468241/kubeconfig
	I0116 03:28:14.278221  499172 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17965-468241/.minikube
	I0116 03:28:14.279820  499172 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0116 03:28:14.281285  499172 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0116 03:28:14.283254  499172 config.go:182] Loaded profile config "NoKubernetes-422658": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 03:28:14.283364  499172 config.go:182] Loaded profile config "offline-crio-431037": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 03:28:14.283437  499172 config.go:182] Loaded profile config "running-upgrade-455314": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0116 03:28:14.283524  499172 driver.go:392] Setting default libvirt URI to qemu:///system
	I0116 03:28:14.330138  499172 out.go:177] * Using the kvm2 driver based on user configuration
	I0116 03:28:14.332878  499172 start.go:298] selected driver: kvm2
	I0116 03:28:14.332900  499172 start.go:902] validating driver "kvm2" against <nil>
	I0116 03:28:14.332918  499172 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0116 03:28:14.335612  499172 out.go:177] 
	W0116 03:28:14.337210  499172 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0116 03:28:14.338766  499172 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-087557 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-087557

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-087557

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-087557

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-087557

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-087557

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-087557

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-087557

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-087557

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-087557

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-087557

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-087557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-087557"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-087557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-087557"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-087557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-087557"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-087557

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-087557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-087557"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-087557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-087557"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-087557" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-087557" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-087557" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-087557" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-087557" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-087557" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-087557" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-087557" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-087557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-087557"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-087557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-087557"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-087557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-087557"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-087557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-087557"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-087557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-087557"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-087557" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-087557" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-087557" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-087557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-087557"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-087557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-087557"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-087557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-087557"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-087557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-087557"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-087557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-087557"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-087557

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-087557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-087557"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-087557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-087557"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-087557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-087557"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-087557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-087557"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-087557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-087557"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-087557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-087557"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-087557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-087557"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-087557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-087557"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-087557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-087557"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-087557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-087557"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-087557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-087557"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-087557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-087557"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-087557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-087557"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-087557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-087557"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-087557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-087557"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-087557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-087557"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-087557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-087557"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-087557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-087557"

                                                
                                                
----------------------- debugLogs end: false-087557 [took: 5.581425966s] --------------------------------
helpers_test.go:175: Cleaning up "false-087557" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-087557
--- PASS: TestNetworkPlugins/group/false (5.91s)
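Note: the repeated "Profile not found" lines in the debugLogs dump above are expected here; the false-087557 profile never had a running cluster behind it in this run (the group passes in under six seconds without provisioning a VM), so every host-side query short-circuits. Against a live profile, the same runtime state can be pulled by hand; the commands below are an illustrative sketch of manual equivalents, not the exact collector, with <profile> as a placeholder.
minikube ssh -p <profile> "sudo systemctl status crio --no-pager"        # crio daemon status
minikube ssh -p <profile> "sudo ls -la /etc/crio"                        # /etc/crio contents
minikube ssh -p <profile> "sudo crio config"                             # effective crio config
minikube ssh -p <profile> "sudo systemctl status containerd --no-pager"  # containerd daemon status
minikube ssh -p <profile> "sudo containerd config dump"                  # merged containerd config
minikube ssh -p <profile> "sudo cat /etc/containerd/config.toml"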

                                                
                                    
x
+
TestPause/serial/Start (105.63s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-647476 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-647476 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m45.632814002s)
--- PASS: TestPause/serial/Start (105.63s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (62.17s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-422658 --no-kubernetes --driver=kvm2  --container-runtime=crio
E0116 03:29:18.183266  475478 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/ingress-addon-legacy-873808/client.crt: no such file or directory
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-422658 --no-kubernetes --driver=kvm2  --container-runtime=crio: (1m0.944184836s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-422658 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-422658 status -o json: exit status 2 (300.028914ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-422658","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-422658
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (62.17s)
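Note: the non-zero exit from status is the interesting part of this step. With Kubernetes switched off, the host stays Running while kubelet and apiserver report Stopped, and minikube signals that mix through the exit code (2 in this run). A small sketch of reading the same state, with jq used purely for illustration:
out=$(out/minikube-linux-amd64 -p NoKubernetes-422658 status -o json); rc=$?
echo "status exit code: $rc"                      # 2 here: host Running, kubelet/apiserver Stopped
echo "$out" | jq -r '.Host, .Kubelet, .APIServer'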

                                                
                                    
x
+
TestNoKubernetes/serial/Start (27.45s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-422658 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-422658 --no-kubernetes --driver=kvm2  --container-runtime=crio: (27.448429556s)
--- PASS: TestNoKubernetes/serial/Start (27.45s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (54.98s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-647476 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-647476 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (54.951371269s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (54.98s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.23s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-422658 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-422658 "sudo systemctl is-active --quiet service kubelet": exit status 1 (227.58571ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.23s)
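Note: this check passes precisely because the remote command fails. With --no-kubernetes there should be no active kubelet unit, so systemctl is-active --quiet exits non-zero (status 3 in the stderr above) and minikube ssh in turn exits 1. A hedged sketch of the same probe as a standalone guard:
if out/minikube-linux-amd64 ssh -p NoKubernetes-422658 "sudo systemctl is-active --quiet service kubelet"; then
  echo "unexpected: kubelet unit is active"
  exit 1
else
  echo "ok: kubelet is not running"               # the non-zero exit from is-active is the success case here
fi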

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (30.85s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (13.061343822s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (17.78624785s)
--- PASS: TestNoKubernetes/serial/ProfileList (30.85s)

                                                
                                    
x
+
TestPause/serial/Pause (1.05s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-647476 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-amd64 pause -p pause-647476 --alsologtostderr -v=5: (1.053580279s)
--- PASS: TestPause/serial/Pause (1.05s)

                                                
                                    
x
+
TestPause/serial/VerifyStatus (0.32s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-647476 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-647476 --output=json --layout=cluster: exit status 2 (321.190181ms)

                                                
                                                
-- stdout --
	{"Name":"pause-647476","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-647476","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.32s)
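Note: exit status 2 is expected while the cluster is paused; the detail lives in the cluster-layout JSON above (cluster StatusName "Paused"/418, apiserver Paused, kubelet Stopped). A sketch that pulls out just those fields, with jq used only for illustration:
out/minikube-linux-amd64 status -p pause-647476 --output=json --layout=cluster \
  | jq '{cluster: .StatusName, apiserver: .Nodes[0].Components.apiserver.StatusName, kubelet: .Nodes[0].Components.kubelet.StatusName}'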

                                                
                                    
x
+
TestPause/serial/Unpause (0.9s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-647476 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.90s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.39s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-422658
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-422658: (1.387010005s)
--- PASS: TestNoKubernetes/serial/Stop (1.39s)

                                                
                                    
x
+
TestPause/serial/PauseAgain (1.24s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-647476 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-amd64 pause -p pause-647476 --alsologtostderr -v=5: (1.239497313s)
--- PASS: TestPause/serial/PauseAgain (1.24s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (22.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-422658 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-422658 --driver=kvm2  --container-runtime=crio: (22.099719788s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (22.10s)

                                                
                                    
x
+
TestPause/serial/DeletePaused (2.19s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-647476 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-647476 --alsologtostderr -v=5: (2.186569369s)
--- PASS: TestPause/serial/DeletePaused (2.19s)

                                                
                                    
x
+
TestPause/serial/VerifyDeletedResources (3.64s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.641258606s)
--- PASS: TestPause/serial/VerifyDeletedResources (3.64s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (0.61s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.61s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (146.97s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.1370195444 start -p stopped-upgrade-581385 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.1370195444 start -p stopped-upgrade-581385 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m9.063258518s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.1370195444 -p stopped-upgrade-581385 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.1370195444 -p stopped-upgrade-581385 stop: (2.154859004s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-581385 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-581385 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m15.750544312s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (146.97s)
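Note: the upgrade path exercised here is provision with an old release binary, stop the cluster with that same binary, then start the existing profile with the binary under test. A condensed sketch of the same sequence; the legacy-binary path below is a placeholder rather than the temp file from this run:
OLD=/path/to/minikube-v1.26.0            # placeholder for the downloaded v1.26.0 binary
NEW=out/minikube-linux-amd64
"$OLD" start -p stopped-upgrade-581385 --memory=2200 --vm-driver=kvm2 --container-runtime=crio
"$OLD" -p stopped-upgrade-581385 stop
"$NEW" start -p stopped-upgrade-581385 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio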

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.24s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-422658 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-422658 "sudo systemctl is-active --quiet service kubelet": exit status 1 (243.413ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.24s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (157.37s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-696770 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-696770 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.16.0: (2m37.371612023s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (157.37s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (1.33s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-581385
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-581385: (1.331452221s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.33s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (120.39s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-666547 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-666547 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (2m0.394703303s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (120.39s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (93.47s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-615980 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4
E0116 03:34:18.182891  475478 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/ingress-addon-legacy-873808/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-615980 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4: (1m33.474086449s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (93.47s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (10.32s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-615980 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [2d4b2613-4eff-44e7-a924-9c572255df34] Pending
helpers_test.go:344: "busybox" [2d4b2613-4eff-44e7-a924-9c572255df34] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [2d4b2613-4eff-44e7-a924-9c572255df34] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.004439928s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-615980 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.32s)
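Note: DeployApp only needs a pod named busybox carrying the integration-test=busybox label so that the readiness wait and the ulimit probe have something to target. The contents of testdata/busybox.yaml are not reproduced in this report; the manifest below is a hypothetical stand-in with the same shape:
cat <<'EOF' | kubectl --context embed-certs-615980 create -f -
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  labels:
    integration-test: busybox
spec:
  containers:
  - name: busybox
    image: busybox
    command: ["sleep", "3600"]
EOF
kubectl --context embed-certs-615980 wait --for=condition=Ready pod -l integration-test=busybox --timeout=8m
kubectl --context embed-certs-615980 exec busybox -- /bin/sh -c "ulimit -n"   # records the container's open-files limit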

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (10.35s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-666547 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [2aefa743-29a1-416e-be78-70088fafa6ae] Pending
helpers_test.go:344: "busybox" [2aefa743-29a1-416e-be78-70088fafa6ae] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [2aefa743-29a1-416e-be78-70088fafa6ae] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.004739792s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-666547 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.35s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.27s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-615980 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-615980 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.15936561s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-615980 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.27s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.23s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-666547 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-666547 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.127426046s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-666547 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.23s)
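Note: the describe call at the end is the assertion hook; it lets the test confirm that the --images/--registries overrides landed on the metrics-server Deployment. A narrower spot check, assuming only standard Deployment field paths:
kubectl --context no-preload-666547 -n kube-system get deploy metrics-server \
  -o jsonpath='{.spec.template.spec.containers[*].image}'
# should reference fake.domain and echoserver:1.4 rather than the stock metrics-server image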

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (9.41s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-696770 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [c0a0c418-327b-4386-8aba-a62bf0f20276] Pending
helpers_test.go:344: "busybox" [c0a0c418-327b-4386-8aba-a62bf0f20276] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [c0a0c418-327b-4386-8aba-a62bf0f20276] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.004216953s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-696770 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.41s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.92s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-696770 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-696770 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.92s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (64.69s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-434445 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4
E0116 03:36:49.161166  475478 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/functional-193417/client.crt: no such file or directory
E0116 03:37:02.295357  475478 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/addons-690916/client.crt: no such file or directory
E0116 03:37:19.246606  475478 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/addons-690916/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-434445 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4: (1m4.691010391s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (64.69s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.32s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-434445 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [3f347086-cbef-4c9e-b11c-1a72f9c19ae7] Pending
helpers_test.go:344: "busybox" [3f347086-cbef-4c9e-b11c-1a72f9c19ae7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [3f347086-cbef-4c9e-b11c-1a72f9c19ae7] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.005053797s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-434445 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.32s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.3s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-434445 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-434445 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.222793918s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-434445 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.30s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (693.28s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-615980 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-615980 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4: (11m32.986260062s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-615980 -n embed-certs-615980
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (693.28s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (602s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-666547 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-666547 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (10m1.701037995s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-666547 -n no-preload-666547
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (602.00s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (734.63s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-696770 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.16.0
E0116 03:39:18.182990  475478 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/ingress-addon-legacy-873808/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-696770 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.16.0: (12m14.342032721s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-696770 -n old-k8s-version-696770
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (734.63s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (528.43s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-434445 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4
E0116 03:41:32.212617  475478 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/functional-193417/client.crt: no such file or directory
E0116 03:41:49.160759  475478 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/functional-193417/client.crt: no such file or directory
E0116 03:42:19.246293  475478 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/addons-690916/client.crt: no such file or directory
E0116 03:44:01.230371  475478 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/ingress-addon-legacy-873808/client.crt: no such file or directory
E0116 03:44:18.183197  475478 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/ingress-addon-legacy-873808/client.crt: no such file or directory
E0116 03:46:49.161176  475478 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/functional-193417/client.crt: no such file or directory
E0116 03:47:19.246781  475478 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/addons-690916/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-434445 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4: (8m48.131476149s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-434445 -n default-k8s-diff-port-434445
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (528.43s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (61.19s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-889166 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-889166 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (1m1.189870613s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (61.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (87.77s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-087557 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-087557 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m27.772780552s)
--- PASS: TestNetworkPlugins/group/auto/Start (87.77s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (84.64s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-087557 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
E0116 04:04:18.182669  475478 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/ingress-addon-legacy-873808/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-087557 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m24.643338027s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (84.64s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.7s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-889166 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-889166 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.698405588s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.70s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (12.19s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-889166 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-889166 --alsologtostderr -v=3: (12.185017237s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (12.19s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.27s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-889166 -n newest-cni-889166
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-889166 -n newest-cni-889166: exit status 7 (106.123996ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-889166 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.27s)
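Note: exit status 7 from status is the stopped-host case, which the harness explicitly tolerates ("may be ok"). Enabling the dashboard addon at this point updates the profile's addon configuration and takes effect once the cluster is started again (the SecondStart below); a hedged sketch of the same two steps:
out/minikube-linux-amd64 status --format='{{.Host}}' -p newest-cni-889166 -n newest-cni-889166 \
  || echo "status exited $? (non-zero is expected while the host is Stopped)"
out/minikube-linux-amd64 addons enable dashboard -p newest-cni-889166 --images=MetricsScraper=registry.k8s.io/echoserver:1.4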

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (55.08s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-889166 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-889166 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (54.634638257s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-889166 -n newest-cni-889166
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (55.08s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-087557 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (14.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-087557 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-q5nl9" [fe743290-7631-4a75-9592-da7956ff06fa] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-q5nl9" [fe743290-7631-4a75-9592-da7956ff06fa] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 14.004959365s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (14.33s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-087557 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-087557 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-087557 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.21s)
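Note: the three connectivity probes for the auto (default CNI) network differ only in the target. DNS resolves kubernetes.default, Localhost dials the pod's own port over loopback, and HairPin goes out through the netcat Service and back to the very pod backing it, which is the path hairpin NAT has to permit. Side by side, as a sketch:
kubectl --context auto-087557 exec deployment/netcat -- nslookup kubernetes.default                  # DNS
kubectl --context auto-087557 exec deployment/netcat -- /bin/sh -c "nc -w 5 -z localhost 8080"       # loopback
kubectl --context auto-087557 exec deployment/netcat -- /bin/sh -c "nc -w 5 -z netcat 8080"          # Service back to itself (hairpin)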

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-gslkl" [f18e1619-f71f-4a6f-af5d-f8f84ce46c89] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.006494428s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
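Note: for CNI plugins that ship a controller (kindnet here, calico and flannel further down), the ControllerPod step is just a readiness wait on the plugin's pods in kube-system. The same condition expressed directly with kubectl wait, as a sketch:
kubectl --context kindnet-087557 -n kube-system wait --for=condition=Ready pod -l app=kindnet --timeout=10m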

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-087557 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (13.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-087557 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-hn8zn" [97d6d502-89c7-4118-ab1b-ad548e4e41fc] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-hn8zn" [97d6d502-89c7-4118-ab1b-ad548e4e41fc] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 13.004216633s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (13.38s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.41s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-889166 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.41s)
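Note: VerifyKubernetesImages lists what the runtime has cached and reports anything outside the expected core set; the only extra reported here is kindest/kindnetd. A rough manual equivalent (the real allow-list lives in the test, so the grep filter below is only an approximation):
out/minikube-linux-amd64 -p newest-cni-889166 image list --format=json        # the structured view the test consumes
out/minikube-linux-amd64 -p newest-cni-889166 image list | grep -Ev 'registry.k8s.io|gcr.io/k8s-minikube' || true
# anything printed by the second command (kindest/kindnetd in this run) is a non-minikube image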

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (3.33s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-889166 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-889166 -n newest-cni-889166
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-889166 -n newest-cni-889166: exit status 2 (320.600055ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-889166 -n newest-cni-889166
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-889166 -n newest-cni-889166: exit status 2 (328.247592ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-889166 --alsologtostderr -v=1
E0116 04:05:47.041977  475478 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/no-preload-666547/client.crt: no such file or directory
E0116 04:05:47.047376  475478 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/no-preload-666547/client.crt: no such file or directory
E0116 04:05:47.057785  475478 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/no-preload-666547/client.crt: no such file or directory
E0116 04:05:47.078136  475478 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/no-preload-666547/client.crt: no such file or directory
E0116 04:05:47.118510  475478 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/no-preload-666547/client.crt: no such file or directory
E0116 04:05:47.199586  475478 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/no-preload-666547/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-889166 -n newest-cni-889166
E0116 04:05:47.359754  475478 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/no-preload-666547/client.crt: no such file or directory
E0116 04:05:47.680822  475478 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/no-preload-666547/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-889166 -n newest-cni-889166
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.33s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (96.8s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-087557 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
E0116 04:05:49.601458  475478 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/no-preload-666547/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-087557 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m36.801008827s)
--- PASS: TestNetworkPlugins/group/calico/Start (96.80s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (116.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-087557 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
E0116 04:05:52.162418  475478 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/no-preload-666547/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-087557 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m56.109405135s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (116.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-087557 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-087557 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-087557 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E0116 04:05:57.283578  475478 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/no-preload-666547/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (104.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-087557 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
E0116 04:06:17.256059  475478 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/old-k8s-version-696770/client.crt: no such file or directory
E0116 04:06:27.496907  475478 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/old-k8s-version-696770/client.crt: no such file or directory
E0116 04:06:28.005097  475478 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/no-preload-666547/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-087557 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m44.41424277s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (104.41s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (126.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-087557 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
E0116 04:06:47.977745  475478 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/old-k8s-version-696770/client.crt: no such file or directory
E0116 04:06:49.160713  475478 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/functional-193417/client.crt: no such file or directory
E0116 04:07:08.966084  475478 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/no-preload-666547/client.crt: no such file or directory
E0116 04:07:19.246150  475478 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/addons-690916/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-087557 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (2m6.259546963s)
--- PASS: TestNetworkPlugins/group/flannel/Start (126.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-q4xf2" [d0983068-13dc-49d4-b9af-4ccd9cea5337] Running
E0116 04:07:28.949789  475478 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/old-k8s-version-696770/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.007360797s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-087557 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (12.52s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-087557 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-74fjc" [b0259e4a-7329-4ab4-85b8-b9f4f0dcca58] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0116 04:07:33.750206  475478 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/default-k8s-diff-port-434445/client.crt: no such file or directory
E0116 04:07:33.755580  475478 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/default-k8s-diff-port-434445/client.crt: no such file or directory
E0116 04:07:33.765990  475478 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/default-k8s-diff-port-434445/client.crt: no such file or directory
E0116 04:07:33.786358  475478 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/default-k8s-diff-port-434445/client.crt: no such file or directory
E0116 04:07:33.826685  475478 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/default-k8s-diff-port-434445/client.crt: no such file or directory
E0116 04:07:33.907372  475478 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/default-k8s-diff-port-434445/client.crt: no such file or directory
E0116 04:07:34.067905  475478 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/default-k8s-diff-port-434445/client.crt: no such file or directory
E0116 04:07:34.388903  475478 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/default-k8s-diff-port-434445/client.crt: no such file or directory
E0116 04:07:35.029223  475478 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/default-k8s-diff-port-434445/client.crt: no such file or directory
E0116 04:07:36.309446  475478 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/default-k8s-diff-port-434445/client.crt: no such file or directory
E0116 04:07:38.870328  475478 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/default-k8s-diff-port-434445/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-74fjc" [b0259e4a-7329-4ab4-85b8-b9f4f0dcca58] Running
E0116 04:07:43.991245  475478 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/default-k8s-diff-port-434445/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.247894357s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.52s)
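
Each NetCatPod step follows the same pattern: apply testdata/netcat-deployment.yaml, then poll until a pod labelled app=netcat reports Running. Below is a minimal client-go sketch of that polling loop, assuming the default kubeconfig written by minikube and its current context pointing at the profile under test (e.g. calico-087557); it is illustrative only and not the helpers_test.go code.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForLabel polls until at least one pod matching selector is Running,
// roughly what the "waiting 15m0s for pods matching ..." lines describe.
func waitForLabel(cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pods, err := cs.CoreV1().Pods(ns).List(context.TODO(), metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return err
		}
		for _, p := range pods.Items {
			if p.Status.Phase == corev1.PodRunning {
				fmt.Printf("%s is Running\n", p.Name)
				return nil
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("timed out waiting for pods matching %q", selector)
}

func main() {
	// Uses ~/.kube/config and whatever context is currently selected.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitForLabel(cs, "default", "app=netcat", 15*time.Minute); err != nil {
		panic(err)
	}
}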

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-087557 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-087557 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-087557 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-087557 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (11.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-087557 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-276p4" [dfb47ed5-bc82-4d89-b034-31f6b0f04f3c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0116 04:07:54.231961  475478 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/default-k8s-diff-port-434445/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-276p4" [dfb47ed5-bc82-4d89-b034-31f6b0f04f3c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.00534414s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.33s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-087557 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-087557 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-5dc2h" [35102514-c0d5-4489-9889-cc30cedef515] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-5dc2h" [35102514-c0d5-4489-9889-cc30cedef515] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.004588142s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-087557 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-087557 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-087557 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (78.64s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-087557 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-087557 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m18.643826334s)
--- PASS: TestNetworkPlugins/group/bridge/Start (78.64s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-087557 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.36s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-087557 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-087557 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-ng424" [3c843ab4-22ba-4cd5-b57c-5166191c1535] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.005518117s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-087557 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (11.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-087557 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-k6vnn" [6120174a-6f5a-48dc-9ac3-2cef19b09203] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0116 04:08:50.870361  475478 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/old-k8s-version-696770/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-k6vnn" [6120174a-6f5a-48dc-9ac3-2cef19b09203] Running
E0116 04:08:55.673011  475478 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/default-k8s-diff-port-434445/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.006139074s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-087557 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-087557 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-087557 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-087557 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (11.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-087557 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-jg6rk" [a4adbc60-1151-4116-a8a0-4e16143ab3ed] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-jg6rk" [a4adbc60-1151-4116-a8a0-4e16143ab3ed] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.005504597s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-087557 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-087557 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-087557 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.15s)

                                                
                                    

Test skip (39/310)

Order skipped test Duration
5 TestDownloadOnly/v1.16.0/cached-images 0
6 TestDownloadOnly/v1.16.0/binaries 0
7 TestDownloadOnly/v1.16.0/kubectl 0
14 TestDownloadOnly/v1.28.4/cached-images 0
15 TestDownloadOnly/v1.28.4/binaries 0
16 TestDownloadOnly/v1.28.4/kubectl 0
23 TestDownloadOnly/v1.29.0-rc.2/cached-images 0
24 TestDownloadOnly/v1.29.0-rc.2/binaries 0
25 TestDownloadOnly/v1.29.0-rc.2/kubectl 0
29 TestDownloadOnlyKic 0
43 TestAddons/parallel/Olm 0
56 TestDockerFlags 0
59 TestDockerEnvContainerd 0
61 TestHyperKitDriverInstallOrUpdate 0
62 TestHyperkitDriverSkipUpgrade 0
113 TestFunctional/parallel/DockerEnv 0
114 TestFunctional/parallel/PodmanEnv 0
136 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
137 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
138 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
139 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
140 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
141 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
142 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
143 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
162 TestGvisorAddon 0
163 TestImageBuild 0
196 TestKicCustomNetwork 0
197 TestKicExistingNetwork 0
198 TestKicCustomSubnet 0
199 TestKicStaticIP 0
231 TestChangeNoneUser 0
234 TestScheduledStopWindows 0
236 TestSkaffold 0
238 TestInsufficientStorage 0
242 TestMissingContainerUpgrade 0
250 TestStartStop/group/disable-driver-mounts 0.16
255 TestNetworkPlugins/group/kubenet 4.4
263 TestNetworkPlugins/group/cilium 4.93
x
+
TestDownloadOnly/v1.16.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.4/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.4/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.4/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)
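
Most of the skips in this group are simple environment gates: the test checks the driver or container runtime selected for the run and calls t.Skip when it does not match. A minimal sketch of that pattern follows, assuming a hypothetical ContainerRuntime() helper standing in for however the suite exposes the selected runtime; the real check lives in docker_test.go and is not reproduced here.

package example

import "testing"

// ContainerRuntime is a hypothetical stand-in for the suite's way of
// reporting the container runtime chosen for this run (crio in this report).
func ContainerRuntime() string { return "crio" }

func TestDockerFlagsExample(t *testing.T) {
	if ContainerRuntime() != "docker" {
		// Mirrors "skipping: only runs with docker container runtime, currently testing crio".
		t.Skipf("skipping: only runs with docker container runtime, currently testing %s", ContainerRuntime())
	}
	// docker-specific assertions would follow here.
}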

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
E0116 02:47:19.323565  475478 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-468241/.minikube/profiles/addons-690916/client.crt: no such file or directory
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.16s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-673948" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-673948
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (4.4s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:523: 
----------------------- debugLogs start: kubenet-087557 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-087557

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-087557

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-087557

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-087557

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-087557

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-087557

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-087557

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-087557

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-087557

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-087557

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-087557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-087557"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-087557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-087557"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-087557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-087557"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-087557

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-087557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-087557"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-087557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-087557"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-087557" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-087557" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-087557" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-087557" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-087557" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-087557" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-087557" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-087557" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-087557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-087557"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-087557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-087557"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-087557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-087557"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-087557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-087557"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-087557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-087557"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-087557" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-087557" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-087557" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-087557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-087557"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-087557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-087557"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-087557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-087557"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-087557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-087557"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-087557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-087557"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-087557

>>> host: docker daemon status:
* Profile "kubenet-087557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-087557"

>>> host: docker daemon config:
* Profile "kubenet-087557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-087557"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-087557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-087557"

>>> host: docker system info:
* Profile "kubenet-087557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-087557"

>>> host: cri-docker daemon status:
* Profile "kubenet-087557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-087557"

>>> host: cri-docker daemon config:
* Profile "kubenet-087557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-087557"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-087557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-087557"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-087557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-087557"

>>> host: cri-dockerd version:
* Profile "kubenet-087557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-087557"

>>> host: containerd daemon status:
* Profile "kubenet-087557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-087557"

>>> host: containerd daemon config:
* Profile "kubenet-087557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-087557"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-087557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-087557"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-087557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-087557"

>>> host: containerd config dump:
* Profile "kubenet-087557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-087557"

>>> host: crio daemon status:
* Profile "kubenet-087557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-087557"

>>> host: crio daemon config:
* Profile "kubenet-087557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-087557"

>>> host: /etc/crio:
* Profile "kubenet-087557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-087557"

>>> host: crio config:
* Profile "kubenet-087557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-087557"

----------------------- debugLogs end: kubenet-087557 [took: 4.203224747s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-087557" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-087557
--- SKIP: TestNetworkPlugins/group/kubenet (4.40s)

TestNetworkPlugins/group/cilium (4.93s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:523: 
----------------------- debugLogs start: cilium-087557 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-087557

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-087557

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-087557

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-087557

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-087557

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-087557

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-087557

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-087557

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-087557

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-087557

>>> host: /etc/nsswitch.conf:
* Profile "cilium-087557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-087557"

>>> host: /etc/hosts:
* Profile "cilium-087557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-087557"

>>> host: /etc/resolv.conf:
* Profile "cilium-087557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-087557"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-087557

>>> host: crictl pods:
* Profile "cilium-087557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-087557"

>>> host: crictl containers:
* Profile "cilium-087557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-087557"

>>> k8s: describe netcat deployment:
error: context "cilium-087557" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-087557" does not exist

>>> k8s: netcat logs:
error: context "cilium-087557" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-087557" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-087557" does not exist

>>> k8s: coredns logs:
error: context "cilium-087557" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-087557" does not exist

>>> k8s: api server logs:
error: context "cilium-087557" does not exist

>>> host: /etc/cni:
* Profile "cilium-087557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-087557"

>>> host: ip a s:
* Profile "cilium-087557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-087557"

>>> host: ip r s:
* Profile "cilium-087557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-087557"

>>> host: iptables-save:
* Profile "cilium-087557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-087557"

>>> host: iptables table nat:
* Profile "cilium-087557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-087557"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-087557

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-087557

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-087557" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-087557" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-087557

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-087557

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-087557" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-087557" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-087557" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-087557" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-087557" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-087557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-087557"

>>> host: kubelet daemon config:
* Profile "cilium-087557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-087557"

>>> k8s: kubelet logs:
* Profile "cilium-087557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-087557"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-087557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-087557"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-087557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-087557"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-087557

>>> host: docker daemon status:
* Profile "cilium-087557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-087557"

>>> host: docker daemon config:
* Profile "cilium-087557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-087557"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-087557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-087557"

>>> host: docker system info:
* Profile "cilium-087557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-087557"

>>> host: cri-docker daemon status:
* Profile "cilium-087557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-087557"

>>> host: cri-docker daemon config:
* Profile "cilium-087557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-087557"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-087557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-087557"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-087557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-087557"

>>> host: cri-dockerd version:
* Profile "cilium-087557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-087557"

>>> host: containerd daemon status:
* Profile "cilium-087557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-087557"

>>> host: containerd daemon config:
* Profile "cilium-087557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-087557"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-087557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-087557"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-087557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-087557"

>>> host: containerd config dump:
* Profile "cilium-087557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-087557"

>>> host: crio daemon status:
* Profile "cilium-087557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-087557"

>>> host: crio daemon config:
* Profile "cilium-087557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-087557"

>>> host: /etc/crio:
* Profile "cilium-087557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-087557"

>>> host: crio config:
* Profile "cilium-087557" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-087557"

----------------------- debugLogs end: cilium-087557 [took: 4.738139323s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-087557" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-087557
--- SKIP: TestNetworkPlugins/group/cilium (4.93s)
